Sample records for camera coordinate system

  1. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the location technique of beacon photogrammetry, the Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target, image pixel, world and camera coordinate systems were established. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target coordinate system and the camera coordinate system could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states the position measurement accuracy can meet the requirements of the project.
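
    The pose recovery step described above (rotation matrix and translation vector between the target and camera frames under the ideal pin-hole model) is what a Perspective-n-Point solver computes. The abstract's implementation is in MATLAB; the following is a rough, minimal Python/OpenCV analogue in which the beacon layout, pixel measurements and intrinsic matrix are all invented for illustration.

    ```python
    # Hedged sketch: recover the rotation matrix and translation vector between a
    # beacon (target) frame and a camera frame under an ideal pinhole model, using
    # OpenCV's PnP solver. All numeric values are illustrative assumptions.
    import cv2
    import numpy as np

    # Four beacon positions in the target coordinate system (metres) -- assumed layout.
    beacons_target = np.array([[-0.5, -0.3, 0.0],
                               [ 0.5, -0.3, 0.0],
                               [-0.5,  0.3, 0.0],
                               [ 0.5,  0.3, 0.0]], dtype=np.float64)

    # Corresponding pixel coordinates extracted from one CCD image (assumed measurements).
    beacons_image = np.array([[312.4, 240.1],
                              [708.9, 238.7],
                              [305.2, 482.6],
                              [715.3, 480.9]], dtype=np.float64)

    K = np.array([[1200.0, 0.0, 512.0],
                  [0.0, 1200.0, 384.0],
                  [0.0, 0.0, 1.0]])          # intrinsic matrix (assumed)
    dist = np.zeros(5)                        # ideal pin-hole: no lens distortion

    ok, rvec, tvec = cv2.solvePnP(beacons_target, beacons_image, K, dist)
    R, _ = cv2.Rodrigues(rvec)               # rotation matrix: target -> camera
    print("R =\n", R, "\nt =", tvec.ravel())
    ```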

  2. Calibration of a dual-PTZ camera system for stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

    In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes, and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be solved by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is between 0.9 and 1.1 meters.
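
    A toy sketch of the optimisation step named in the abstract: corner misalignments folded into a scalar cost and minimised with the Nelder-Mead simplex method. The projection model below is a deliberately simplified stand-in (it is not the paper's dual-PTZ model), and all numbers are invented.

    ```python
    # Hedged sketch: minimise a reprojection-style cost with the Nelder-Mead simplex
    # method, as the abstract describes. The "projection" here is a placeholder.
    import numpy as np
    from scipy.optimize import minimize

    ideal_corners = np.random.rand(20, 2) * 640          # placeholder "ideal" corner positions

    def project_corners(params):
        """Stand-in for the dual-PTZ projection model (6 extrinsic parameters)."""
        rx, ry, rz, tx, ty, tz = params
        # ... a real model would project 3D corners through both PTZ cameras ...
        return ideal_corners + np.array([tx, ty])         # simplified: pure image shift

    def cost(params):
        residual = project_corners(params) - ideal_corners
        return np.sum(residual ** 2)

    x0 = np.array([0.0, 0.0, 0.0, 5.0, -3.0, 0.0])        # deliberately offset start
    result = minimize(cost, x0, method="Nelder-Mead")
    print(result.x, result.fun)
    ```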

  3. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    NASA Astrophysics Data System (ADS)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  4. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which poses a major challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
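
    The refinement step, minimising the reprojection error of the target feature points with Levenberg–Marquardt to recover the camera-to-camera transformation, can be sketched as follows. This is not the authors' code; the projection uses OpenCV's pinhole model and all data are synthetic.

    ```python
    # Hedged sketch: Levenberg-Marquardt refinement of a rotation/translation between
    # two cameras by minimising reprojection error, with synthetic data.
    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def reprojection_residuals(x, pts3d_cam1, pts2d_cam2, K2):
        rvec, tvec = x[:3], x[3:6]
        proj, _ = cv2.projectPoints(pts3d_cam1, rvec, tvec, K2, None)
        return (proj.reshape(-1, 2) - pts2d_cam2).ravel()

    # Assumed inputs: target points expressed in camera 1's frame and their
    # observed pixel positions in camera 2.
    pts3d_cam1 = np.random.rand(10, 3) + np.array([0.0, 0.0, 2.0])
    K2 = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1.0]])
    true_rvec, true_tvec = np.array([0.0, 0.1, 0.0]), np.array([0.5, 0.0, 0.0])
    pts2d_cam2, _ = cv2.projectPoints(pts3d_cam1, true_rvec, true_tvec, K2, None)
    pts2d_cam2 = pts2d_cam2.reshape(-1, 2)

    sol = least_squares(reprojection_residuals, np.zeros(6), method="lm",
                        args=(pts3d_cam1, pts2d_cam2, K2))
    R_opt, _ = cv2.Rodrigues(sol.x[:3])
    print("Refined R:\n", R_opt, "\nt:", sol.x[3:6])
    ```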

  5. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Existing compensation methods based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measurement is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor; the orientation camera measures the global control points to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach places control points on the vision sensor and two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are measured and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with the single-camera approach and 0.031 mm with the dual-camera approach. The conclusion is that the single-camera algorithm needs further improvement for higher accuracy, while the accuracy of the dual-camera method is already adequate.

  6. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  7. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  8. Measurement of reach envelopes with a four-camera Selective Spot Recognition (SELSPOT) system

    NASA Technical Reports Server (NTRS)

    Stramler, J. H., Jr.; Woolford, B. J.

    1983-01-01

    The basic Selective Spot Recognition (SELSPOT) system is essentially a system which uses infrared LEDs and a 'camera' with an infrared-sensitive photodetector, a focusing lens, and some A/D electronics to produce a digital output representing an X and Y coordinate for each LED for each camera. When the data are synthesized across all cameras with appropriate calibrations, an XYZ set of coordinates is obtained for each LED at a given point in time. Attention is given to the operating modes, a system checkout, and reach envelopes and software. The Video Recording Adapter (VRA) represents the main addition to the basic SELSPOT system. The VRA contains a microprocessor and other electronics which permit user selection of several options and some interaction with the system.

  9. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.

  10. Coordinating High-Resolution Traffic Cameras : Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  11. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for a binocular stereo vision system based on a multi-view template and alternative bundle adjustment is presented in this paper. The proposed method is carried out by taking several photos of a specially designed calibration template that has diverse encoded points in different orientations. In this paper, the method utilized the existing algorithm used for monocular camera calibration to obtain the initialization, which involves a camera model including radial lens distortion and tangential distortion. We created a reference coordinate system based on the left camera coordinate to optimize the intrinsic parameters of the left camera through alternative bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when we create a reference coordinate system based on the right camera coordinate. We also used all intrinsic parameters that were acquired to optimize the extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.

  12. A three dimensional point cloud registration method based on rotation matrix eigenvalue

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Zhou, Xiang; Fei, Zixuan; Gao, Xiaofei; Jin, Rui

    2017-09-01

    In traditional optical three-dimensional measurement, an object usually has to be measured from multiple angles because of occlusion, and point cloud registration methods are then used to obtain the complete three-dimensional shape of the object. Turntable-based point cloud registration requires the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. The traditional method calculates this matrix by fitting the rotation center and rotation axis of the turntable, which is limited by the measurement field of view. The feature points used for fitting the rotation center and rotation axis are distributed over an arc of less than about 120 degrees, resulting in low fitting accuracy. In this paper, we propose a better method, based on the invariance of the rotation matrix eigenvalues in the turntable coordinate system and on the coordinate transformation matrix of corresponding points. First, the rotation angle of the calibration plate is controlled with the turntable, and the coordinate transformation matrix of the corresponding points is calibrated by the least squares method. Eigendecomposition is then used to calculate the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. Compared with the traditional method, it has higher accuracy and better robustness and is not affected by the camera field of view. In this method, the coincidence error of the corresponding points on the calibration plate after registration is less than 0.1 mm.
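
    The linear-algebra fact the method leans on is that a 3D rotation matrix always has an eigenvalue equal to 1, whose eigenvector is the rotation axis. A minimal numeric sketch with synthetic data (not the paper's calibration pipeline) is shown below.

    ```python
    # Hedged sketch: a 3-D rotation matrix has eigenvalue 1, and the corresponding
    # eigenvector is the rotation axis. Given the rotation of the calibration plate
    # between two turntable stops (expressed in the camera frame), the turntable
    # axis in camera coordinates can be read off that eigenvector.
    import numpy as np

    def rotation_about_axis(axis, angle):
        """Rodrigues formula: rotation about a unit axis by angle (radians)."""
        axis = axis / np.linalg.norm(axis)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    # Synthetic "measured" plate rotation between two turntable positions.
    true_axis = np.array([0.1, 0.95, 0.3])
    R = rotation_about_axis(true_axis, np.deg2rad(30))

    # The eigenvector whose eigenvalue is (closest to) 1 is the rotation axis.
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    angle = np.arccos((np.trace(R) - 1.0) / 2.0)
    print("recovered axis:", axis / np.linalg.norm(axis), "angle (deg):", np.degrees(angle))
    ```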

  13. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, considering the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, from which the corresponding epipolar equations of the two cameras can be defined. Using the trigonometric parallax method, the spatial point position can be measured after distortion correction, and stereo matching calibration between corresponding image points can be achieved. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration procedure is simple and low-cost and simplifies regular maintenance work. It can acquire 3D coordinates using only planar checkerboard calibration, without designing a specific standard target or using an electronic theodolite. It is found in the experiments that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line laser scanning and binocular stereo vision, has the advantages of both and is more flexible in application. Theoretical analysis and experiments show that the method is reasonable.

  14. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
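
    Once a particle centroid has been matched across the two views, its 3D position follows from standard two-view triangulation. A minimal sketch with invented projection matrices and pixel coordinates (the patent's own calibration and matching steps are not reproduced here):

    ```python
    # Hedged sketch: triangulate a matched tracer-particle centroid from two
    # calibrated cameras placed roughly perpendicular to each other.
    import numpy as np
    import cv2

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
    # Camera 1 at the origin; camera 2 rotated ~90 degrees about the vertical axis.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    R2 = cv2.Rodrigues(np.array([0.0, -np.pi / 2, 0.0]))[0]
    t2 = np.array([[1.0], [0.0], [1.0]])
    P2 = K @ np.hstack([R2, t2])

    pt1 = np.array([[352.0], [260.0]])      # particle centroid in camera 1 (pixels)
    pt2 = np.array([[298.0], [255.0]])      # matched centroid in camera 2 (pixels)

    X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
    X = (X_h[:3] / X_h[3]).ravel()                  # 3-D world coordinates
    print("particle position:", X)
    ```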

  15. Measurement system for 3-D foot coordinates and parameters

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Yunhui; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-12-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle and a model of the measurement system are presented. Errors caused by the nonlinearity of the CCD cameras and by installation can be eliminated by using a global calibration method for the CCD cameras based on a nonlinear coordinate mapping function and an optimization method. A local foot coordinate system is defined with the Pternion and the Acropodion extracted from the boundaries of foot projections. Characteristic points can thus be located and foot parameters extracted automatically using the local foot coordinate system and the related sections. Foot measurements for about 200 participants were conducted and the measurement results for male and female participants were presented. Measurement of 3-D foot coordinates and parameters makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers.

  16. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
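
    For orientation, the quantity being estimated is the screen-camera homography. The sketch below shows the conventional estimate for the case where the screen coordinates of some projected points are known; the paper's contribution is precisely to avoid needing that knowledge, so this is only a reference point, with invented values.

    ```python
    # Hedged sketch: estimating a plane-to-image homography from known point
    # correspondences (the conventional case, NOT the paper's autocalibration).
    import numpy as np
    import cv2

    screen_pts = np.array([[0, 0], [1.0, 0], [1.0, 0.75], [0, 0.75], [0.5, 0.4]],
                          dtype=np.float32)              # metres on the screen plane
    image_pts = np.array([[102, 88], [540, 95], [530, 410], [110, 400], [320, 250]],
                         dtype=np.float32)               # pixels in the camera image

    H, _ = cv2.findHomography(screen_pts, image_pts, method=0)
    print("screen -> image homography:\n", H)
    ```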

  17. Localization and Mapping Using a Non-Central Catadioptric Camera System

    NASA Astrophysics Data System (ADS)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system which consists of a mirror and a camera. Any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera to build a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved location and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.

  18. Using the auxiliary camera for system calibration of 3D measurement by digital speckle

    NASA Astrophysics Data System (ADS)

    Xue, Junpeng; Su, Xianyu; Zhang, Qican

    2014-06-01

    3D shape measurement by digital speckle temporal sequence correlation has drawn a lot of attention because of its advantages; however, the measurement mainly yields the depth z-coordinate, while the horizontal physical coordinates (x, y) are usually given only as image pixel coordinates. In this paper, a new approach for the system calibration is proposed. With an auxiliary camera we set up a temporary binocular vision system, which is used to calibrate the horizontal coordinates (in mm) while the temporal sequence reference speckle sets are calibrated. First, the binocular vision system is calibrated using the traditional method. Then digital speckles are projected onto the reference plane, which is moved in equal steps along the depth direction, and temporal sequence speckle images are acquired with the camera as reference sets. When the reference plane is at the first and final positions, crossed fringe patterns are projected onto the plane. The pixel coordinates of control points are extracted from the images by Fourier analysis, and their physical coordinates are calculated by binocular vision. The physical coordinates corresponding to each pixel of the images are then calculated by an interpolation algorithm. Finally, the x and y corresponding to an arbitrary depth value z are obtained from a geometric formula. Experiments show that the method can quickly and flexibly measure the 3D shape of an object as a point cloud.
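
    The per-pixel horizontal-coordinate calibration can be sketched as scattered-data interpolation from the control points whose physical (x, y) positions were measured by the temporary binocular system. The control points and image size below are invented, and griddata stands in for whatever interpolation the authors actually used.

    ```python
    # Hedged sketch: interpolate physical (x, y) in mm for every pixel from a few
    # control points with known pixel and physical coordinates.
    import numpy as np
    from scipy.interpolate import griddata

    # Assumed control points: pixel positions and their measured physical (x, y) in mm.
    pix = np.array([[100, 100], [500, 100], [100, 400], [500, 400], [300, 250]], float)
    phys = np.array([[-80.0, -60.0], [80.0, -60.0], [-80.0, 60.0], [80.0, 60.0], [0.0, 0.0]])

    # Physical coordinates for every pixel of an assumed 640x480 image.
    u, v = np.meshgrid(np.arange(640), np.arange(480))
    x_mm = griddata(pix, phys[:, 0], (u, v), method="linear")
    y_mm = griddata(pix, phys[:, 1], (u, v), method="linear")
    print(x_mm[250, 300], y_mm[250, 300])   # physical coordinates at pixel (300, 250)
    ```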

  19. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates owing to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, with the relative weights obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.
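
    A rough sketch of the resampling idea: sub-pixel bilinear interpolation carried out in polar coordinates to map a ring/sector (retina-like) pixel layout onto a Cartesian display grid. The sensor geometry is invented, and this is not the authors' VC++/MIL/OpenCV implementation.

    ```python
    # Hedged sketch: resample a polar (ring, sector) pixel layout onto a Cartesian
    # grid with sub-pixel bilinear interpolation computed in polar coordinates.
    import numpy as np

    def polar_to_cartesian(polar_img, out_size=256):
        """polar_img[r_ring, theta_bin] -> square Cartesian image."""
        n_rings, n_sectors = polar_img.shape
        out = np.zeros((out_size, out_size))
        c = (out_size - 1) / 2.0
        for yi in range(out_size):
            for xi in range(out_size):
                dx, dy = xi - c, yi - c
                r = np.hypot(dx, dy) / c * (n_rings - 1)          # fractional ring index
                t = (np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi) * n_sectors
                if r >= n_rings - 1:
                    continue                                      # outside the sensor
                r0, t0 = int(r), int(t) % n_sectors
                fr, ft = r - int(r), t - int(t)
                t1 = (t0 + 1) % n_sectors                         # wrap around in angle
                # bilinear weights on the (ring, sector) grid
                out[yi, xi] = ((1 - fr) * (1 - ft) * polar_img[r0, t0]
                               + (1 - fr) * ft * polar_img[r0, t1]
                               + fr * (1 - ft) * polar_img[r0 + 1, t0]
                               + fr * ft * polar_img[r0 + 1, t1])
        return out

    demo = polar_to_cartesian(np.random.rand(64, 128))
    ```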

  20. A versatile photogrammetric camera automatic calibration suite for multispectral fusion and optical helmet tracking

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason; Jermy, Robert; Nicolls, Fred

    2014-06-01

    This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb line method, allows many radial and tangential distortion coefficients and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low order models despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted to undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
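
    The distortion model referred to is Brown's radial/tangential model. A minimal forward-mapping sketch with the common low-order terms is shown below; the system described above fits more coefficients (typically 5 radial and 3 tangential) and also models the inverse mapping.

    ```python
    # Hedged sketch: forward (undistorted -> distorted) Brown model with the common
    # k1-k3 radial and p1-p2 tangential terms, on normalised image coordinates.
    import numpy as np

    def brown_distort(xu, yu, k=(0.0,) * 3, p=(0.0,) * 2, cx=0.0, cy=0.0):
        """Apply radial/tangential distortion about the principal point (cx, cy)."""
        x, y = xu - cx, yu - cy
        r2 = x * x + y * y
        radial = 1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
        dx = 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
        dy = p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
        return cx + x * radial + dx, cy + y * radial + dy

    # The inverse (distorted -> undistorted) mapping has no closed form and is
    # usually obtained by fixed-point iteration or by fitting a second polynomial,
    # which is why such systems store coefficients for both directions.
    xd, yd = brown_distort(0.31, -0.18, k=(-0.12, 0.03, 0.0), p=(1e-3, -5e-4))
    print(xd, yd)
    ```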

  1. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display from both cameras.

  2. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

  3. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, fast and accurate camera positioning on an FPGA has become feasible. Through an in-depth study of embedded hardware and of a dual-camera positioning system, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The work includes: (1) a CMOS sensor, driven through an FPGA hardware driver, is used to extract the pixels of the target points; visible-light LEDs are used here as the target points of the instrument. (2) Prior to extraction of the feature point coordinates, the image is filtered (median filtering is used here) to suppress noise that would otherwise affect the system. (3) Marker point coordinates are extracted by the FPGA hardware circuit, using a new iterative threshold selection method for image segmentation; the binary image is then labeled, and the coordinates of the feature points are calculated by the center-of-gravity method. (4) The direct linear transformation (DLT) and epipolar constraints are applied to three-dimensional reconstruction of space coordinates from the planar-array CMOS system. The SOPC (system on a chip) takes advantage of its dual-core computing system, letting the matching and coordinate operations run separately and thus increasing the processing speed.
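
    The center-of-gravity step in item (3) reduces to an intensity-weighted centroid over thresholded pixels. A minimal software sketch is shown below; the thesis implements this in FPGA hardware, and the synthetic image here is purely illustrative.

    ```python
    # Hedged sketch: intensity-weighted centre of gravity of an LED spot above a
    # threshold, giving sub-pixel marker coordinates.
    import numpy as np

    def centroid(gray, threshold):
        """Intensity-weighted centre of gravity of pixels above a threshold."""
        mask = gray > threshold
        w = gray * mask
        total = w.sum()
        ys, xs = np.indices(gray.shape)
        return (xs * w).sum() / total, (ys * w).sum() / total   # (x, y) in pixels

    img = np.zeros((480, 640))
    img[238:243, 318:324] = 200.0            # a bright LED spot (synthetic)
    print(centroid(img, threshold=100))      # ~ (320.5, 240.0)
    ```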

  4. Development of a table tennis robot for ball interception using visual feedback

    NASA Astrophysics Data System (ADS)

    Parnichkun, Manukid; Thalagoda, Janitha A.

    2016-07-01

    This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within the bounded limit. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the image processing time for the web cameras is long, it is not possible to intercept the table tennis ball using image processing alone; therefore a projectile motion model is employed to predict the final destination of the ball.

  5. Study on portable optical 3D coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

    A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three highly stable infrared LEDs are set on a hand-held target to provide measurement features and establish the target coordinate system. Ray-intersection-based on-site orientation calibration is performed for the intersecting binocular measurement system composed of two cameras, using a reference ruler. The hand-held target, controlled by Bluetooth wireless communication, is moved freely to implement contact measurement. The position of the ceramic contact ball is accurately pre-calibrated. The coordinates of the target feature points are obtained with the binocular stereo vision model from the stereo image pair taken by the cameras. Combining radius compensation of the contact ball and residual error correction, the object point can be resolved by transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-site large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and satisfactory automation. Tests show that the measuring precision is close to +/-0.1 mm/m.

  6. A new method to acquire 3-D images of a dental cast

    NASA Astrophysics Data System (ADS)

    Li, Zhongke; Yi, Yaxing; Zhu, Zhen; Li, Hua; Qin, Yongyuan

    2006-01-01

    This paper introduces our newly developed method to acquire three-dimensional images of a dental cast. A rotatable table, a laser knife, a mirror, a CCD camera and a personal computer make up the three-dimensional data acquisition system. A dental cast is placed on the table; the mirror is installed beside the table; a line laser is projected onto the dental cast; the CCD camera is mounted above the dental cast so that it can image both the cast and its reflection in the mirror; while the table rotates, the camera records the shape of the laser streak projected on the dental cast and transmits the data to the computer. After the table has rotated one full turn, the computer processes the data and calculates the three-dimensional coordinates of the dental cast's surface. In the data processing procedure, artificial neural networks are employed to calibrate the lens distortion and to map coordinates from the screen coordinate system to the world coordinate system. According to the three-dimensional coordinates, the computer reconstructs the stereo image of the dental cast. This is essential for computer-aided diagnosis and treatment planning in orthodontics. Compared with other systems in service, for example laser-beam three-dimensional scanning systems, this three-dimensional data acquisition system has the following characteristics: a. speed, it takes only 1 minute to scan a dental cast; b. compactness, the machinery is simple and compact; c. no blind zone, since a mirror is introduced to reduce the blind zone.

  7. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs of the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to accuracy of the order of millimeters over distances of the orders of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets also can be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of corresponding image-processing filters and targets, the vision-based position- measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.

  8. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  9. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  10. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  11. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  12. [Method for evaluating the positional accuracy of a six-degrees-of-freedom radiotherapy couch using high definition digital cameras].

    PubMed

    Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori

    2011-01-01

    In this study, we proposed and evaluated a positional accuracy assessment method with two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle that was fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation about each axis. The coordinates of the needle in the pictures were obtained using manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.

  13. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
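
    The pairwise alignment step is an orthogonal Procrustes (Kabsch) fit between the 3D head/feet positions expressed in two camera coordinate systems. A minimal sketch with synthetic correspondences is given below; the paper's RANSAC wrapper and reprojection-error refinement are omitted.

    ```python
    # Hedged sketch: rigid alignment (R, t) between corresponding 3-D point sets via
    # the orthogonal Procrustes / Kabsch solution, with synthetic data.
    import numpy as np

    def procrustes_rigid(A, B):
        """Find R, t such that B ~= A @ R.T + t for corresponding Nx3 point sets."""
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        return R, cb - R @ ca

    # Synthetic check: the same head/feet points seen in two camera frames.
    rng = np.random.default_rng(0)
    A = rng.random((12, 3)) * 4
    theta = np.deg2rad(25)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    t_true = np.array([1.0, -0.5, 2.0])
    B = A @ R_true.T + t_true

    R, t = procrustes_rigid(A, B)
    print(np.allclose(R, R_true), np.allclose(t, t_true))
    ```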

  14. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.

  15. The guidance methodology of a new automatic guided laser theodolite system

    NASA Astrophysics Data System (ADS)

    Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua

    2008-12-01

    Spatial coordinate measurement systems such as theodolites, laser trackers and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the needs of online industrial measurement; moreover, laser trackers and total stations need reflective targets, so they cannot realize noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic and noncontact measurement with high precision and efficiency. It is comprised of two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites to accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information of the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated so that the laser motorized theodolites can move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, and the emphasis is laid on the guidance methodology for moving the laser points from the theodolites towards the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. From the field-of-view angle of the vision system unit and the world coordinates of the control and guidance system obtained through coordinate transformation, the azimuth information of the measurement area that the camera points at can be attained. The momentary horizontal and vertical changes of the mechanical displacement movement are also considered and calculated to provide real-time azimuth information of the pointed measurement area, by which the motorized theodolite moves accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, which accelerates the measuring process and replaces manual operation with approximate guidance. The simulation results show that the proposed method of automatic guidance is effective and feasible and provides good tracking performance for the predetermined location of the laser points.

  16. A practical indoor context-aware surveillance system with multi-Kinect sensors

    NASA Astrophysics Data System (ADS)

    Jia, Lili; You, Ying; Li, Tiezhu; Zhang, Shun

    2014-11-01

    In this paper we develop a novel practical application that gives scalable services to end users when abnormal activities are happening. The architecture of the application is presented, consisting of networked infrared cameras and a communication module. In this intelligent surveillance system we use Kinect sensors as the input cameras; the Kinect is an infrared laser camera whose raw infrared sensor stream is accessible to the user. Several Kinect sensors are installed in one room to track human skeletons. Each sensor returns body positions as 15 joint coordinates in its own coordinate system, and calibration algorithms transform all body position points into one unified coordinate system. From the body position points we can infer the surveillance context. Furthermore, messages from the metadata index matrix are sent to a mobile phone through the communication module, so the user is instantly aware of an abnormal event in the room without having to check the website. In conclusion, theoretical analysis and experimental results in this paper show that the proposed system is reasonable and efficient. The application is intended not only to discourage criminals and assist police in the apprehension of suspects, but also to enable end users to monitor indoor environments anywhere and anytime from their phones.

  17. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer- aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.

  18. Photogrammetry Tool for Forensic Analysis

    NASA Technical Reports Server (NTRS)

    Lane, John

    2012-01-01

    A system allows crime scene and accident scene investigators the ability to acquire visual scene data using cameras for processing at a later time. This system uses a COTS digital camera, a photogrammetry calibration cube, and 3D photogrammetry processing software. In a previous instrument developed by NASA, the laser scaling device made use of parallel laser beams to provide a photogrammetry solution in 2D. This device and associated software work well under certain conditions. In order to make use of a full 3D photogrammetry system, a different approach was needed. When using multiple cubes, whose locations relative to each other are unknown, a procedure that would merge the data from each cube would be as follows: 1. One marks a reference point on cube 1, then marks points on cube 2 as unknowns. This locates cube 2 in cube 1's coordinate system. 2. One marks reference points on cube 2, then marks points on cube 1 as unknowns. This locates cube 1 in cube 2's coordinate system. 3. This procedure is continued for all combinations of cubes. 4. The coordinates of all of the found coordinate systems are then merged into a single global coordinate system. In order to achieve maximum accuracy, measurements are done in one of two ways, depending on scale: when measuring the size of objects, the coordinate system corresponding to the nearest cube is used, or when measuring the location of objects relative to a global coordinate system, a merged coordinate system is used. Presently, traffic accident analysis is time-consuming and not very accurate. Using cubes with differential GPS would give absolute positions of cubes in the accident area, so that individual cubes would provide local photogrammetry calibration to objects near a cube.

  19. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero-gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.

  20. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors that match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator's internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator, since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration are determined and stored for each operator performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces, and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  1. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    PubMed

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement on the basis of structured light plays a significant role in optical inspection research. The 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points that are randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions of the 3D laser points are solved by the homogeneous linear equations generated from the projection invariants. The optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions of the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of the image quantity, the lens distortion and noise are investigated in the experiments, which demonstrate that the reconstruction approach is effective for accurate measurement in the measurement system.

  2. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen. They are located at the four corners and in the middle of the top and the bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. A simulator is furnished with one camera, which is used to obtain the image of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera based on a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system under a perspective transformation. The center point of the image of the LEDs is taken to be the virtual shooting point. The perspective transformation matrix is applied to the coordinate of the center point, and we obtain the virtual shooting point's coordinate in the world coordinate system. When a game player shoots a target about two meters away, using the method discussed in this paper, the calculated coordinate error is less than 10 mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system. This shows that the algorithm is reliable and effective.
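    Because the six LEDs lie in the plane of the screen, the mapping from image coordinates to screen coordinates can be illustrated with a plane homography. The sketch below (OpenCV, with made-up pixel and screen coordinates) is a stand-in for the perspective-transformation step described above, not the authors' exact formulation.

        import numpy as np
        import cv2

        # Image positions of the six LED spots (pixels) and their known screen positions (mm).
        led_img = np.array([[102, 88], [511, 80], [918, 95],
                            [110, 630], [515, 640], [910, 625]], dtype=np.float32)
        led_scr = np.array([[0, 0], [400, 0], [800, 0],
                            [0, 450], [400, 450], [800, 450]], dtype=np.float32)

        H, _ = cv2.findHomography(led_img, led_scr)
        aim_img = np.array([[[512.0, 384.0]]], dtype=np.float32)   # image centre as aiming point
        aim_scr = cv2.perspectiveTransform(aim_img, H)
        print("hit point on screen (mm):", aim_scr.ravel())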

  3. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line-following algorithm images two windows and locates their centroids; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.

  4. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' track and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking and in the automatic collection of biometric data such as a person's face clip for recognition purposes.

  5. Fisheye camera around view monitoring system

    NASA Astrophysics Data System (ADS)

    Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen

    2018-04-01

    The 360-degree around view monitoring system is a key technology of advanced driver assistance systems; it assists the driver by covering the blind area and has high application value. In this paper, we study the transformation relationships among multiple coordinate systems in order to generate a panoramic image in a unified car coordinate system. Firstly, the panoramic image is divided into four regions. Using the parameters obtained by calibration, the pixels of the four fisheye images corresponding to the four sub-regions are mapped to the constructed panoramic image. On the basis of the 2D around view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around view schemes in the unified coordinate system; the 3D scheme overcomes shortcomings of the traditional 2D scheme, such as a small visual field and prominent deformation of ground objects. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image.

  6. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach of using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. As the cameras use fisheye lenses, they cannot be well described by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a unique calibration method for fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparing with results from a commercially available Vicon motion capture system.

  7. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system places its cameras along a combined power and data wire, forming a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  8. FieldSAFE: Dataset for Obstacle Detection in Agriculture.

    PubMed

    Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm

    2017-11-09

    In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.

  9. FieldSAFE: Dataset for Obstacle Detection in Agriculture

    PubMed Central

    Christiansen, Peter; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik

    2017-01-01

    In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates. PMID:29120383

  10. Geodetic results from ISAGEX data. [For obtaining center of mass coordinates for geodetic camera sites]

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Douglas, B. C.; Walls, D. M.

    1974-01-01

    Laser and camera data taken during the International Satellite Geodesy Experiment (ISAGEX) were used in dynamical solutions to obtain center-of-mass coordinates for the Astro-Soviet camera sites at Helwan, Egypt, and Oulan Bator, Mongolia, as well as the East European camera sites at Potsdam, German Democratic Republic, and Ondrejov, Czechoslovakia. The results are accurate to about 20m in each coordinate. The orbit of PEOLE (i=15) was also determined from ISAGEX data. Mean Kepler elements suitable for geodynamic investigations are presented.

  11. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.

  12. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller is described in which a video camera captures an image of the beam in its video frames and conveys those images to a processing board, which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher-resolution centroid coordinates.
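    A minimal sketch of the centroid computation, assuming frame is a grayscale image of the beam (the sub-pixel refinement via system noise and Bernoulli trials is not reproduced here):

        import numpy as np

        def beam_centroid(frame):
            """Intensity-weighted centroid (cx, cy) of a grayscale frame."""
            f = frame.astype(np.float64)
            ys, xs = np.indices(f.shape)
            total = f.sum()
            return (xs * f).sum() / total, (ys * f).sum() / total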

  13. Visible light communication based vehicle positioning using LED street light and rolling shutter CMOS sensors

    NASA Astrophysics Data System (ADS)

    Do, Trong Hop; Yoo, Myungsik

    2018-01-01

    This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.

  14. Precise Selenodetic Coordinate System Based on Artificial Light References

    NASA Astrophysics Data System (ADS)

    Bagrov, Alexander; Pichkhadze, Konstantin M.; Sysoev, Valentin

    Historically, a coordinate system for the Moon was established on the basis of telescopic observations from the Earth. As the angular resolution of ground-based telescopic observations is limited by the Earth's atmosphere and is ordinarily worse than 1 arc second, the mean accuracy of selenodetic coordinates is a few arc minutes, which corresponds to position errors of about 900 meters for lunar objects near the center of the visible lunar disk, and at least twice that for objects near the lunar poles. As there is no Global Positioning System nor any astronomical observation instrument on the Moon, we proposed to use an autonomous light beacon on the Luna-Glob landing module to fix its position on the lunar surface and to use it as a reference point for a spherical coordinate system for the Moon. The light beacon is designed to be reliably visible to the orbiting probe's TV camera. As any space probe has its own star-orientation system, it is straightforward to calculate a set of directions to the beacon and to the reference stars in a probe-centered coordinate system during flight over the beacon. A large number of measured angular positions, together with the time of each observation, is enough to calculate both the orbital parameters of the probe and the selenodetic coordinates of the beacon by methods of geodesy. All this will allow the angular coordinates of any feature of the lunar surface to be fixed in one global coordinate system referred to the beacon. The satellite's orbital plane always contains the center of mass of the main body, so if the beacon is placed close to a lunar pole, we shall determine the pole position of the Moon with an accuracy tens of times better than it is known now. If the angular accuracy of star-based self-orientation of the orbital module of the Luna-Glob mission is 6 arc seconds, then from a circular orbit with a height of 200 km the on-board TV camera will allow calculation of the beacon position to about 6", corresponding to the spatial resolution of the camera. This means that the coordinates of the beacon will be determined with an accuracy no worse than 6 meters on the lunar surface. Much higher accuracy can be achieved if the orbital probe uses a more precise angular measurer such as an optical interferometer. The limiting accuracy of the proposed method is far beyond any reasonable requirement, because it may reach the sub-millimeter level. Theoretical analysis shows that, to achieve 1-meter accuracy of coordinate measurement over the lunar globe, it will be enough to disperse some 60 light beacons over its surface. The light beacon designed by Lavochkin Association is autonomous and will work for at least 10 years, so the coordinate frame of any other lunar mission could use the established selenodetic coordinates during this period. The same approach may be used for establishing a Martian coordinate system.

  15. Automated generation of image products for Mars Exploration Rover Mission tactical operations

    NASA Technical Reports Server (NTRS)

    Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen

    2005-01-01

    This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product-generating systems will also be provided with emphasis on interdependencies, including those for a) geometric rectification of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.
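    As an illustration of step c), the sketch below converts a rectified stereo disparity map into XYZ coordinates under a pinhole model; the focal length, baseline, and principal point are placeholders, not MER camera parameters.

        import numpy as np

        def disparity_to_xyz(disparity, f, B, cx, cy):
            """Per-pixel (X, Y, Z) in the left-camera frame from a rectified disparity map."""
            ys, xs = np.indices(disparity.shape)
            Z = np.where(disparity > 0, f * B / np.maximum(disparity, 1e-6), np.nan)
            X = (xs - cx) * Z / f
            Y = (ys - cy) * Z / f
            return np.dstack([X, Y, Z])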

  16. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
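    The two-camera position detection can be illustrated with standard triangulation; the sketch below (OpenCV, with assumed projection matrices and pixel coordinates) is a generic stand-in for the system's actual processing.

        import numpy as np
        import cv2

        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # reference sensor camera
        R, _ = cv2.Rodrigues(np.array([[0.0], [0.3], [0.0]]))        # assumed second-camera rotation
        P2 = K @ np.hstack([R, np.array([[-0.5], [0.0], [0.0]])])    # assumed baseline of 0.5 m

        pt1 = np.array([[330.0], [250.0]])     # subject centroid seen by camera 1 (pixels)
        pt2 = np.array([[410.0], [248.0]])     # subject centroid seen by camera 2 (pixels)
        X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)                # homogeneous 4x1
        print("subject position (camera-1 frame):", (X_h[:3] / X_h[3]).ravel())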

  17. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping.

    PubMed

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-12-31

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable.

  18. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping

    PubMed Central

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-01-01

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable. PMID:28042855

  19. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to the Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. Lunar terrain can then be reconstructed based on photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Then, research contents such as the observation program and the specific solution methods for the installation parameters are introduced. The parameter solution accuracy is analyzed using observations obtained from the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. So the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  20. Low-cost panoramic infrared surveillance system

    NASA Astrophysics Data System (ADS)

    Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George

    2017-05-01

    A nighttime surveillance concept consisting of a single-surface omnidirectional mirror assembly and an uncooled Vanadium Oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offer significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. A description of the method by which these images are analyzed is given, along with a presentation of these results side by side with the output of the 50 mm LWIR imager and a panoramic visible light imager. Finally, a discussion of the concept and its future development is given.
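    A minimal sketch of the rectangular-to-polar unwrapping step, assuming the centre and radius of the mirror image are known from a one-time alignment; OpenCV's polar warp is used here as a generic stand-in for the authors' conversion.

        import cv2

        def unwrap_panorama(frame, center, max_radius, n_azimuth=1440, n_radial=256):
            """Unwrap an omnidirectional frame into an azimuth/elevation panorama."""
            # In cv2.warpPolar the output width samples radius and the height samples angle.
            pano = cv2.warpPolar(frame, (n_radial, n_azimuth), center, max_radius,
                                 cv2.WARP_POLAR_LINEAR)
            # Rotate so that azimuth runs along the horizontal axis of the panorama.
            return cv2.rotate(pano, cv2.ROTATE_90_CLOCKWISE)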

  1. Analysis of Photogrammetry Data from ISIM Mockup

    NASA Technical Reports Server (NTRS)

    Nowak, Maria; Hill, Mike

    2007-01-01

    During ground testing of the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST), the ISIM Optics group plans to use a Photogrammetry Measurement System for cryogenic calibration of specific target points on the ISIM composite structure, the Science Instrument optical benches and other GSE equipment. This testing will occur in the Space Environmental Systems (SES) chamber at Goddard Space Flight Center. Close-range photogrammetry is a 3-dimensional metrology system that uses triangulation to locate custom targets in 3 coordinates via a collection of digital photographs taken from various locations and orientations. These photos are connected using coded targets, special targets that are recognized by the software and can thus correlate the images to provide a 3-dimensional map of the targets, and are scaled via well-calibrated scale bars. Photogrammetry solves for the camera location and the coordinates of the targets simultaneously through the bundling procedure contained in the V-STARS software, proprietary software owned by Geodetic Systems Inc. The primary objectives of the metrology performed on the ISIM mock-up were (1) to quantify the accuracy of the INCA3 photogrammetry camera on a representative full-scale version of the ISIM structure at ambient temperature by comparing the measurements obtained with this camera to measurements using the Leica laser tracker system and (2) to empirically determine the smallest increment of target position movement that can be resolved by the PG camera in the test setup, i.e., its precision or resolution. In addition, the geometrical details of the test setup defined during the mockup testing, such as target locations and camera positions, will contribute to the final design of the photogrammetry system to be used on the ISIM Flight Structure.

  2. Non-contact measurement of rotation angle with solo camera

    NASA Astrophysics Data System (ADS)

    Gan, Xiaochuan; Sun, Anbin; Ye, Xin; Ma, Liqun

    2015-02-01

    For the purpose of measuring a rotation angle around the axis of an object, a non-contact rotation angle measurement method based on a single camera is proposed. The intrinsic parameters of the camera were calibrated using a chessboard according to the plane calibration principle. The translation matrix and rotation matrix between the object coordinate system and the camera coordinate system were calculated from the relationship between the corners' positions on the object and their coordinates in the image. The rotation angle between the measured object and the camera could then be resolved from the rotation matrix. A precise angle dividing table (PADT) was chosen as the reference to verify the angle measurement error of this method. Test results indicated that the rotation angle measurement error of this method did not exceed ±0.01 degree.
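    The pose-from-corners step can be illustrated with a standard PnP solution; the sketch below (OpenCV, placeholder variable names and calibration values) extracts a rotation angle about one axis from the resulting rotation matrix and is not the authors' exact implementation.

        import numpy as np
        import cv2

        def rotation_about_z(obj_pts, img_pts, K, dist):
            """Yaw-like angle (degrees, ZYX convention) of the object w.r.t. the camera frame."""
            ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
            R, _ = cv2.Rodrigues(rvec)
            return np.degrees(np.arctan2(R[1, 0], R[0, 0]))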

  3. SU-E-T-570: New Quality Assurance Method Using Motion Tracking for 6D Robotic Couches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheon, W; Cho, J; Ahn, S

    Purpose: To accommodate geometrically accurate patient positioning, a robotic couch that is capable of 6 degrees of freedom has been introduced. However, conventional couch QA methods are not sufficient to enable the necessary accuracy of tests. Therefore, we have developed a camera-based motion detection and geometry calibration system for couch QA. Methods: Employing a Visual Tracking System (VTS, Bonita B10, Vicon, UK) which tracks infrared reflective (IR) markers, camera calibration was conducted using a 5.7 × 5.7 × 5.7 cm³ cube with IR markers attached at each corner. After positioning the robotic couch at the origin with the cube on the table top, 3D coordinates of the cube's eight corners were acquired by the VTS in the VTS coordinate system. Next, positions in the reference (room) coordinate system were assigned using the known relation between the points. Finally, camera calibration was completed by finding a transformation matrix between the VTS and reference coordinate systems and by applying a pseudo-inverse matrix method. After the calibration, the accuracy of linear and rotational motions as well as couch sagging could be measured by analyzing the continuously acquired data of the cube while the couch moves to a designated position. The accuracy of the developed software was verified through comparison with measurement data from a laser tracker (FARO, Lake Mary, USA) for a robotic couch installed for proton therapy. Results: The VTS could track couch motion accurately and measure positions in room coordinates. The VTS measurements and the laser tracker data agreed within 1% for linear and rotational motions. Also, because the program analyzes motion in 3 dimensions, it can compute couch sagging. Conclusion: The developed QA system provides sub-millimeter/degree accuracy, which fulfills high-end couch QA requirements. This work was supported by the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (2013M2A2A7043507 and 2012M3A9B6055201).
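    A minimal sketch of the pseudo-inverse step, assuming the eight cube corners are available both in VTS coordinates and in assigned room coordinates (illustrative, not the actual QA software):

        import numpy as np

        def fit_transform(vts_pts, room_pts):
            """Return a 4x4 T such that a homogeneous room point is approximately T @ [x, y, z, 1]."""
            A = np.hstack([vts_pts, np.ones((len(vts_pts), 1))])     # (N, 4) VTS points, homogeneous
            B = np.hstack([room_pts, np.ones((len(room_pts), 1))])   # (N, 4) room points, homogeneous
            M = np.linalg.pinv(A) @ B      # least-squares solution of A @ M = B via the pseudo-inverse
            return M.T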

  4. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-11-27

    Multimodal medical image fusion combines information from one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. In this way, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
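    The calibration-based fusion step can be illustrated by projecting MRI-frame surface points into the thermogram using the tracked camera pose; the sketch below (OpenCV; R, t, K and the distortion vector are assumed inputs) is a generic stand-in, not the framework's implementation.

        import numpy as np
        import cv2

        def project_mri_points(pts_mri, R, t, K, dist=None):
            """Project (N, 3) MRI-frame points into the 2D thermal image given pose (R, t) and intrinsics K."""
            rvec, _ = cv2.Rodrigues(R)
            img_pts, _ = cv2.projectPoints(pts_mri.astype(np.float64), rvec,
                                           t.astype(np.float64), K,
                                           dist if dist is not None else np.zeros(5))
            return img_pts.reshape(-1, 2)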

  5. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex cameras (DSLR), which are commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the beam entering the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on the rock surface at Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they almost overlap. The mirrored camera was more self-consistent with respect to changes in model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced from photographs obtained with these cameras, can be used for terrestrial photogrammetric studies.

  6. Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform

    NASA Astrophysics Data System (ADS)

    Liu, H. S.; Liao, H. M.

    2015-08-01

    A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can make measurements directly on the images. In order to calculate positioning properly, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate image, coordinates and camera position; however, it is very expensive, and users cannot use the result immediately because the position information is not embedded into the image. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can calculate positioning with the open source software OpenCV. In the end, we use the open source panorama browser panini and integrate all of these into the open source GIS software Quantum GIS, so that a wholesome data collection and processing system can be constructed.

  7. Positioning sensor by combining optical projection and photogrammetry

    NASA Astrophysics Data System (ADS)

    Zheng, Benrui

    Six spatial parameters, (x, y, z) for translation and pitch, roll, and yaw for rotation, are used to describe the 3-dimensional position and orientation of a rigid body: the 6 degrees of freedom (DOF). The ability to measure these parameters is required in a diverse range of applications including machine tool metrology, robot calibration, motion control, motion analysis, and reconstructive surgery. However, there are limitations associated with the currently available measurement systems. Shortcomings include some of the following: short dynamic range, limited accuracy, line-of-sight restrictions, and capital cost. The objective of this dissertation was to develop a new metrology system that overcomes line-of-sight restrictions, reduces system costs, allows a large dynamic range, and has the potential to provide high measurement accuracy. The new metrology system proposed in this dissertation is based on a combination of photogrammetry and optical pattern projection. This system has the potential to enable real-time measurement of a small lightweight module's location. The module generates an optical pattern that is observable on the surrounding walls, and photogrammetry is used to measure the absolute coordinates of features in the projected optical pattern with respect to a defined global coordinate system. By combining these absolute coordinates with the known angular information of the optical projection beams, a minimization algorithm can be used to extract the absolute coordinates and angular orientation of the module itself. The feasibility of the proposed metrology system was first proved through preliminary experimental tests. By using a module with a 7x7 dot matrix pattern, experimental agreement of 1 to 5 parts in 10^3 was obtained by translating the module over 0.9 m and by rotating it through 60°. The proposed metrology system was modeled through numerical simulations, and factors affecting the uncertainty of the measurement were investigated. The simulation results demonstrate that optimum design of the projected pattern gives a lower associated measurement uncertainty than is possible by direct photogrammetric measurement with traditional tie points alone. Based on the simulation results, several improvements were made to the proposed metrology system. These improvements include using a module with a larger full view angle and a larger number of dots, performing angle calibration for the module, using a virtual camera approach to determine the module location, and employing multiple coordinate systems for large-range rotation measurement. With the newly proposed virtual camera approach, experimental agreement at the level of 3 parts in 10^4 was observed for the one-dimensional translation test. The virtual camera approach is faster than the original minimization algorithm, and an additional minimization analysis is no longer needed. In addition, the virtual camera approach offers the benefit that it is no longer necessary to identify all dots in the pattern, and so it is more amenable to use in realistic and usually complicated environments. A preliminary rotation test over 120° was conducted by tying three coordinate systems together. It was observed that the absolute values of the angle differences between the measured angle and the encoder reading are smaller than 0.23° for all measurements. This proposed metrology system thus has the ability to measure a larger angle range (up to 360°) by using multiple coordinate systems. The uncertainty analysis of the proposed system was performed through Monte Carlo simulation, and it was demonstrated that the experimental results are consistent with the analysis.
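    A minimal sketch of the minimization idea, assuming the unit beam directions d_i are known in the module frame and the wall points w_i have been measured photogrammetrically (SciPy; the parameterization and names are illustrative, not the dissertation's algorithm):

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def residuals(params, dirs, wall_pts):
            """Perpendicular offsets of measured wall points from the projected beams."""
            p, angles = params[:3], params[3:]
            R = Rotation.from_euler("xyz", angles).as_matrix()
            u = dirs @ R.T                                    # unit beam directions in the world frame
            v = wall_pts - p                                  # module -> measured wall point
            along = (v * u).sum(axis=1, keepdims=True) * u    # component along each beam
            return (v - along).ravel()

        def solve_pose(dirs, wall_pts, x0=np.zeros(6)):
            """Estimate module position (3) and Euler angles (3) by least squares."""
            return least_squares(residuals, x0, args=(dirs, wall_pts)).x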

  8. [Optimization of end-tool parameters based on robot hand-eye calibration].

    PubMed

    Zhang, Lilong; Cao, Tong; Liu, Da

    2017-04-01

    A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce the preparation time. A new and practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. In the one-time registration method, a marker on the end-tool of the robot was first recognized by a fixed binocular camera, and then the orientation and position of the marker were calculated based on the joint parameters of the robot. Second, the relationship between the camera coordinate system and the robot base coordinate system could be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable. Numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method could significantly improve the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization method could significantly improve the absolute positioning accuracy of the one-time registration method. The absolute positioning accuracy of the one-time registration method can meet the requirements of clinical surgery.

  9. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, taken also with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.

  10. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes the camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. In addition, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that, after the distortions are eliminated, the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
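    The pre-determined pixel mapping can be sketched with OpenCV's undistortion maps, computed once and then applied per frame; the camera matrix and distortion coefficients below are placeholders, and the paper's own mapping and LUTs are not reproduced.

        import numpy as np
        import cv2

        K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])
        dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])            # k1, k2, p1, p2, k3 (assumed)
        map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (1280, 960), cv2.CV_32FC1)

        def correct_frame(frame):
            """Apply the pre-computed distorted-to-corrected pixel mapping to one frame."""
            return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)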

  11. Optical fringe-reflection deflectometry with bundle adjustment

    NASA Astrophysics Data System (ADS)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-06-01

    Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are observed through specular reflection by a fixed camera. Thus, the pose calibration between the camera and the LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of the fringes. Considering the relation between their poses, the incident and reflected rays can be unified in the camera frame, and a forward triangulation intersection can be performed in the camera frame to measure the three-dimensional (3D) coordinates of the specular surface. In the final optimization, a constrained bundle adjustment is performed to simultaneously refine the camera intrinsic parameters (including distortion coefficients), the estimated geometrical pose between the LCD screen and the camera, and the 3D coordinates of the specular surface, with the help of the absolute-phase collinearity constraint. Simulation and experimental results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and that the constrained bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.

  12. A new position measurement system using a motion-capture camera for wind tunnel tests.

    PubMed

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-09-13

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.

  13. A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests

    PubMed Central

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-01-01

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. PMID:24064600

  14. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software, and one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world coordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.

  15. Integrated calibration of multiview phase-measuring profilometry

    NASA Astrophysics Data System (ADS)

    Lee, Yeong Beum; Kim, Min H.

    2017-11-01

    Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates. Therefore, a PMP measurement from one view cannot be integrated directly with measurements from different views due to the intrinsic difference of the screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required, in addition to the principal PMP calibration. This is cumbersome and often requires physical constraints in the system setup, and multiview PMP is consequently rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly, so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes the intrinsic and extrinsic properties of the configuration of both a camera and a projector simultaneously. It also does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static without requiring any mechanical operation while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of the measurements and demonstrate the advantages of our multiview PMP.

  16. An intelligent space for mobile robot localization using a multi-camera system.

    PubMed

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  17. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    PubMed Central

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  18. Plate measurement techniques and reduction methods used by the West German satellite observers, and resulting consequences for the observation

    NASA Technical Reports Server (NTRS)

    Deker, H.

    1971-01-01

    The West German tracking stations are equipped with ballistic cameras. Plate measurement and plate reduction must therefore follow photogrammetric methods. Approximately 100 star positions and 200 satellite positions are measured on each plate. The mathematical model for spatial rotation of the bundle of rays is extended by including terms for distortion and internal orientation of the camera as well as by providing terms for refraction which are computed for the measured coordinates of the star positions on the plate. From the measuring accuracy of the plate coordinates it follows that the timing accuracy for the exposures has to be about one millisecond, in order to obtain a homogeneous system.

  19. Sharing resources, coordinating response : deploying and operating incident management systems

    DOT National Transportation Integrated Search

    1999-01-05

    This brochure describes how cost-effective incident management technologies can be useful in handling traffic congestion. Embedded sensors, closed circuit television cameras, and variable message signs are examples of existing technologies that can b...

  20. Software for Photometric and Astrometric Reduction of Video Meteors

    NASA Astrophysics Data System (ADS)

    Atreya, Prakash; Christou, Apostolos

    2007-12-01

    SPARVM is a software package for Photometric and Astrometric Reduction of Video Meteors being developed at Armagh Observatory. It is written in Interactive Data Language (IDL) and is designed to run primarily under the Linux platform. The basic features of the software are the derivation of light curves and the estimation of angular velocity and radiant position for single-station data, and, for double-station data, the calculation of 3D coordinates of meteors, velocity and brightness, and the estimation of the meteoroid's orbit including uncertainties. Currently, the software supports extraction of time and date from video frames, estimation of the position of cameras (azimuth, altitude), finding stellar sources in video frames, and transformation of coordinates from video frames to the horizontal coordinate system (azimuth, altitude) and the equatorial coordinate system (RA, Dec).
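
    As an illustration of the final coordinate transformation step, a small sketch of converting horizontal coordinates to equatorial ones with standard spherical trigonometry; refraction is ignored, azimuth is measured from north through east, and the local sidereal time is assumed to be already known. This is a generic textbook formula, not SPARVM's actual IDL code.

```python
# Horizontal (azimuth, altitude) to equatorial (RA, Dec) conversion,
# ignoring refraction. Azimuth measured from north through east.
# lat_deg and lst_deg (local sidereal time) are assumed known; angles in degrees.
import math

def altaz_to_radec(alt_deg, az_deg, lat_deg, lst_deg):
    alt, az, lat = map(math.radians, (alt_deg, az_deg, lat_deg))
    sin_dec = math.sin(alt) * math.sin(lat) + math.cos(alt) * math.cos(lat) * math.cos(az)
    dec = math.asin(sin_dec)
    # Hour angle from both its sine and cosine to resolve the correct quadrant
    sin_ha = -math.sin(az) * math.cos(alt) / math.cos(dec)
    cos_ha = (math.sin(alt) - math.sin(lat) * sin_dec) / (math.cos(lat) * math.cos(dec))
    ha = math.degrees(math.atan2(sin_ha, cos_ha))
    ra = (lst_deg - ha) % 360.0
    return ra, math.degrees(dec)
```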

  1. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  2. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.

  3. Phase-stepped fringe projection by rotation about the camera's perspective center.

    PubMed

    Huddart, Y R; Valera, J D; Weston, N J; Featherstone, T C; Moore, A J

    2011-09-12

    A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera's perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.

  4. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology is widely used in various fields. Object tracking is one of the important parts of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems: tracking based on pixels cannot reflect the real position of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's space coordinate system, obtained by converting the 2-D coordinates of the target into 3-D coordinates. The experimental results show that our method can recover the real position changes of targets well and can also accurately obtain the trajectory of the target in space.
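
    A minimal sketch of the kind of 2-D to 3-D conversion this relies on: with intrinsics and extrinsics from a Zhang-style calibration, a pixel is back-projected to a viewing ray and intersected with the ground plane Z = 0. The variable names and the planar-target assumption are illustrative and are not taken from the paper's implementation.

```python
# Back-project an image pixel to a 3D world point on the ground plane Z = 0,
# given calibrated intrinsics K and extrinsics (R, t) with x_cam = R @ X_w + t.
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    d = R.T @ ray_cam                                    # ray direction in world frame
    C = -R.T @ t                                         # camera centre in world frame
    s = -C[2] / d[2]                                     # intersect with plane Z = 0
    return C + s * d                                     # 3D point on the ground
```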

  5. Railway clearance intrusion detection method with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    During railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To address the depth insensitivity and shadow interference of single-image methods, an intrusion detection method with binocular stereo vision is proposed to reconstruct the 3D scene, locate the objects, and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by a background difference method applied to a single camera's image sequence. Image rectification, stereo matching, and 3D reconstruction are only executed when there is a suspicious region. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS, and the point clouds are then used to calculate the object position and intrusion in the TCS. Experiments in a railway scene show that the position precision is better than 10 mm. The method is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
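
    A minimal sketch of that last step, with an illustrative rectangular clearance envelope; the real clearance gauge profile and the transform values would come from the calibration described above and are not taken from the paper.

```python
# Transfer reconstructed 3D points from the camera coordinate system (CCS)
# to the track coordinate system (TCS) and flag clearance intrusions.
# R_ct, t_ct and the envelope half-width/height are illustrative placeholders.
import numpy as np

def intruding_points(points_ccs, R_ct, t_ct, half_width=1.75, height=5.0):
    """points_ccs: (N, 3) array of 3D points in the CCS (metres)."""
    points_tcs = points_ccs @ R_ct.T + t_ct       # rigid transform into the TCS
    y, z = points_tcs[:, 1], points_tcs[:, 2]     # lateral offset and height above rail
    inside = (np.abs(y) < half_width) & (z > 0.0) & (z < height)
    return points_tcs[inside]                      # points violating the clearance
```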

  6. A new mapping function in table-mounted eye tracker

    NASA Astrophysics Data System (ADS)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a new apparatus for human-computer interaction which has attracted much attention in recent years. Eye tracking technology obtains the subject's current "visual attention (gaze)" direction by mechanical, electronic, optical, image processing, and other means of detection. The mapping function is one of the key technologies of the image processing stage and determines the accuracy of the whole eye-tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera, and the screen that the eye gazes at. Firstly, according to the geometrical relationship among the eyes, the camera, and the screen, the framework of the mapping function between the pupil center and the screen coordinates is constructed. Secondly, in order to simplify the vector inversion of the mapping function, the coordinates of the eyes, the camera, and the screen are modeled by coaxial model systems. In order to verify the mapping function, a corresponding experiment was carried out, and the result is also compared with the traditional quadratic polynomial function. The results show that our approach can improve the accuracy of gaze-point determination. Compared with other methods, this mapping function is simple and valid.

  7. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and the right optical systems is an important factor that influences the overall imaging consistency. Conventional testing procedures for optical systems lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and the right optical systems capture images in normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ distribution of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method can be used effectively and paves the way for the imaging consistency testing of binocular cameras.

  8. Assessing the Reliability and the Accuracy of Attitude Extracted from Visual Odometry for LIDAR Data Georeferencing

    NASA Astrophysics Data System (ADS)

    Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.

    2017-08-01

    Airborne LiDAR systems require the use of Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception to this need, but its payload has to be lighter than that installed onboard manned aircraft, so the manufacturer needs to find an alternative to heavy sensors and navigation systems. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The frames are then processed photogrammetrically so as to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs) as in airborne photogrammetry surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which carries a GNSS receiver, an IMU sensor and a camera. The LiDAR is replaced by a tacheometer in order to survey control points that are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and those already known.
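
    For reference, the direct georeferencing relation that such a system evaluates is commonly written as follows; the notation is the generic one used in the airborne LiDAR literature, not necessarily the symbols of this paper.

```latex
% Direct georeferencing of a LiDAR return: the ground point in the mapping
% frame m is the GNSS-derived sensor position plus the lever arm and the
% scanner measurement rotated by the (IMU- or VO-derived) attitude.
\[
\mathbf{r}^{m}_{P} \;=\; \mathbf{r}^{m}_{GNSS}
\;+\; \mathbf{R}^{m}_{b}\left(\mathbf{R}^{b}_{s}\,\mathbf{r}^{s}_{P} + \mathbf{a}^{b}\right)
\]
% r^s_P : raw range measurement in the scanner frame s
% R^b_s : boresight rotation from the scanner to the body frame b
% a^b   : lever arm from the GNSS antenna to the scanner, in the body frame
% R^m_b : body-to-mapping-frame attitude (here estimated by visual odometry)
```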

  9. New information technology tools for a medical command system for mass decontamination.

    PubMed

    Fuse, Akira; Okumura, Tetsu; Hagiwara, Jun; Tanabe, Tomohide; Fukuda, Reo; Masuno, Tomohiko; Mimura, Seiji; Yamamoto, Kaname; Yokota, Hiroyuki

    2013-06-01

    In a mass decontamination during a nuclear, biological, or chemical (NBC) response, the capability to command, control, and communicate is crucial for the proper flow of casualties at the scene and their subsequent evacuation to definitive medical facilities. Information Technology (IT) tools can be used to strengthen medical control, command, and communication during such a response. Novel IT tools comprise a vehicle-based, remote video camera and communication network systems. During an on-site verification event, an image from a remote video camera system attached to the personal protective garment of a medical responder working in the warm zone was transmitted to the on-site Medical Commander for aid in decision making. Similarly, a communication network system was used for personnel at the following points: (1) the on-site Medical Headquarters; (2) the decontamination hot zone; (3) an on-site coordination office; and (4) a remote medical headquarters of a local government office. A specially equipped, dedicated vehicle was used for the on-site medical headquarters, and facilitated the coordination with other agencies. The use of these IT tools proved effective in assisting with the medical command and control of medical resources and patient transport decisions during a mass-decontamination exercise, but improvements are required to overcome transmission delays and camera direction settings, as well as network limitations in certain areas.

  10. Developing a multi-Kinect-system for monitoring in dairy cows: object recognition and surface analysis using wavelets.

    PubMed

    Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W

    2016-09-01

    Camera-based systems for dairy cattle have been studied intensively over recent years. In contrast to this study, previous work presented single-camera systems with a limited range of applications, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and for transforming the 3D data into a joint coordinate system have already been implemented. This study introduced the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method is explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The images' high-frequency parts reconstructed from wavelet decompositions using the Haar and the biorthogonal 1.5 wavelets were statistically analyzed with regard to the effects of image foreground or background and of cow or person surfaces. Furthermore, binary classifiers based on the local high frequencies have been implemented to decide whether a pixel belongs to the image foreground and whether it is located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of the Area Under the receiver operating characteristic Curve (AUC). The classifications by species showed maximal AUC values of 0.69.
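
    A minimal sketch of the kind of per-pixel high-frequency feature this describes, using PyWavelets with the two wavelets named above; the depth-map source and the energy feature are illustrative assumptions, and the paper's own classifier is not reproduced here.

```python
# Single-level 2D wavelet decomposition of a Kinect depth map and a simple
# local high-frequency energy map, for the Haar and biorthogonal 1.5 wavelets.
import numpy as np
import pywt

depth = np.load("kinect_depth_frame.npy").astype(float)   # hypothetical input

for name in ("haar", "bior1.5"):
    cA, (cH, cV, cD) = pywt.dwt2(depth, name)    # approximation + detail bands
    high_freq_energy = cH**2 + cV**2 + cD**2      # local high-frequency content
    # A binary classifier could threshold (or learn from) this energy map to
    # decide foreground vs. background, or cow vs. person, per pixel.
    print(name, float(high_freq_energy.mean()))
```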

  11. Testbed for remote telepresence research

    NASA Astrophysics Data System (ADS)

    Adnan, Sarmad; Cheatham, John B., Jr.

    1992-11-01

    Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.

  12. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    PubMed Central

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295

  13. Planetary Image Geometry Library

    NASA Technical Reports Server (NTRS)

    Deen, Robert C.; Pariser, Oleg

    2010-01-01

    The Planetary Image Geometry (PIG) library is a multi-mission library used for projecting images (EDRs, or Experiment Data Records) and managing their geometry for in-situ missions. A collection of models describes cameras and their articulation, allowing application programs such as mosaickers, terrain generators, and pointing correction tools to be written in a multi-mission manner, without any knowledge of parameters specific to the supported missions. Camera model objects allow transformation of image coordinates to and from view vectors in XYZ space. Pointing models, specific to each mission, describe how to orient the camera models based on telemetry or other information. Surface models describe the surface in general terms. Coordinate system objects manage the various coordinate systems involved in most missions. File objects manage access to metadata (labels, including telemetry information) in the input EDRs and RDRs (Reduced Data Records). Label models manage metadata information in output files. Site objects keep track of different locations where the spacecraft might be at a given time. Radiometry models allow correction of radiometry for an image. Mission objects contain basic mission parameters. Pointing adjustment ("nav") files allow pointing to be corrected. The object-oriented structure (C++) makes it easy to subclass just the pieces of the library that are truly mission-specific. Typically, this involves just the pointing model and coordinate systems, and parts of the file model. Once the library was developed (initially for Mars Polar Lander, MPL), adding new missions ranged from two days to a few months, resulting in significant cost savings as compared to rewriting all the application programs for each mission. Currently supported missions include Mars Pathfinder (MPF), MPL, Mars Exploration Rover (MER), Phoenix, and Mars Science Lab (MSL). Applications based on this library create the majority of operational image RDRs for those missions. A Java wrapper around the library allows parts of it to be used from Java code (via a native JNI interface). Future conversions of all or part of the library to Java are contemplated.

  14. Dimensional coordinate measurements: application in characterizing cervical spine motion

    NASA Astrophysics Data System (ADS)

    Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan

    2014-06-01

    The cervical spine is a complicated part of the human body, and the form of its movement is diverse. The movements of the vertebral segments are three-dimensional and are reflected in the changes of the angles between joints and in the displacements in different directions. Under normal conditions, the cervical spine can flex, extend, laterally flex and rotate. Because there is no relative motion between measuring marks fixed on one segment of a cervical vertebra, a cervical vertebra with three marked points can be treated as a rigid body. A body's motion in space can be decomposed into a translational movement and a rotational movement around a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the cervical spine by an optical method; these measurements then allow the calculation of motion parameters for every spine segment. For this study, we chose a three-dimensional measurement method based on binocular stereo vision. The object with marked points is placed in front of the CCD cameras, and each shot yields two parallax images taken by the different cameras. According to the principle of binocular vision, three-dimensional measurements can then be realized. The cameras are mounted in parallel. This paper describes the layout of the experimental system and a mathematical model for obtaining the coordinates.
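
    A minimal sketch of the triangulation used with such a parallel (rectified) stereo pair, assuming an ideal pinhole model with focal length f, baseline B and a shared principal point; the function and variable names are illustrative, not the paper's model.

```python
# Depth from disparity for a parallel (rectified) stereo pair:
#   Z = f * B / d,  with disparity d = u_left - u_right (pixels).
# f: focal length in pixels, B: baseline in metres, (cx, cy): principal point.
def triangulate_parallel(u_left, v_left, u_right, f, B, cx, cy):
    d = u_left - u_right                 # disparity of the marked point
    Z = f * B / d                        # depth along the optical axis
    X = (u_left - cx) * Z / f            # lateral coordinate
    Y = (v_left - cy) * Z / f            # vertical coordinate
    return X, Y, Z
```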

  15. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system - for a single camera analysis - was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinates root-mean-square-error (RMSE), and 0.07 to 0.19 mm for dual cameras analysis. It is to the authors' best knowledge that this work is the first to address the topic of DF stability analysis.
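
    For reference, the collinearity model referred to above is usually written as follows, with R taken here as the rotation from object space to camera space; the exact parameterisation used in the paper may differ.

```latex
% Collinearity equations: image point (x, y), principal point (x_0, y_0),
% principal distance c, object point (X, Y, Z), perspective centre
% (X_0, Y_0, Z_0), and r_{ij} the elements of the object-to-camera rotation.
\[
x = x_0 - c\,
\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}
     {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}
\qquad
y = y_0 - c\,
\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}
     {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}
\]
```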

  16. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    PubMed

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, such as 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close-range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of a few key range maps, each including a portion of the targets, into the global reference system defined by photogrammetry. The other 3-D images are then aligned around these locked images with the usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
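
    A minimal sketch of how the rigid motion that locks a key range map into the photogrammetric global frame can be estimated from corresponding target coordinates, using a standard SVD-based least-squares fit; this illustrates the general principle, not the authors' specific implementation.

```python
# Least-squares rigid transform (rotation R, translation t) mapping target
# coordinates measured in a range map onto the same targets' photogrammetric
# (global) coordinates, via the SVD-based Kabsch solution.
import numpy as np

def rigid_fit(range_pts, global_pts):
    """range_pts, global_pts: (N, 3) arrays of corresponding target positions."""
    mu_r, mu_g = range_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (range_pts - mu_r).T @ (global_pts - mu_g)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_r
    return R, t        # global point ~= R @ range point + t
```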

  17. Optical 3D-coordinate measuring system using structured light

    NASA Astrophysics Data System (ADS)

    Schreiber, Wolfgang; Notni, Gunther; Kuehmstedt, Peter; Gerber, Joerg; Kowarschik, Richard M.

    1996-09-01

    The paper is aimed at the description of an optical shape measurement technique based on a consistent principle using fringe projection. We demonstrate a real 3D-coordinate measuring system where the scale of the coordinates is given only by the illumination structures. This method has the advantages that the aberrations of the observing system and the depth-dependent imaging scale have no influence on the measuring accuracy, and, moreover, the measurements are independent of the position of the camera with respect to the object under test. Furthermore, it is shown that the influence of specular effects of the surface on the measuring result can be eliminated. Moreover, we developed a very simple algorithm to calibrate the measuring system. The measurement examples show that a measuring accuracy of 10^-4 (i.e. 10 micrometers) within an object volume of 100 x 100 x 70 mm3 is achievable. Furthermore, it is demonstrated that the set of coordinate values can be processed in CNC and CAD systems.

  18. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  19. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).

  20. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    NASA Astrophysics Data System (ADS)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates various kinds of pulse sequences for the TDI-CCD, the video processor and the imaging data output, acting as the synchronous time coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co. Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor have been examined in detail, and the timing generator has been designed for the scientific-grade CCD camera. An FPGA is chosen as the hardware design platform, and the schedule generator is described in VHDL. The designed generator has successfully passed functional simulation with EDA software and has been fitted into an XC2VP20-FF1152 (an FPGA product made by XILINX). The experiments indicate that the new method improves the level of integration of the system. High reliability, stability and low power consumption of the scientific-grade CCD camera system are achieved. At the same time, the design and experiment period is sharply shortened.

  1. The suitability of lightfield camera depth maps for coordinate measurement applications

    NASA Astrophysics Data System (ADS)

    Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael

    2015-12-01

    Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, the camera control variables, environmental sensitivity, image distortion characteristics, and effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process has been created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metres). The novel results show a depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm, respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.

  2. Full High-definition three-dimensional gynaecological laparoscopy--clinical assessment of a new robot-assisted device.

    PubMed

    Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus

    2014-01-01

    To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.

  3. The Use Of Videography For Three-Dimensional Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.

    1988-02-01

    Special video path editing capabilities, with custom hardware and software, have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers is secured to a subject performing a given task (i.e. walking, throwing, swinging a golf club, etc.). Multiple cameras, a video processor, and a computer workstation collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the DLT method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data are then displayed for appropriate review and/or comparison.

  4. Single camera photogrammetry system for EEG electrode identification and localization.

    PubMed

    Baysal, Uğur; Sengül, Gökhan

    2010-04-01

    In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating, 2MP digital camera about 20 cm above the subject's head is used and the images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes have been labeled by colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested by using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined while incorporating three different methods: (i) the proposed photogrammetric method, (ii) conventional 3D radiofrequency (RF) digitizer, and (iii) coordinate measurement machine having about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.

  5. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold value of the YCbCr color model. By carrying out a correlation between this segmented face area and the right input image, the location coordinates of the target face can be acquired, and these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.

  6. A computer system to analyze showers in nuclear emulsions: Center Director's discretionary fund report

    NASA Technical Reports Server (NTRS)

    Meegan, C. A.; Fountain, W. F.; Berry, F. A., Jr.

    1987-01-01

    A system to rapidly digitize data from showers in nuclear emulsions is described. A TV camera views the emulsions though a microscope. The TV output is superimposed on the monitor of a minicomputer. The operator uses the computer's graphics capability to mark the positions of particle tracks. The coordinates of each track are stored on a disk. The computer then predicts the coordinates of each track through successive layers of emulsion. The operator, guided by the predictions, thus tracks and stores the development of the shower. The system provides a significant improvement over purely manual methods of recording shower development in nuclear emulsion stacks.

  7. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras

    NASA Astrophysics Data System (ADS)

    Gómez Martínez, Harvey; Giorgi, Gabriele; Eissfeller, Bernd

    2017-10-01

    This paper presents a methodology for estimating the position and orientation of a rocket body in orbit - the target - undergoing a roto-translational motion, with respect to a chaser spacecraft, whose task is to match the target dynamics for a safe rendezvous. During the rendezvous maneuver the chaser employs a Time-of-Flight camera that acquires a point cloud of 3D coordinates mapping the sensed target surface. Once the system identifies the target, it initializes the chaser-to-target relative position and orientation. After initialization, a tracking procedure enables the system to sense the evolution of the target's pose between frames. The proposed algorithm is evaluated using simulated point clouds, generated with a CAD model of the Cosmos-3M upper stage and the PMD CamCube 3.0 camera specifications.

  8. Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe pattern projection

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Gao, Nan; Wang, Xiangjun; Zhang, Zonghua

    2018-05-01

    Three-dimensional (3D) shape measurement based on fringe pattern projection techniques has been commonly used in various fields. One of the remaining challenges in fringe pattern projection is that camera sensor saturation may occur if there is a large range of reflectivity variation across the surface that causes measurement errors. To overcome this problem, a novel fringe pattern projection method is proposed to avoid image saturation and maintain high-intensity modulation for measuring shiny surfaces by adaptively adjusting the pixel-to-pixel projection intensity according to the surface reflectivity. First, three sets of orthogonal color fringe patterns and a sequence of uniform gray-level patterns with different gray levels are projected onto a measured surface by a projector. The patterns are deformed with respect to the object surface and captured by a camera from a different viewpoint. Subsequently, the optimal projection intensity at each pixel is determined by fusing different gray levels and transforming the camera pixel coordinate system into the projector pixel coordinate system. Finally, the adapted fringe patterns are created and used for 3D shape measurement. Experimental results on a flat checkerboard and shiny objects demonstrate that the proposed method can measure shiny surfaces with high accuracy.

  9. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.

  10. Naval Biodynamics Laboratory 1993 Command History

    DTIC Science & Technology

    1993-01-01

    position and alignment, camera optical calibration, photo target position, and standard anatomical coordinate systems based upon X-rays of each HRV...safety range. Before, during, and after each sled run, a physiological data acquisition system is used to collect and analyze physiological measurements ...experimental devices. It is also responsible for the configuring of field data measuring and acquisition systems for use aboard ships or at other field

  11. Small format digital photogrammetry for applications in the earth sciences

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, Dirk

    2010-05-01

    Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field. In this case, a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. A lack of photogrammetric training at the same time requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges. A) Definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor. B) Optimization of image acquisition and geometric stability of the image block. C) Identification of a small camera suitable for precise measurements in the field. D) Optimization of the workflow from image acquisition to preparation of images for stereo measurements. E) Introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping-pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass. The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. Wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens-retracting system was achieved for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was less than for the Ricoh, the pixel pitch in the Sigma cameras was much larger. Hence, the same mechanical movement would have a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera. A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have a payload restriction of 0.200 to 0.300 kg. A set of other cameras that were available was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye for distances farther than 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy achieved in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m3 volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops, even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel was the limiting factor of image analysis rather than accuracy. A field manual was developed guiding novice users and students through this technique. The technique does not sacrifice precision for ease of use; successful users of the presented method therefore easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for the novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allowed beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.

  12. Solar-Powered Airplane with Cameras and WLAN

    NASA Technical Reports Server (NTRS)

    Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.; hide

    2004-01-01

    An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.

  13. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm is inappropriate to perform directly during the real-time process. To cope with this issue, we present a novel high-speed, real-time 3-D coordinate measurement technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) method is also introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
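
    A minimal OpenCV sketch of the general idea of pre-computing the distorted-to-corrected pixel mapping once and reusing it for every frame; the intrinsic and distortion values are illustrative placeholders, and this is not the authors' own LUT implementation.

```python
# Pre-compute the undistortion pixel maps once, then apply them to every
# captured fringe image with a fast remap, keeping the per-frame cost low.
import cv2
import numpy as np

K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 480.0],
              [0.0, 0.0, 1.0]])                    # illustrative intrinsics
dist = np.array([-0.12, 0.08, 0.0, 0.0, 0.0])      # illustrative distortion coeffs
w, h = 1280, 960

# One-off: lookup maps from corrected pixel positions to distorted ones
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (w, h), cv2.CV_32FC1)

def correct_frame(fringe_image):
    # Per-frame: a single remap instead of re-evaluating the distortion model
    return cv2.remap(fringe_image, map1, map2, interpolation=cv2.INTER_LINEAR)
```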

  14. A digital system for surface reconstruction

    USGS Publications Warehouse

    Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.

    1996-01-01

    A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.

  15. Motion capture for human motion measuring by using single camera with triangle markers

    NASA Astrophysics Data System (ADS)

    Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi

    2005-12-01

    This study aims to realize motion capture for measuring 3D human motions by using a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieve 3D motion capture using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration methods determine 3D coordinate transformation parameters and a lens distortion parameter with the modified DLT method. The triangle markers enable the calculation of the depth coordinate in the camera coordinate system. Experiments of 3D position measurement using the MMC in a cubic measurement space of 2 m on each side show that the average error in measuring the center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture methods using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers is proposed. Finally, a method to estimate the position of a marker by measuring its velocity is proposed in order to improve the accuracy of the MMC.
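
    A much-simplified sketch of how a marker triangle with known side lengths yields a depth estimate, assuming a pinhole camera and a marker that is roughly parallel to the image plane (the MMC itself handles the general, tilted case, so this is only an illustration):

      import numpy as np

      def triangle_depth(img_pts, side_len_mm, focal_px):
          """Rough depth of a fronto-parallel triangle marker from its image size.

          img_pts     -- 3x2 array of marker corner pixels
          side_len_mm -- true side length of the (equilateral) marker
          focal_px    -- focal length expressed in pixels
          """
          p = np.asarray(img_pts, dtype=float)
          sides_px = [np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)]
          mean_side_px = np.mean(sides_px)
          # similar triangles: size_px / focal_px = size_mm / depth_mm
          return focal_px * side_len_mm / mean_side_px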

  16. Videogrammetric Model Deformation Measurement System User's Manual

    NASA Technical Reports Server (NTRS)

    Dismond, Harriett R.

    2002-01-01

    The purpose of this manual is to provide the user of the NASA VMD system, running the MDef software, Version 1.10, all information required to operate the system. The NASA Videogrammetric Model Deformation system consists of an automated videogrammetric technique used to measure the change in wing twist and bending under aerodynamic load in a wind tunnel. The basic instrumentation consists of a single CCD video camera and a frame grabber interfaced to a computer. The technique is based upon a single view photogrammetric determination of two-dimensional coordinates of wing targets with fixed (and known) third dimensional coordinate, namely the span-wise location. The major consideration in the development of the measurement system was that productivity must not be appreciably reduced.

  17. Development of a New Low-Cost Indoor Mapping System - System Design, System Calibration and First Results

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.; Tschirschwitz, F.

    2016-06-01

    For mapping of building interiors, various 2D and 3D indoor surveying systems are available today. These systems essentially differ from each other by price and accuracy as well as by the effort required for fieldwork and post-processing. The Laboratory for Photogrammetry & Laser Scanning of HafenCity University (HCU) Hamburg has developed, as part of an industrial project, a low-cost indoor mapping system, which enables systematic inventory mapping of interior facilities with low staffing requirements and reduced, measurable expenditure of time and effort. The modelling and evaluation of the recorded data take place later in the office. The indoor mapping system of HCU Hamburg consists of the following components: laser range finder, panorama head (pan-tilt-unit), single-board computer (Raspberry Pi) with digital camera and battery power supply. The camera is pre-calibrated in a photogrammetric test field under laboratory conditions. However, remaining systematic image errors are corrected simultaneously within the generation of the panorama image. Due to cost reasons the camera and laser range finder are not coaxially arranged on the panorama head. Therefore, eccentricity and alignment of the laser range finder against the camera must be determined in a system calibration. For the verification of the system accuracy and the system calibration, the laser points were determined from measurements with total stations. The differences to the reference were 4-5 mm for individual coordinates.

  18. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter/timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.

  19. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.

    1999-01-01

    This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 x 10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.

  20. Flexible mobile robot system for smart optical pipe inspection

    NASA Astrophysics Data System (ADS)

    Kampfer, Wolfram; Bartzke, Ralf; Ziehl, Wolfgang

    1998-03-01

    Damage to pipes can be inspected and graded with TV technology available on the market. Remotely controlled vehicles carry a TV camera through pipes. Thus, depending on the experience and the capability of the operator, diagnosis failures cannot be avoided. The classification of damage requires knowledge of the exact geometrical dimensions of the damage, such as the width and depth of cracks, fractures and defective connections. Within the framework of a joint R&D project, a sensor-based pipe inspection system named RODIAS has been developed with two partners from industry and a research institute. It consists of a remotely controlled mobile robot which carries intelligent sensors for on-line sewerage inspection purposes. The sensor is based on a 3D optical sensor and a laser distance sensor. The laser distance sensor is integrated in the optical system of the camera and can measure the distance between camera and object. The angle of view can be determined from the position of the pan and tilt unit. With coordinate transformations it is possible to calculate the spatial coordinates for every point of the video image, so the geometry of an object can be described exactly. The company Optimess has developed TriScan32, a special software for pipe condition classification. The user can start complex measurements of profiles, pipe displacements or crack widths simply by pressing a push-button. The measuring results are stored together with other data, such as verbal damage descriptions and digitized images, in a database.
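
    The coordinate transformation mentioned above amounts to a spherical-to-Cartesian conversion of the pan angle, tilt angle and laser range; a small sketch under assumed conventions (pan about the vertical axis, tilt measured from the horizontal plane, camera at the origin), not the TriScan32 implementation:

      import math

      def pan_tilt_range_to_xyz(pan_deg, tilt_deg, range_mm):
          """Convert a pan/tilt/laser-range triple to camera-frame XYZ (mm)."""
          pan = math.radians(pan_deg)
          tilt = math.radians(tilt_deg)
          x = range_mm * math.cos(tilt) * math.cos(pan)
          y = range_mm * math.cos(tilt) * math.sin(pan)
          z = range_mm * math.sin(tilt)
          return x, y, z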

  1. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 107-108 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 103-105 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 105 imagers, each with about 104-105 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~109 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  2. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil on the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well-corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
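
    As an illustration of the 2D-spatial/2D-angular decomposition described above, a raw image from an idealized plenoptic camera can be reshaped into the 4D light field, assuming square lenslets of known pixel pitch aligned with the sensor grid (a toy sketch, not the authors' processing):

      import numpy as np

      def raw_to_lightfield(raw, lenslet_px):
          """Reshape a raw plenoptic image into L[s, t, u, v]:
          (s, t) index the lenslet (spatial position),
          (u, v) index the pixel under the lenslet (angular sample)."""
          h, w = raw.shape
          s, t = h // lenslet_px, w // lenslet_px
          lf = raw[:s * lenslet_px, :t * lenslet_px]
          lf = lf.reshape(s, lenslet_px, t, lenslet_px)
          return lf.transpose(0, 2, 1, 3)   # -> (s, t, u, v)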

  3. Three-dimensional cinematography with control object of unknown shape.

    PubMed

    Dapena, J; Harman, E A; Miller, J A

    1982-01-01

    A technique for the reconstruction of three-dimensional (3D) motion, which involves a simple filming procedure but allows the deduction of coordinates in large object volumes, was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which includes the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.

  4. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    USDA-ARS?s Scientific Manuscript database

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
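
    A hedged sketch of how candidate solutions of the AX = ZB formulation can be scored over a set of measured pose pairs (A_i, B_i), using 4x4 homogeneous transforms; the frame assignments and the simple Frobenius-norm residual are illustrative assumptions, not the manuscript's algorithm:

      import numpy as np

      def axzb_residual(A_list, B_list, X, Z):
          """Sum of Frobenius-norm residuals || A_i X - Z B_i || over all pairs.

          One common frame assignment: A_i are robot base -> end-effector poses,
          B_i are world -> camera poses, X is end-effector -> camera and
          Z is robot base -> world; all are 4x4 homogeneous matrices.
          """
          return sum(np.linalg.norm(A @ X - Z @ B, ord="fro")
                     for A, B in zip(A_list, B_list))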

  5. Aspheric and freeform surfaces metrology with software configurable optical test system: a computerized reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.

    2014-03-01

    A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.

  6. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect to these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with space in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: generates a linearized, epi-polar aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations; (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground; (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera; (4) marscoordtrans: translates mosaic coordinates from one form into another; (5) marsdispcompare: checks a Left-Right stereo disparity image against a Right-Left disparity image to ensure they are consistent with each other; (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look like it was taken from the left eye via this program; (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface, and this helps verify, or improve, the pointing of in situ cameras; (8) marsinvrange: inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original; (9) marsproj: projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image; (10) marsprojfid: given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image; (11) marsrad: radiometrically corrects an image; (12) marsrelabel: updates coordinate system or camera model labels in an image; (13) marstiexyz: given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations; (14) marsunmosaic: extracts a single frame from a mosaic, which will be created such that it could have been an input to the original mosaic; useful for creating simulated input frames using different camera models than the original mosaic used; and (15) merinverter: uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; can be used in other missions despite the name.
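
    As an illustration of the projection step performed by tools such as marsproj, the sketch below maps an XYZ point to image line/sample coordinates with a generic pinhole model; the actual programs use the mission-specific camera models handled by PIG, so this is only an analogy with assumed parameter names:

      import numpy as np

      def project_point(xyz, K, R, t):
          """Project a 3-D point into pixel (sample, line) with a pinhole model.

          K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation.
          """
          p_cam = R @ np.asarray(xyz, dtype=float) + t
          if p_cam[2] <= 0:
              raise ValueError("point is behind the camera")
          uvw = K @ p_cam
          return uvw[0] / uvw[2], uvw[1] / uvw[2]   # (sample, line)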

  7. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  8. Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging

    NASA Astrophysics Data System (ADS)

    Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.

    2018-04-01

    We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundreds of images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analyzed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, as well as the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in the photogrammetric analysis; specifically, they may be responsible for erroneous long-wavelength trends in topographic models.
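
    The linear thermal model f(T) = A0 + A1 T can be recovered from per-image focal-length estimates and focal-plane temperatures with an ordinary least-squares fit; a small illustrative sketch (names and example values are assumptions, not MESSENGER data):

      import numpy as np

      def fit_focal_vs_temperature(temps_c, focal_mm):
          """Least-squares fit of f(T) = A0 + A1*T."""
          T = np.asarray(temps_c, dtype=float)
          f = np.asarray(focal_mm, dtype=float)
          A1, A0 = np.polyfit(T, f, 1)    # polyfit returns the highest power first
          return A0, A1

      # illustrative values only:
      # A0, A1 = fit_focal_vs_temperature([-5, 0, 10, 20], [550.00, 550.05, 550.16, 550.27])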

  9. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused by a scaling error, was reduced to 0.77 mm, while the correlation of the errors with their distance from the origin was reduced from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in a scaling compensation similar to that of the surveying method or of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type which has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
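
    A minimal sketch of the two quantities discussed above: the point-wise RMSE between the motion-capture and surveyed marker coordinates, and a single best-fit scale factor about the origin of the kind used to model the scaling error (illustrative assumptions, not the published pipeline):

      import numpy as np

      def rmse(mocap_xyz, survey_xyz):
          """Root mean square error of the per-marker 3-D distances."""
          d = np.linalg.norm(np.asarray(mocap_xyz) - np.asarray(survey_xyz), axis=1)
          return float(np.sqrt(np.mean(d ** 2)))

      def best_fit_scale(mocap_xyz, survey_xyz):
          """Least-squares scale s minimising ||s * mocap - survey|| about the origin."""
          m = np.asarray(mocap_xyz, dtype=float).ravel()
          s = np.asarray(survey_xyz, dtype=float).ravel()
          return float(np.dot(m, s) / np.dot(m, m))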

  10. Light field analysis and its applications in adaptive optics and surveillance systems

    NASA Astrophysics Data System (ADS)

    Eslami, Mohammed Ali

    An image can only be as good as the optics of a camera or any other imaging system allows it to be. An imaging system is merely a transformation that takes a 3D world coordinate to a 2D image plane. This can be done through both linear and non-linear transfer functions. Depending on the application at hand, it is easier to use some models of imaging systems than others. The most well-known models are the (1) pinhole model, (2) thin lens model and (3) thick lens model for optical systems. Using light-field analysis, the connection between these different models is described. A novel figure of merit is presented for choosing one optical model over another for certain applications. After analyzing these optical systems, their applications in plenoptic cameras for adaptive optics are introduced. A new technique to use a plenoptic camera to extract information about a localized distorted planar wavefront is described. CODEV simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual PTZ surveillance system to track a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the camera is moved to force the target back into the region of interest. Once the master camera is moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras. The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets as they move through space with high accuracy.
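
    The master-slave relationship can be viewed as a 3x3 projective (homography-style) mapping from master-camera pixels to slave pan/tilt commands; a hedged sketch of applying such a mapping once it has been interpolated from the precalibrated lookup table (names assumed, not the thesis implementation):

      import numpy as np

      def master_pixel_to_slave_pan_tilt(H, u, v):
          """Apply a 3x3 projective mapping H to a master pixel (u, v),
          returning the interpolated slave (pan, tilt) command."""
          w = H @ np.array([u, v, 1.0])
          return w[0] / w[2], w[1] / w[2]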

  11. Target-based calibration method for multifields of view measurement using multiple stereo digital image correlation systems

    NASA Astrophysics Data System (ADS)

    Dong, Shuai; Yu, Shanshan; Huang, Zheng; Song, Shoutan; Shao, Xinxing; Kang, Xin; He, Xiaoyuan

    2017-12-01

    Multiple digital image correlation (DIC) systems can enlarge the measurement field without losing effective resolution in the area of interest (AOI). However, the results calculated in substereo DIC systems are located in its local coordinate system in most cases. To stitch the data obtained by each individual system, a data merging algorithm is presented in this paper for global measurement of multiple stereo DIC systems. A set of encoded targets is employed to assist the extrinsic calibration, of which the three-dimensional (3-D) coordinates are reconstructed via digital close range photogrammetry. Combining the 3-D targets with precalibrated intrinsic parameters of all cameras, the extrinsic calibration is significantly simplified. After calculating in substereo DIC systems, all data can be merged into a universal coordinate system based on the extrinsic calibration. Four stereo DIC systems are applied to a four point bending experiment of a steel reinforced concrete beam structure. Results demonstrate high accuracy for the displacement data merging in the overlapping field of views (FOVs) and show feasibility for the distributed FOVs measurement.

  12. Technical and instrumental prerequisites for single-port laparoscopic solo surgery: state of art.

    PubMed

    Kim, Say-June; Lee, Sang Chul

    2015-04-21

    With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of a human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides fixed and stable operative images that are under the control of the operator. Therefore, SPSS primarily benefits from the provision of the operator's eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS necessitates more actions than the surgery with a human assistant, these difficulties seem to be easily overcome by the greater provision of static operative images and the need for less lens cleaning and repositioning of the camera. When the operation is expected to be difficult and demanding, the SPSS process could be assisted by the addition of another instrument holder besides the camera holder.

  13. X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.

    PubMed

    Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young

    2016-04-01

    In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position of the patient. However, frequently used fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information of surgical instruments and the target position. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems have the same optical axis and are calibrated simultaneously. This provides easy augmentation of the camera image and the X-ray image. Further, the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and the X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation of the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in 3D space with minimum radiation exposure and to verify whether the instruments reach the surgical target observed in fluoroscopic images.

  14. Multisensory System for Fruit Harvesting Robots. Experimental Testing in Natural Scenarios and with Different Kinds of Crops

    PubMed Central

    Fernández, Roemi; Salinas, Carlota; Montes, Héctor; Sarria, Javier

    2014-01-01

    The motivation of this research was to explore the feasibility of detecting and locating fruits from different kinds of crops in natural scenarios. To this end, a unique, modular and easily adaptable multisensory system and a set of associated pre-processing algorithms are proposed. The offered multisensory rig combines a high resolution colour camera and a multispectral system for the detection of fruits, as well as for the discrimination of the different elements of the plants, and a Time-Of-Flight (TOF) camera that provides fast acquisition of distances enabling the localisation of the targets in the coordinate space. A controlled lighting system completes the set-up, increasing its flexibility for being used in different working conditions. The pre-processing algorithms designed for the proposed multisensory system include a pixel-based classification algorithm that labels areas of interest that belong to fruits and a registration algorithm that combines the results of the aforementioned classification algorithm with the data provided by the TOF camera for the 3D reconstruction of the desired regions. Several experimental tests have been carried out in outdoors conditions in order to validate the capabilities of the proposed system. PMID:25615730

  15. Deviation rectification for dynamic measurement of rail wear based on coordinate sets projection

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Ma, Ziji; Li, Yanfu; Zeng, Jiuzhen; Jin, Tan; Liu, Hongli

    2017-10-01

    Dynamic measurement of rail wear using a laser imaging system suffers from random vibrations in the laser-based imaging sensor which cause distorted rail profiles. In this paper, a simple and effective method for rectifying profile deviation is presented to address this issue. There are two main steps: profile recognition and distortion calibration. According to the constant camera and projector parameters, efficient recognition of measured profiles is achieved by analyzing the geometric difference between normal profiles and distorted ones. For a distorted profile, by constructing coordinate sets projecting from it to the standard one on triple projecting primitives, including the rail head inner line, rail waist curve and rail jaw, iterative extrinsic camera parameter self-compensation is implemented. The distortion is calibrated by projecting the distorted profile onto the x-y plane of a measuring coordinate frame, which is parallel to the rail cross section, to eliminate the influence of random vibrations in the laser-based imaging sensor. As well as evaluating the implementation with comprehensive experiments, we also compare our method with other published works. The results exhibit the effectiveness and superiority of our method for the dynamic measurement of rail wear.

  16. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    NASA Astrophysics Data System (ADS)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    The possibility for testing and calibration of industrial robots becomes more and more important for manufacturers and users of such systems. Exacting applications in connection with off-line programming techniques or the use of robots as measuring machines are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA, a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows automatic measurement of a large number of robot poses with high accuracy.

  17. Location precision analysis of stereo thermal anti-sniper detection system

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Lu, Ya; Zhang, Xiaoyan; Jin, Weiqi

    2012-06-01

    Anti-sniper detection devices are an urgent requirement in modern warfare. The precision of the anti-sniper detection system is especially important. This paper discusses the location precision analysis of an anti-sniper detection system based on a dual thermal imaging system. It mainly discusses the two aspects that produce the error: the digital quantization effects of the camera, and the effect of estimating the coordinates of the bullet trajectory from the infrared images in the process of image matching. The formula for the error analysis is deduced from the stereovision model and the digital quantization effects of the camera. From this, we can obtain the relationship between the detection accuracy and the system's parameters. The analysis in this paper provides the theoretical basis for the error compensation algorithms which are put forward to improve the accuracy of 3D reconstruction of the bullet trajectory in anti-sniper detection devices.
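
    For a parallel-axis stereo rig, a standard first-order error-propagation result (quoted here for orientation, not necessarily the exact model used by the authors) is that a disparity error δd maps into a range error of roughly δZ ≈ (Z² / (f · B)) · δd, where Z is the target range, f the focal length in pixels and B the stereo baseline; the quadratic growth with range explains why quantization and matching errors dominate the location precision at long distances.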

  18. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-09-03

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments.
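
    The compensation step can be pictured as a trilinear lookup into the pre-built 3D error map; the sketch below (grid layout and names are assumptions, not the published code) interpolates an error vector at an arbitrary point inside the calibrated volume:

      import numpy as np

      def trilinear_error(error_map, origin, spacing, xyz):
          """Trilinearly interpolate a vector-valued error map.

          error_map -- array of shape (nx, ny, nz, 2) holding (du, dv) at grid nodes
          origin    -- (x0, y0, z0) of grid node [0, 0, 0]
          spacing   -- (dx, dy, dz) grid step sizes
          xyz       -- query point inside the calibrated volume
          """
          g = (np.asarray(xyz, float) - np.asarray(origin, float)) / np.asarray(spacing, float)
          i0 = np.floor(g).astype(int)
          i0 = np.clip(i0, 0, np.array(error_map.shape[:3]) - 2)
          f = g - i0                                  # fractional position in the cell
          out = np.zeros(error_map.shape[3])
          for corner in range(8):
              o = np.array([(corner >> k) & 1 for k in range(3)])
              w = np.prod(np.where(o == 1, f, 1.0 - f))   # trilinear weight
              out += w * error_map[tuple(i0 + o)]
          return out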

  19. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments. PMID:27598174

  20. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    NASA Astrophysics Data System (ADS)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

    Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise are investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool-use, and infants during early cognitive development.

  1. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovéenbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    A Digital Elevation Model (DEM) is a key tool for analyzing spatially dependent processes, including snow accumulation on slopes or glacier mass balance. Acquiring DEMs within short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure taken from various points of view are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints of Ground Control Points (GCPs) or by including the GPS positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called Micmac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. Although the harsh climatic environment of the Arctic region considered (Ny Alesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows are a limitation because of the lack of color dynamics in the digital cameras we used. A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty on the absolute position of the point cloud in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.

  2. Mapping gray-scale image to 3D surface scanning data by ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jones, Peter R. M.

    1997-03-01

    The extraction and location of feature points from range imaging is an important but difficult task in machine-vision-based measurement systems. Some feature points cannot be detected from purely geometric characteristics, particularly in measurement tasks related to the human body. The Loughborough Anthropometric Shadow Scanner (LASS) is a whole-body surface scanner based on the structured light technique. Certain applications of LASS require accurate location of anthropometric landmarks from the scanned data. This is sometimes impossible from the existing raw data because some landmarks do not appear in the scanned data. Identification of these landmarks has to rely on the surface texture of the scanned object. Modifications to LASS were made to allow gray-scale images to be captured before or after the object was scanned. The two-dimensional gray-scale image must be mapped to the scanned data to acquire the 3D coordinates of a landmark. The method to map 2D images to the scanned data is based on the collinearity conditions and a ray-tracing method. If the camera center and the image coordinates are known, the corresponding object point must lie on a ray starting from the camera center and passing through the image point. By intersecting the ray with the scanned surface of the object, the 3D coordinates of the point can be solved for. Experimentation has demonstrated the feasibility of the method.
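
    A simplified sketch of the ray-tracing mapping: the pixel is back-projected into a ray from the camera center and intersected with a surface patch, here approximated by a single plane (the real system intersects the ray with the full scanned surface; names and conventions are assumptions):

      import numpy as np

      def backproject_to_plane(pixel, K, R, cam_center, plane_point, plane_normal):
          """Intersect the viewing ray of a pixel with a plane patch.

          K: intrinsics; R: camera-to-world rotation; cam_center: camera centre
          in world coordinates; the plane is given by a point and a normal.
          """
          uv1 = np.array([pixel[0], pixel[1], 1.0])
          ray_dir = R @ np.linalg.solve(K, uv1)        # ray direction in world frame
          denom = np.dot(plane_normal, ray_dir)
          if abs(denom) < 1e-9:
              raise ValueError("ray is parallel to the surface patch")
          t = np.dot(plane_normal, np.asarray(plane_point) - cam_center) / denom
          return cam_center + t * ray_dir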

  3. A global station coordinate solution based upon camera and laser data - GSFC 1973

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Douglas, B. C.; Klosko, S. M.

    1973-01-01

    Results for the geocentric coordinates of 72 globally distributed satellite tracking stations consisting of 58 cameras and 14 lasers are presented. The observational data for this solution consists of over 65,000 optical observations and more than 350 laser passes recorded during the National Geodetic Satellite Program, the 1968 Centre National d'Etudes Spatiales/Smithsonian Astrophysical Observatory (SAO) Program, and International Satellite Geodesy Experiment Program. Dynamic methods were used. The data were analyzed with the GSFC GEM and SAO 1969 Standard Earth Gravity Models. The recent value of GM = 398600.8 km^3/s^2 derived at the Jet Propulsion Laboratory (JPL) gave the best results for this combination laser/optical solution. Solutions are made with the deep space solution of JPL (LS-25 solution) including results obtained at GSFC from Mariner-9 Unified B-Band tracking. Datum transformation parameters relating North America, Europe, South America, and Australia are given, enabling the positions of some 200 other tracking stations to be placed in the geocentric system.

  4. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.

  5. Multi-camera digital image correlation method with distributed fields of view

    NASA Astrophysics Data System (ADS)

    Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata

    2017-11-01

    A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with the individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between the local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements, made from multiple directions, of the shape and displacements of large and complex engineering objects, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
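
    Merging each local 3D DIC result into the global frame amounts to estimating a rigid transformation from fiducial-marker coordinates known in both systems; a compact sketch of the standard SVD-based (Kabsch) solution, offered as an illustration rather than the authors' implementation:

      import numpy as np

      def rigid_transform(local_pts, global_pts):
          """Least-squares rotation R and translation t with global ~ R @ local + t."""
          P = np.asarray(local_pts, dtype=float)
          Q = np.asarray(global_pts, dtype=float)
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
          R = Vt.T @ D @ U.T
          t = Q.mean(axis=0) - R @ P.mean(axis=0)
          return R, t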

  6. Augmented reality image guidance for minimally invasive coronary artery bypass

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Rueckert, Daniel; Hawkes, David; Casula, Roberto; Hu, Mingxing; Pedro, Ose; Zhang, Dong Ping; Penney, Graeme; Bello, Fernando; Edwards, Philip

    2008-03-01

    We propose a novel system for image guidance in totally endoscopic coronary artery bypass (TECAB). A key requirement is the availability of 2D-3D registration techniques that can deal with non-rigid motion and deformation. Image guidance for TECAB is mainly required before the mechanical stabilization of the heart, thus the most dominant source of non-rigid deformation is the motion of the beating heart. To augment the images in the endoscope of the da Vinci robot, we have to find the transformation from the coordinate system of the preoperative imaging modality to the system of the endoscopic cameras. In a first step we build a 4D motion model of the beating heart. Intraoperatively we can use the ECG or video processing to determine the phase of the cardiac cycle. We can then take the heart surface from the motion model and register it to the stereo-endoscopic images of the da Vinci robot using 2D-3D registration methods. We are investigating robust feature tracking and intensity-based methods for this purpose. Images of the vessels available in the preoperative coordinate system can then be transformed to the camera system and projected into the calibrated endoscope view using two video mixers with chroma keying. It is hoped that the augmented view can improve the efficiency of TECAB surgery and reduce the conversion rate to more conventional procedures.

  7. Stereoscopic determination of all-sky altitude map of aurora using two ground-based Nikon DSLR cameras

    NASA Astrophysics Data System (ADS)

    Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.

    2013-09-01

    A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height from the maximum correlation of the apparent patterns in the localized pixels, applying a geographic coordinate transform method. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution of less than 100 km. Because of the portability and low cost of the DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute complementarily to aurora science and potentially form a dense observation network.

  8. Distinction of Green Sweet Peppers by Using Various Color Space Models and Computation of 3 Dimensional Location Coordinates of Recognized Green Sweet Peppers Based on Parallel Stereovision System

    NASA Astrophysics Data System (ADS)

    Bachche, Shivaji; Oka, Koichi

    2013-06-01

    This paper presents a comparative study of various color space models to determine the suitable color space model for the detection of green sweet peppers. The images were captured by using CCD cameras and infrared cameras and processed by using the Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For color images, the CieLab, YIQ, YUV, HSI and HSV color space models were selected for image processing, whereas grayscale was used for infrared images. In the case of color images, the HSV color space model was found to be the most effective, with a high percentage of green sweet pepper detection, followed by the HSI color space model, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating the fruit from the image at a specific threshold value. Overlapping fruits or fruits covered by leaves can be detected better by using the HSV color space model, as the reflection feature from the fruits had a higher histogram response than the reflection feature from the leaves. The IR 80 optical filter failed to distinguish fruits in the images, as the filter blocks useful feature information. Computation of the 3D coordinates of the recognized green sweet peppers was also conducted, in which the Halcon image processing software provided the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined; a distance of 500 to 600 mm between the cameras and the fruits was found suitable for computing the depth precisely when the distance between the two cameras was maintained at 100 mm.
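
    For a parallel stereo rig such as the one described, the depth of a matched fruit centroid follows the usual triangulation relation Z = f · B / d; a small sketch (parameter names assumed, baseline defaulting to the 100 mm mentioned above):

      def stereo_depth_mm(disparity_px, focal_px, baseline_mm=100.0):
          """Depth from a parallel stereo pair: Z = f * B / d."""
          if disparity_px <= 0:
              raise ValueError("disparity must be positive for a point in front of the rig")
          return focal_px * baseline_mm / disparity_px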

  9. Polymorphic robotic system controlled by an observing camera

    NASA Astrophysics Data System (ADS)

    Koçer, Bilge; Yüksel, Tugçe; Yümer, M. Ersin; Özen, C. Alper; Yaman, Ulas

    2010-02-01

    Polymorphic robotic systems, which are composed of many modular robots that act in coordination to achieve a goal defined on the system level, have been drawing attention of industrial and research communities since they bring additional flexibility in many applications. This paper introduces a new polymorphic robotic system, in which the detection and control of the modules are attained by a stationary observing camera. The modules do not have any sensory equipment for positioning or detecting each other. They are self-powered, geared with means of wireless communication and locking mechanisms, and are marked to enable the image processing algorithm detect the position and orientation of each of them in a two dimensional space. Since the system does not depend on the modules for positioning and commanding others, in a circumstance where one or more of the modules malfunction, the system will be able to continue operating with the rest of the modules. Moreover, to enhance the compatibility and robustness of the system under different illumination conditions, stationary reference markers are employed together with global positioning markers, and an adaptive filtering parameter decision methodology is enclosed. To the best of authors' knowledge, this is the first study to introduce a remote camera observer to control modules of a polymorphic robotic system.

  10. Coordinates of anthropogenic features on the Moon

    NASA Astrophysics Data System (ADS)

    Wagner, R. V.; Nelson, D. M.; Plescia, J. B.; Robinson, M. S.; Speyerer, E. J.; Mazarico, E.

    2017-02-01

    High-resolution images from the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) reveal the landing locations of recent and historic spacecraft and associated impact sites across the lunar surface. Using multiple images of each site acquired between 2009 and 2015, an improved Lunar Reconnaissance Orbiter (LRO) ephemeris, and a temperature-dependent camera orientation model, we derived accurate coordinates (<12 m) for each soft-landed spacecraft, rover, deployed scientific payload, and spacecraft impact crater that we have identified. Accurate coordinates enhance the scientific interpretations of data returned by the surface instruments and of returned samples of the Apollo and Luna sites. In addition, knowledge of the sizes and positions of craters formed as the result of impacting spacecraft provides key benchmarks into the relationship between energy and crater size, as well as calibration points for reanalyzing seismic measurements acquired during the Apollo program. We identified the impact craters for the three spacecraft that impacted the surface during the LRO mission by comparing before and after NAC images.

  11. Coordinates of Anthropogenic Features on the Moon

    NASA Technical Reports Server (NTRS)

    Wagner, R. V.; Nelson, D. M.; Plescia, J. B.; Robinson, M. S.; Speyerer, E. J.; Mazarico, E.

    2016-01-01

    High-resolution images from the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) reveal the landing locations of recent and historic spacecraft and associated impact sites across the lunar surface. Using multiple images of each site acquired between 2009 and 2015, an improved Lunar Reconnaissance Orbiter (LRO) ephemeris, and a temperature-dependent camera orientation model, we derived accurate coordinates ( less than 12 meters) for each soft-landed spacecraft, rover, deployed scientific payload, and spacecraft impact crater that we have identified. Accurate coordinates enhance the scientific interpretations of data returned by the surface instruments and of returned samples of the Apollo and Luna sites. In addition, knowledge of the sizes and positions of craters formed as the result of impacting spacecraft provides key benchmarks into the relationship between energy and crater size, as well as calibration points for reanalyzing seismic measurements acquired during the Apollo program. We identified the impact craters for the three spacecraft that impacted the surface during the LRO mission by comparing before and after NAC images.

  12. Optical fixing the positions of the off-shore objects applying the method of two reference points

    NASA Astrophysics Data System (ADS)

    Naus, Krzysztof; Szulc, Dariusz

    2014-06-01

    The paper presents an optical method for fixing the positions of off-shore objects from land. The method is based on two reference points with known geographical coordinates. The first point was situated high on the sea shore, where the camera was also installed. The second point, situated at the water level, was used to determine the topocentric horizon plane. The first section of the paper defines the space and the reference systems disposed within it: one connected with the Earth, one with the water level and one with the camera. The second section describes the survey system model and the principles of transforming the Charge Coupled Device (CCD) array pixel coordinates (plate coordinates) into the geographic coordinates of a point located on the water level. The final section presents the general rules for using the developed method in an optical system.

  13. General theory of remote gaze estimation using the pupil center and corneal reflections.

    PubMed

    Guestrin, Elias Daniel; Eizenman, Moshe

    2006-06-01

    This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.

  14. Technical and instrumental prerequisites for single-port laparoscopic solo surgery: State of art

    PubMed Central

    Kim, Say-June; Lee, Sang Chul

    2015-01-01

    With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of a human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides fixed and stable operative images that are under the control of the operator. Therefore, SPSS primarily benefits from the provision of the operator’s eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS necessitates more actions than the surgery with a human assistant, these difficulties seem to be easily overcome by the greater provision of static operative images and the need for less lens cleaning and repositioning of the camera. When the operation is expected to be difficult and demanding, the SPSS process could be assisted by the addition of another instrument holder besides the camera holder. PMID:25914453

  15. A multiscale video system for studying an optical phenomena during active experiments in the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Nikolashkin, S. V.; Reshetnikov, A. A.

    2017-11-01

    A video surveillance system for active rocket experiments at the Polar Geophysical Observatory "Tixie" and for studies of the effects of "Soyuz" vehicle launches from the "Vostochny" cosmodrome over the territory of the Republic of Sakha (Yakutia) is presented. The system consists of three AHD video cameras with different angles of view mounted on a common platform on a tripod that allows manual guiding. The main camera, with a high-sensitivity black-and-white SONY EXview HAD II CCD matrix, is equipped, depending on the task, with an "MTO-1000" (F = 1000 mm) or "Jupiter-21M" (F = 300 mm) lens and is designed for detailed imaging of luminous formations. The second camera, of the same type but with a 30 degree angle of view, is intended for imaging the general scene and large objects, and also for referencing object coordinates to the stars. The third camera, a color wide-angle camera (120 degrees), is used for referencing to landmarks in the daytime; the optical axis of this channel is directed 60 degrees downward. The data are recorded on the hard disk of a four-channel digital video recorder. Tests of the original two-channel version of the system were conducted during the launch of a geophysical rocket in Tixie in September 2015 and demonstrated its effectiveness.

  16. Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-04-01

    3D terrain relief acquisition is important for a large part of the geosciences. Several methods have been developed to digitize terrain, such as total stations, LiDAR, GNSS or photogrammetry. To digitize road (or rail track) sides over long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method, the on-motion SfM technique, with traditional terrain digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The on-motion SfM technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3 Mpx non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the on-motion SfM technique are compared with results from classical SfM photogrammetry on a 500 meter long alpine track. They were also compared with mobile laser scanning data on the same road section. First results indicate that slope structures are well observable up to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy. There is indeed a Z coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera. This makes it necessary to give greater freedom to the altimetric coordinates in the processing software. Benefits of this low-cost on-motion SfM method are: 1) a simple setup to use in the field (easy to switch between vehicle types such as car, train, bike, etc.), 2) low cost and 3) automatic georeferencing of the 3D point clouds. The main disadvantages are: 1) results are less accurate than those from a LiDAR system, 2) heavy image processing and 3) a short acquisition distance.

  17. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capturing) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  18. Calculation of 3D Coordinates of a Point on the Basis of a Stereoscopic System

    NASA Astrophysics Data System (ADS)

    Mussabayev, R. R.; Kalimoldayev, M. N.; Amirgaliyev, Ye. N.; Tairova, A. T.; Mussabayev, T. R.

    2018-05-01

    The task of calculating the three-dimensional (3D) coordinates of a material point is considered. Two flat images (a stereopair) corresponding to the left and right viewpoints of a 3D scene are used for this purpose. The stereopair is obtained using two cameras with parallel optical axes. Analytical formulas for calculating the 3D coordinates of a material point in the scene were derived from an analysis of the optical and geometrical schemes of the stereoscopic system. The algorithmic and hardware realization of the method is discussed in detail, and a practical procedure is recommended for determining the unknown parameters of the optical system. A series of experimental investigations was conducted to verify the theoretical results. In these experiments, minor inaccuracies arose from spatial distortions in the optical system and from its discreteness. With a high-quality stereoscopic system, the remaining calculation inaccuracy is small enough to allow the method to be applied to a wide range of practical tasks.
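
    For a parallel-optical-axis stereopair like the one described above, the standard triangulation formulas can be written as the short sketch below. It is a generic illustration under an ideal pin-hole model, not a transcription of the paper's derivation, and the variable names are assumptions.

        import numpy as np

        def triangulate_parallel_stereo(xl, yl, xr, focal_px, baseline):
            """3D point from a stereopair taken by two cameras with parallel
            optical axes. (xl, yl) are left-image coordinates and xr is the
            right-image x coordinate, all relative to each principal point and
            in pixels; baseline is the distance between the camera centres."""
            d = xl - xr                      # disparity
            Z = focal_px * baseline / d      # depth along the optical axis
            X = xl * Z / focal_px            # lateral offsets, left-camera frame
            Y = yl * Z / focal_px
            return np.array([X, Y, Z])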

  19. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
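
    The two central matching steps of the pipeline above (estimating the fundamental matrix from SIFT matches with RANSAC, then assigning a chessboard node to the right-image node closest to its epipolar line) could look roughly like the following OpenCV sketch; it is a simplified illustration, not the authors' toolbox code, and expects grayscale input images.

        import cv2
        import numpy as np

        def fundamental_from_sift(gray_left, gray_right):
            # SIFT matches on a textured scene, filtered by a ratio test and
            # RANSAC, yield the fundamental matrix of the stereo-camera.
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(gray_left, None)
            k2, d2 = sift.detectAndCompute(gray_right, None)
            matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            pts1 = np.float32([k1[m.queryIdx].pt for m in good])
            pts2 = np.float32([k2[m.trainIdx].pt for m in good])
            F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0)
            return F

        def match_node_by_epipolar_line(node_left, nodes_right, F):
            # The right-image partner of a left-image node is taken as the
            # detected node closest to its epipolar line l' = F x.
            x = np.array([node_left[0], node_left[1], 1.0])
            a, b, c = F @ x
            dists = [abs(a * u + b * v + c) / np.hypot(a, b) for u, v in nodes_right]
            return nodes_right[int(np.argmin(dists))]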

  20. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of CE-1 and CE-2 CCD cameras based on push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinate of a ground point in lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) are automatically generated.
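
    Once the two projection rays of a pair of conjugate points are known, space intersection reduces to a least-squares intersection of two 3D rays. The sketch below shows only that generic step; the construction of the rays from the push-broom sensor model and orientation parameters is specific to CE-1/CE-2 and is not reproduced here.

        import numpy as np

        def intersect_rays(p1, d1, p2, d2):
            """Least-squares intersection of two rays p_i + t_i * d_i (e.g. the
            projection rays of conjugate image points). Returns the midpoint of
            the shortest segment joining the two rays."""
            d1 = d1 / np.linalg.norm(d1)
            d2 = d2 / np.linalg.norm(d2)
            # Solve [d1 -d2] [t1 t2]^T = p2 - p1 in the least-squares sense.
            A = np.stack([d1, -d2], axis=1)
            t1, t2 = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
            return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))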

  1. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address high-precision attitude measurement of flying targets over a large space and a large field of view, and after comparing existing measurement methods, a system is proposed in which two array CCDs assist in identification for a three-linear-CCD, multi-cooperative-target attitude measurement system. The design also addresses the nonlinear system errors, the large number of calibration parameters and the overly complicated constraints among camera positions of the existing nine-linear-CCD spectroscopic test system. Mathematical models of the binocular vision system and of the three-linear-CCD test system are established. Three red LED position lights form a cooperative-spot triangle whose vertex coordinates are given in advance by a coordinate measuring machine; three blue LED light points are added along the sides of the triangle as auxiliaries so that the array CCDs can more easily identify the three red LED light points, and a red filter is installed on the linear CCD cameras to block the blue LED light points while reducing stray light. The array CCDs measure the spots and identify and calculate the spatial coordinates of the red LED light points, while the linear CCDs measure the three red LED spots to solve the linear-CCD test system, from which 27 solutions can be drawn. With the coordinates measured by the array CCDs as an aid, spot identification for the linear CCDs is achieved, solving the difficult problem of multi-target identification with linear CCDs. Exploiting the imaging features of linear CCDs, a special cylindrical lens system for the linear CCDs is developed using a telecentric optical design, so that the energy center of the spot position changes little within the depth-of-convergence range in the direction perpendicular to the optical axis, ensuring high-precision image quality; the entire test system improves the speed and precision of spatial object attitude measurement.

  2. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined with its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration is performed with a perspective geometrical model by means of the bundle adjustment. In this study, a support vector machine (SVM) with a radial basis function kernel is employed to model the distortions measured for the Olympus E10 camera system with its aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for correcting image coordinates by modelling the total distortions of the on-the-job calibration process using a limited number of images.
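
    A minimal sketch of the idea, assuming scikit-learn's SVR as the regressor: two RBF-kernel support vector machines learn the x and y distortion corrections as functions of image position, and measured points are corrected by subtracting the predicted distortion. The hyperparameters and the sign convention are assumptions for the example, not values from the study.

        import numpy as np
        from sklearn.svm import SVR

        def fit_distortion_model(image_xy, residual_dx, residual_dy):
            # Two RBF-kernel regressors map an image position (n x 2 array) to
            # the distortion measured at that position (e.g. bundle-adjustment
            # residuals); C and epsilon below are illustrative.
            svr_x = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(image_xy, residual_dx)
            svr_y = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(image_xy, residual_dy)
            return svr_x, svr_y

        def correct_points(image_xy, svr_x, svr_y):
            xy = np.asarray(image_xy, dtype=float)
            correction = np.column_stack([svr_x.predict(xy), svr_y.predict(xy)])
            return xy - correction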

  3. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed then post-processed. For NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTF's of both displays have a larger negative slope than that of the vertical MTF's. This behavior indicates that the horizontal MTF's are poorer than the vertical MTF's. However the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
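
    As a rough sketch of the MTF step described above (a displayed line imaged by the camera, Fourier transformed and normalized), assuming the line is vertical and ignoring the instrument's colour-matrix pre-processing and post-processing:

        import numpy as np

        def mtf_from_line_image(line_image, pixel_pitch_mm):
            """MTF estimate from an image of a single displayed vertical line.
            Rows are averaged into a line spread function (LSF); its normalized
            Fourier magnitude is the MTF. Returns spatial frequency (cy/mm), MTF."""
            lsf = line_image.astype(float).mean(axis=0)
            lsf -= lsf.min()                        # crude background removal
            spectrum = np.abs(np.fft.rfft(lsf))
            mtf = spectrum / spectrum[0]            # normalize to 1 at zero frequency
            freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
            return freq, mtf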

  4. Calibration and verification of thermographic cameras for geometric measurements

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

    Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrology calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usually applied only to the thermal data. Black bodies are used for these purposes. Moreover, the geometric information is important for many fields such as architecture, civil engineering and industry. This work presents a calibration procedure that allows the photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of the companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five Delrin spheres and seven cubes of different sizes. Metrology traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material property because both the thermographic and visible cameras are able to detect it. Two thermographic cameras from the Flir and Nec manufacturers, and one visible camera from Jai, are calibrated, verified and compared using calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher for the visible camera, and the geometric comparison between the thermographic cameras shows slightly better results for the Nec camera.

  5. Multiplexed time-lapse photomicrography of cultured cells.

    PubMed

    Heye, R R; Kiebler, E W; Arnzen, R J; Tolmach, L J

    1982-01-01

    A system of cinemicrography has been developed in which a single microscope and 16 mm camera are multiplexed to produce a time-lapse photographic record of many fields simultaneously. The field coordinates and focus are selected via a control console and entered into the memory of a dedicated microcomputer; they are then automatically recalled in sequence, thus permitting the photographing of additional fields in the interval between exposures of any given field. Sequential exposures of each field are isolated in separate sections of the film by means of a specially designed random-access camera that is also controlled by the microcomputer. The need to unscramble frames is thereby avoided, and the developed film can be directly analysed.

  6. Close Range Calibration of Long Focal Length Lenses in a Changing Environment

    NASA Astrophysics Data System (ADS)

    Robson, Stuart; MacDonald, Lindsay; Kyle, Stephen; Shortis, Mark R.

    2016-06-01

    University College London is currently developing a large-scale multi-camera system for dimensional control tasks in manufacturing, including part machining, assembly and tracking, as part of the Light Controlled Factory project funded by the UK Engineering and Physical Science Research Council. In parallel, as part of the EU LUMINAR project funded by the European Association of National Metrology Institutes, refraction models of the atmosphere in factory environments are being developed with the intent of modelling and eliminating the effects of temperature and other variations. The accuracy requirements for both projects are extremely demanding, so accordingly improvements in the modelling of both camera imaging and the measurement environment are essential. At the junction of these two projects lies close range camera calibration. The accurate and reliable calibration of cameras across a realistic range of atmospheric conditions in the factory environment is vital in order to eliminate systematic errors. This paper demonstrates the challenge of experimentally isolating environmental effects at the level of a few tens of microns. Longer lines of sight promote the use and calibration of a near perfect perspective projection from a Kern 75 mm lens with maximum radial distortion of the order of 0.5 μm. Coordination of a reference target array, representing a manufactured part, is achieved to better than 0.1 mm at a standoff of 8 m. More widely, results contribute to better sensor understanding, improved mathematical modelling of factory environments and more reliable coordination of targets to 0.1 mm and better over large volumes.

  7. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized by the reprojection error. However, for a system designed for 3D optical measurement, this error does not denote the result of 3D reconstruction. In the presented method, a planar calibration plate is used. In the beginning, images of calibration plate are snapped from several orientations in the measurement range. The initial parameters of the two cameras are obtained by the images. Then, the rotation and translation matrix that link the frames of two cameras are calculated by using method of Centroid Distance Increment Matrix. The degree of coupling between the parameters is reduced. Then, 3D coordinates of the calibration points are reconstructed by space intersection method. At last, the reconstruction error is calculated. It is minimized to optimize the calibration parameters. This error directly indicates the efficiency of 3D reconstruction, thus it is more suitable for assessing the quality of dual-camera calibration. In the experiments, it can be seen that the proposed method is convenient and accurate. There is no strict requirement on the calibration plate position in the calibration process. The accuracy is improved significantly by the proposed method.
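
    The paper's Centroid Distance Increment Matrix method is not reproduced here; as a plainly-labeled stand-in, the sketch below estimates the rotation and translation linking the two camera frames with the standard SVD (Kabsch) alignment, given the same calibration points expressed in both frames.

        import numpy as np

        def rigid_transform_svd(points_cam1, points_cam2):
            """Estimate R, t such that points_cam2 ~ R @ points_cam1 + t from
            corresponding 3D points expressed in the two camera frames."""
            P = np.asarray(points_cam1, dtype=float)
            Q = np.asarray(points_cam2, dtype=float)
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = cq - R @ cp
            return R, t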

  8. Real-time detection and data acquisition system for the left ventricular outline. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Reiber, J. H. C.

    1976-01-01

    To automate the data acquisition procedure, a real-time contour detection and data acquisition system for the left ventricular outline was developed using video techniques. The X-ray image of the contrast-filled left ventricle is stored for subsequent processing on film (cineangiogram), video tape or disc. The cineangiogram is converted into video format using a television camera. The video signal from either the TV camera, video tape or disc is the input signal to the system. The contour detection is based on a dynamic thresholding technique. Since the left ventricular outline is a smooth continuous function, for each contour side a narrow expectation window is defined in which the next borderpoint will be detected. A computer interface was designed and built for the online acquisition of the coordinates using a PDP-12 computer. The advantage of this system over other available systems is its potential for online, real-time acquisition of the left ventricular size and shape during angiocardiography.

  9. Three-dimensional digitizer for the footwear industry

    NASA Astrophysics Data System (ADS)

    Gonzalez, Francisco; Campoy, Pascual; Aracil, Rafael; Penafiel, Francisco; Sebastian, Jose M.

    1993-12-01

    This paper presents a developed system for digitizing 3D objects in the footwear industry (e.g. mould, soles, heels) and their introduction in a CAD system for further manipulation and production of rapid prototypes. The system is based on the acquisition of the sequence of images of the projection of a laser line onto the 3D object when this is moving in front of the laser beam and the camera. This beam projection lights a 3D curve on the surface of the object, whose image is processed in order to obtain the 3D coordinates of every point of mentioned curve according to a previous calibration of the system. These coordinates of points in all the curves are analyzed and combined in order to make up a 3D wire-frame model of the object, which is introduced in a CAD station for further design and connection to the machinery for rapid prototyping.

  10. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    DTIC Science & Technology

    2015-03-26

    [Search-result snippet; only text fragments of the thesis are available.] Pinhole camera model: light reflected or projected from objects in the outside-world scene is taken in by the aperture (or opening); an analog-to-digital interface converts raw images of the scene into digital information a computer can use. Figure 2.7: Digital Image Coordinate System (used with permission [30]). Angular field of view: the angle of the world scene captured by the camera.

  11. Thermal imaging measurement of lateral diffusivity and non-invasive material defect detection

    DOEpatents

    Sun, Jiangang; Deemer, Chris

    2003-01-01

    A system and method for determining lateral thermal diffusivity of a material sample using a heat pulse; a sample oriented within an orthogonal coordinate system; an infrared camera; and a computer that has a digital frame grabber, and data acquisition and processing software. The mathematical model used within the data processing software is capable of determining the lateral thermal diffusivity of a sample of finite boundaries. The system and method may also be used as a nondestructive method for detecting and locating cracks within the material sample.

  12. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding cameras of both blocks have the same trend, but as usual for block adjustments with self calibration, they still show significant differences. Based on the very high number of image points the remaining image residuals can be safely determined by overlaying and averaging the image residuals corresponding to their image coordinates. The size of the systematic image errors, not covered by the used additional parameters, is in the range of a square mean of 0.1 pixels corresponding to 0.6μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general the bundle block adjustment with a satisfying set of additional parameters, checked by remaining systematic errors, is required for use of the whole geometric potential of the penta camera. Especially for object points on facades, often only in two images and taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets the self calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior calibration due to missing crossing flight directions. As usual, the systematic image errors differ from block to block even without the influence of the correlation to the exterior orientation.

  13. Multispectral data processing from unmanned aerial vehicles: application in precision agriculture using different sensors and platforms

    NASA Astrophysics Data System (ADS)

    Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert

    2017-04-01

    Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
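
    Once the band mosaics are co-registered on a common grid, a vegetation-index map such as NDVI is a per-pixel operation. The sketch below is a generic illustration, assuming radiometrically comparable Red and NIR mosaics of identical shape; it is not part of the described workflow.

        import numpy as np

        def ndvi(red_band, nir_band):
            # NDVI = (NIR - Red) / (NIR + Red), computed per pixel on two
            # co-registered band mosaics of identical shape.
            red = red_band.astype(float)
            nir = nir_band.astype(float)
            denom = nir + red
            denom[denom == 0] = np.nan              # avoid division by zero
            return (nir - red) / denom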

  14. Implementation and Evaluation of a Mobile Mapping System Based on Integrated Range and Intensity Images for Traffic Signs Localization

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.

    2012-07-01

    Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for detection and 3D localization of various objects from a moving platform. On the other hand, automatic traffic sign recognition from an equipped mobile platform has recently been a challenging issue for both intelligent transportation and municipal database collection. However, there are several inevitable problems coherent to all the recognition methods completely relying on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Being distinct from the others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from certain algorithms to detect, recognize and localize the traffic signs by fusing the shape, color and object information from both range and intensity images. As the calibrating stage, a self-calibration method based on integrated bundle adjustment via joint setup with the digital camera is applied in this study for PMD camera calibration. As the result, an improvement of 83 % in RMS of range error and 72 % in RMS of coordinates residuals for PMD camera, over that achieved with basic calibration is realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filtering (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system along with the proposed techniques leads to 90 % true positive recognition and the average of 12 centimetres 3D positioning accuracy.

  16. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to hold back the water, construct the tunnel intake inside it, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building the cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase the construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The cameras' orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and computing the relative orientation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation offers flexibility for dynamic motion analysis, being easier and more efficient.
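
    The first approach above, a 3D conformal transformation computed from camera coordinates at two epochs, corresponds to a seven-parameter similarity transform. The sketch below estimates it with the standard SVD/Procrustes method as an illustration only; it is not the authors' implementation, and the function and variable names are assumptions.

        import numpy as np

        def conformal_3d(points_src, points_dst):
            """Seven-parameter (similarity) transform dst ~ s * R @ src + t,
            estimated from corresponding 3D points (e.g. camera positions at
            two epochs). Returns scale s, rotation R and translation t."""
            P = np.asarray(points_src, dtype=float)
            Q = np.asarray(points_dst, dtype=float)
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            P0, Q0 = P - cp, Q - cq
            U, S, Vt = np.linalg.svd(P0.T @ Q0)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                      # reflection-safe rotation
            s = np.trace(np.diag(S) @ D) / (P0 ** 2).sum()
            t = cq - s * R @ cp
            return s, R, t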

  17. STS-56 ESC Earth observation of New York City at night

    NASA Technical Reports Server (NTRS)

    1993-01-01

    STS-56 electronic still camera (ESC) Earth observation image shows New York City at night as recorded on the 64th orbit of Discovery, Orbiter Vehicle (OV) 103. The image was recorded with an image intensifier on the Hand-held, Earth-oriented, Real-time, Cooperative, User-friendly, Location-targeting and Environmental System (HERCULES). HERCULES is a device that makes it simple for shuttle crewmembers to take pictures of Earth as they merely point a modified 35mm camera and shoot any interesting feature, whose latitude and longitude are automatically determined in real-time. Center coordinates on this image are 40.665 degrees north latitude and 74.048 degrees west longitude. (1/60 second exposure). Digital file name is ESC04034.IMG.

  18. STS-56 ESC Earth observation of New Zealand (South Island)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    STS-56 electronic still camera (ESC) Earth observation image shows New Zealand (South Island) as recorded on the 45th orbit of Discovery, Orbiter Vehicle (OV) 103. Westport is easily delineated in the image, which was recorded by the Hand-held, Earth-oriented, Real-time, Cooperative, User-friendly, Location-targeting and Environmental System (HERCULES). HERCULES is a device that makes it simple for shuttle crewmembers to take pictures of Earth as they merely point a modified 35mm camera and shoot any interesting feature, whose latitude and longitude are automatically determined in real-time. Center coordinates are 41.836 degrees south latitude and 171.641 degrees east longitude. (300mm lens, no filter). Digital file name is ESC07007.IMG.

  19. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four to seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters which can limit the obtainable accuracy and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (fiber projection in second image) is calculated. Each intersection point is mapped back to the 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.

  20. Analysis of Photogrammetry Data from ISIM Mockup, June 1, 2007

    NASA Technical Reports Server (NTRS)

    Nowak, Maria; Hill, Mike

    2007-01-01

    During ground testing of the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST), the ISIM Optics group plans to use a Photogrammetry Measurement System for cryogenic calibration of specific target points on the ISIM composite structure and Science Instrument optical benches and other GSE equipment. This testing will occur in the Space Environmental Systems (SES) chamber at Goddard Space Flight Center. Close range photogrammetry is a 3 dimensional metrology system using triangulation to locate custom targets in 3 coordinates via a collection of digital photographs taken from various locations and orientations. These photos are connected using coded targets, special targets that are recognized by the software and can thus correlate the images to provide a 3 dimensional map of the targets, and scaled via well calibrated scale bars. Photogrammetry solves for the camera location and coordinates of the targets simultaneously through the bundling procedure contained in the V-STARS software.

  1. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
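
    A minimal sketch of the frame-to-frame optical-flow component described above, using OpenCV's pyramidal Lucas-Kanade tracker; the multithreading, the invariant-feature detection and the hand-based coordinate system of the paper are omitted, and the detector parameters are illustrative.

        import cv2
        import numpy as np

        def track_features(prev_gray, next_gray, prev_pts=None):
            # Detect corners when the track set is empty or thin, then follow
            # them frame-to-frame with pyramidal Lucas-Kanade optical flow.
            if prev_pts is None or len(prev_pts) < 50:
                prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                                   qualityLevel=0.01, minDistance=7)
            next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                           prev_pts, None)
            good = status.ravel() == 1
            return prev_pts[good], next_pts[good]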

  2. What you see is what you get: webcam placement influences perception and social coordination.

    PubMed

    Thomas, Laura E; Pemstein, Daniel

    2015-01-01

    Building on a well-established link between elevation and social power, we demonstrate that-when perceptual information is limited-subtle visual cues can shape people's representations of others and, in turn, alter strategic social behavior. A cue to elevation (unrelated to physical size) provided by the placement of web cameras in a video chat biased individuals' perceptions of a partner's height (Experiment 1) and shaped the extent to which they made decisions in their own self-interest: participants tended to coordinate their behavior in a manner that benefitted the preferences of a partner pictured from a low camera angle during a game of asymmetric coordination (Experiment 2). Our results suggest that people are vulnerable to the influence of a limited viewpoint when forming representations of others in a manner that shapes their strategic choices.

  3. SU-E-J-134: An Augmented-Reality Optical Imaging System for Accurate Breast Positioning During Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Malhotra, H; French, S

    Purpose: Breast radiotherapy, particularly electronic compensation, may involve large dose gradients and difficult patient positioning problems. We have developed a simple self-calibrating augmented-reality system, which assists in accurately and reproducibly positioning the patient, by displaying her live image from a single camera superimposed on the correct perspective projection of her 3D CT data. Our method requires only a standard digital camera capable of live-view mode, installed in the treatment suite at an approximately-known orientation and position (rotation R; translation T). Methods: A 10-sphere calibration jig was constructed and CT imaged to provide a 3D model. The (R,T) relating the camera to the CT coordinate system were determined by acquiring a photograph of the jig and optimizing an objective function, which compares the true image points to points calculated with a given candidate R and T geometry. Using this geometric information, 3D CT patient data, viewed from the camera's perspective, is plotted using a Matlab routine. This image data is superimposed onto the real-time patient image, acquired by the camera, and displayed using standard live-view software. This enables the therapists to view both the patient's current and desired positions, and guide the patient into assuming the correct position. The method was evaluated using an in-house developed bolus-like breast phantom, mounted on a supporting platform, which could be tilted at various angles to simulate treatment-like geometries. Results: Our system allowed breast phantom alignment, with an accuracy of about 0.5 cm and 1 ± 0.5 degree. Better resolution could be possible using a camera with higher-zoom capabilities. Conclusion: We have developed an augmented-reality system, which combines a perspective projection of a CT image with a patient's real-time optical image. This system has the potential to improve patient setup accuracy during breast radiotherapy, and could possibly be used for other disease sites as well.
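
    A hedged sketch of the pose-recovery step: given the CT-derived 3D sphere centres of the jig and their 2D positions in one photograph, a PnP solver returns the rotation and translation relating the camera to the CT frame. The abstract describes a custom objective-function optimization; OpenCV's solvePnP is offered here only as a standard equivalent and assumes the camera intrinsics are known.

        import cv2
        import numpy as np

        def camera_pose_from_jig(sphere_centers_ct, sphere_centers_px,
                                 camera_matrix, dist_coeffs=None):
            # Rotation R and translation T mapping CT coordinates into the
            # camera frame, from 3D jig points and their imaged 2D positions.
            obj = np.asarray(sphere_centers_ct, dtype=np.float32)
            img = np.asarray(sphere_centers_px, dtype=np.float32)
            ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, dist_coeffs)
            R, _ = cv2.Rodrigues(rvec)
            return R, tvec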

  4. A line-source method for aligning on-board and other pinhole SPECT systems.

    PubMed

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-01

    In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system-to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)-is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
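
    A small sketch of the projection-parameter step may help: the angle (α) and offset (ρ) of a line-source projection can be read off the peak of its Radon transform. The example below assumes scikit-image and a synthetic detector image; the real method applies this to measured pinhole projections and feeds the resulting (α, ρ) values into the nonlinear least-squares alignment model.

        # Recover the (alpha, rho) of a line in an image from its Radon transform.
        import numpy as np
        from skimage.draw import line
        from skimage.transform import radon

        det = np.zeros((256, 256))
        rr, cc = line(20, 40, 230, 200)          # synthetic "line-source projection"
        det[rr, cc] = 1.0

        theta = np.linspace(0.0, 180.0, 361)     # candidate projection angles (degrees)
        sinogram = radon(det, theta=theta, circle=False)

        # The line collapses into a sharp peak when the integration direction is
        # parallel to it; the peak location gives (rho, alpha). The exact angle
        # convention depends on the library.
        rho_idx, alpha_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
        alpha = theta[alpha_idx]
        rho = rho_idx - sinogram.shape[0] // 2   # offset relative to the detector centre
        print(f"alpha = {alpha:.1f} deg, rho = {rho} detector bins")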

  5. Video-based Mobile Mapping System Using Smartphones

    NASA Astrophysics Data System (ADS)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources for mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g. cars, airplanes, etc.). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from MMS. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, in particular the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier for non-professional users to use, since the system will automatically extract the highly overlapping frames from the video without user intervention. Results of the proposed system are presented which demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to that obtained from using separately captured images instead of video.

  6. "Ipsilateral, high, single-hand, sideways"-Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery.

    PubMed

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng

    2016-10-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery.

  7. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide a comparison of camera images with the graphics images.

  8. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  9. Application of photogrammetry to transforming PIV-acquired velocity fields to a moving-body coordinate system

    NASA Astrophysics Data System (ADS)

    Nikoueeyan, Pourya; Naughton, Jonathan

    2016-11-01

    Particle Image Velocimetry is a common choice for qualitative and quantitative characterization of unsteady flows associated with moving bodies (e.g. pitching and plunging airfoils). Characterizing the separated flow behavior is of great importance in understanding the flow physics and developing predictive reduced-order models. In most studies, the model under investigation moves within a fixed camera field-of-view, and vector fields are calculated based on this fixed coordinate system. To better characterize the genesis and evolution of vortical structures in these unsteady flows, the velocity fields need to be transformed into the moving-body frame of reference. Data converted to this coordinate system allow for a more detailed analysis of the flow field using advanced statistical tools. In this work, a pitching NACA0015 airfoil has been used to demonstrate the capability of photogrammetry for such an analysis. Photogrammetry has been used first to locate the airfoil within the image and then to determine an appropriate mask for processing the PIV data. The photogrammetry results are then further used to determine the rotation matrix that transforms the velocity fields to airfoil coordinates. Examples of the important capabilities such a process enables are discussed. P. Nikoueeyan is supported by a fellowship from the University of Wyoming's Engineering Initiative.
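
    The final transformation step described here reduces to rotating both the measurement grid and the velocity components by the instantaneous pitch angle recovered from photogrammetry. A minimal NumPy sketch, with a placeholder pitch angle and pivot location, is:

        # Rotate PIV coordinates and velocity vectors into the pitching-airfoil frame.
        import numpy as np

        def to_body_frame(x, y, u, v, pitch_deg, pivot=(0.25, 0.0)):
            """x, y, u, v are arrays in the fixed camera frame; pitch_deg is the
            instantaneous airfoil pitch angle; pivot is the pitch axis location."""
            a = np.deg2rad(pitch_deg)
            R = np.array([[np.cos(a), np.sin(a)],
                          [-np.sin(a), np.cos(a)]])          # camera frame -> body frame
            xy = np.stack([x - pivot[0], y - pivot[1]], axis=-1) @ R.T
            uv = np.stack([u, v], axis=-1) @ R.T              # velocities rotate identically
            return xy[..., 0], xy[..., 1], uv[..., 0], uv[..., 1]

        # Example: uniform streamwise flow seen by the camera, airfoil pitched 10 deg.
        x, y = np.meshgrid(np.linspace(0, 1, 3), np.linspace(-0.5, 0.5, 3))
        u, v = np.ones_like(x), np.zeros_like(x)
        xb, yb, ub, vb = to_body_frame(x, y, u, v, pitch_deg=10.0)
        print(ub[0, 0], vb[0, 0])   # chordwise and normal components in the airfoil frame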

  10. A high-resolution three-dimensional far-infrared thermal and true-color imaging system for medical applications.

    PubMed

    Cheng, Victor S; Bai, Jinfen; Chen, Yazhu

    2009-11-01

    As the needs for various kinds of body surface information are wide-ranging, we developed an imaging-sensor integrated system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured light binocular profilometer. To eliminate the emotion disturbance of the inspector caused by the intensive light projection directly into the eye from the LCD projector, we have developed a gray encoding strategy based on the optimum fringe projection layout. A self-heated checkerboard has been employed to perform the calibration of different types of cameras. Then, we have calibrated the structured light emitted by the LCD projector, which is based on the stereo-vision idea and the least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can fuse with undistorted thermal and color images. To enhance medical applications, the region-of-interest (ROI) in the temperature or color image representing the surface area of clinical interest can be located in the corresponding position in the other images through coordinate system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is significantly useful in certain disease diagnoses.

  11. Optical variability of extragalactic objects used to tie the HIPPARCOS reference frame to an extragalactic system using Hubble space telescope observations

    NASA Technical Reports Server (NTRS)

    Bozyan, Elizabeth P.; Hemenway, Paul D.; Argue, A. Noel

    1990-01-01

    Observations of a set of 89 extragalactic objects (EGOs) will be made with the Hubble Space Telescope Fine Guidance Sensors and Planetary Camera in order to link the HIPPARCOS Instrumental System to an extragalactic coordinate system. Most of the sources chosen for observation contain compact radio sources and stellarlike nuclei; 65 percent are optical variables beyond a 0.2 mag limit. To ensure proper exposure times, accurate mean magnitudes are necessary. In many cases, the average magnitudes listed in the literature were not adequate. The literature was searched for all relevant photometric information for the EGOs, and photometric parameters were derived, including mean magnitude, maximum range, and timescale of variability. This paper presents the results of that search and the parameters derived. The results will allow exposure times to be estimated such that an observed magnitude different from the tabular magnitude by 0.5 mag in either direction will not degrade the astrometric centering ability on a Planetary Camera CCD frame.

  12. a Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment

    NASA Astrophysics Data System (ADS)

    Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan

    2016-06-01

    Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Using lane marker detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect the lane marker features in perspective space and calculate the edges of lane markers in image sequences. Second, because the widths of the lane marker and the road lane are fixed under a standard structured road environment, we can automatically build a transformation matrix between perspective space and 3D space and obtain a local map in the vehicle coordinate system. In order to verify the validity of this method, we installed a smart phone in the `Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on roads in Wuhan. According to the results, we can calculate positions of lane markers that are accurate enough for the self-driving car to run smoothly on the road.
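
    Because lane and marker widths are standardized, four detected marker corners paired with their known road-plane coordinates are enough to fix the image-to-ground homography. The sketch below assumes OpenCV; the pixel corners and the 3.75 m lane width are placeholders, not values from the paper.

        # Image-to-road-plane homography from four lane-marker corner correspondences.
        import numpy as np
        import cv2

        # Corner points detected in the image (pixels): near-left, near-right, far-left, far-right.
        img_pts = np.float32([[420, 700], [860, 700], [560, 450], [720, 450]])

        # The same corners in the vehicle/road frame (metres): x lateral, y forward.
        lane_w = 3.75
        road_pts = np.float32([[0.0, 5.0], [lane_w, 5.0], [0.0, 20.0], [lane_w, 20.0]])

        H = cv2.getPerspectiveTransform(img_pts, road_pts)

        # Map any image pixel into the local map in the vehicle coordinate system.
        px = np.float32([[[640, 560]]])
        ground = cv2.perspectiveTransform(px, H)
        print("road-plane position (m):", ground.ravel())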

  13. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrates that this approach has great potential for mobile robots. PMID:27023556
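
    The "simple analytic geometry" idea can be sketched as a single ray-plane intersection: back-project the touched pixel through the calibrated camera and intersect the ray with the floor. The intrinsics, camera height, and axis conventions below are illustrative assumptions, not the paper's values.

        # Back-project a touched pixel and intersect the ray with the floor plane.
        import numpy as np

        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])          # assumed camera intrinsics
        cam_height = 0.20                        # camera height above the floor (m)

        def pixel_to_floor(u, v):
            """Floor point (X forward, Y left) seen at pixel (u, v), assuming a
            level camera whose optical axis is parallel to the floor."""
            d = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera coordinates
            if d[1] <= 0:
                raise ValueError("pixel at or above the horizon; ray misses the floor")
            t = cam_height / d[1]                # scale at which the ray reaches the floor
            return t * d[2], -t * d[0]           # forward and lateral distance (m)

        print(pixel_to_floor(400, 300))          # a touch below and right of image centre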

  14. Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera

    PubMed Central

    Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing

    2018-01-01

    The geometric calibration of a spaceborne thermal-infrared camera with a high spatial resolution and wide coverage can set benchmarks for providing an accurate geographical coordinate for the retrieval of land surface temperature. The practice of using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth can help get thermal-infrared images of a large breadth with high spatial resolutions. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model, whilst calibrating temporal system parameters and whiskbroom angle parameters. With the help of the YG-14—China’s first satellite equipped with thermal-infrared cameras of high spatial resolution—China’s Anyang Imaging and Taiyuan Imaging are used to conduct an experiment of geometric calibration and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885

  15. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrates that this approach has great potential for mobile robots.

  16. The Sydney University PAPA camera

    NASA Astrophysics Data System (ADS)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.

  17. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person s gaze in real time. The system was described in High-Speed Noninvasive Eye-Tracking System (NPO-30700) NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eyetracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eyetracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen (see figure). The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate
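
    The second algorithm's use of raw centroid coordinates in the gaze-mapping equation can be illustrated with a simple least-squares fit: calibration fixations at known screen points give the mapping coefficients, which are then applied to new pupil and corneal centroids. The quadratic form and synthetic data below are assumptions for illustration, not the exact equation used by the NASA system.

        # Fit and apply a gaze mapping from pupil/cornea centroid coordinates.
        import numpy as np

        def design_matrix(pupil, cornea):
            """Feature vector built from the raw centroid coordinates."""
            px, py = pupil[:, 0], pupil[:, 1]
            cx, cy = cornea[:, 0], cornea[:, 1]
            return np.column_stack([np.ones_like(px), px, py, cx, cy,
                                    px * py, px ** 2, py ** 2])

        # Calibration: centroids recorded while the user fixates known screen targets.
        rng = np.random.default_rng(0)
        pupil_cal = rng.uniform(100, 200, size=(9, 2))
        cornea_cal = pupil_cal + rng.uniform(-5, 5, size=(9, 2))
        screen_cal = rng.uniform(0, 1920, size=(9, 2))           # known fixation targets

        A = design_matrix(pupil_cal, cornea_cal)
        coeffs, *_ = np.linalg.lstsq(A, screen_cal, rcond=None)  # one column per screen axis

        # Runtime: map freshly measured centroids to a gaze point on the display.
        gaze = design_matrix(pupil_cal[:1], cornea_cal[:1]) @ coeffs
        print("estimated gaze point (pixels):", gaze.ravel())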

  18. Image quality evaluation of color displays using a Fovean color camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro

    2007-03-01

    This paper presents preliminary data on the use of a color camera for the evaluation of Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with that of a monochrome LCD. The color camera is a C-MOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used mostly was 12 × 12 camera pixels per display pixel even though it appears that an imaging geometry of 17.6 might provide results which are more accurate. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine chromaticity of the color LCD at different sections of the display. After the color calibration with a CS-200 colorimeter the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. Modulation Transfer Function (MTF) as well as Noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions are larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.

  19. Study on the initial value for the exterior orientation of the mobile version

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    The single-camera mobile vision coordinate measurement system obtains three-dimensional coordinates on the measurement site using a single camera body and a notebook computer. Obtaining accurate approximate values of the exterior orientation for the subsequent calculation is very important in the measurement process. The problem is a typical case of space resection, and studies on this topic have been widely conducted. Single-image space resection mainly follows two approaches: one is based on co-angular constraints, represented by the camera co-angular constraint pose estimation algorithm and the cone angle law; the other is the direct linear transformation (DLT). A common drawback of both methods is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively demanding requirements are placed on the distribution and number of control points: the control points cannot all lie in the same plane, and at least six non-coplanar control points are needed, which limits the method's usefulness. The initial value directly influences whether the subsequent iterative calculation converges and how quickly it does so. This paper linearizes the nonlinear collinearity equations, including distortion terms, by Taylor series expansion to calculate the initial value of the camera exterior orientation. Finally, experiments show that the resulting initial value is improved.

  20. Active solution of homography for pavement crack recovery with four laser lines.

    PubMed

    Xu, Guan; Chen, Fang; Wu, Guangwei; Li, Xiaotao

    2018-05-08

    An active solution method of the homography, which is derived from four laser lines, is proposed to recover the pavement cracks captured by the camera to the real-dimension cracks in the pavement plane. The measurement system, including a camera and four laser projectors, captures the projection laser points on the 2D reference in different positions. The projection laser points are reconstructed in the camera coordinate system. Then, the laser lines are initialized and optimized by the projection laser points. Moreover, the plane-indicated Plücker matrices of the optimized laser lines are employed to model the laser projection points of the laser lines on the pavement. The image-pavement homography is actively determined by the solutions of the perpendicular feet of the projection laser points. The pavement cracks are recovered by the active solution of homography in the experiments. The recovery accuracy of the active solution method is verified by the 2D dimension-known reference. The test case with the measurement distance of 700 mm and the relative angle of 8° achieves the smallest recovery error of 0.78 mm in the experimental investigations, which indicates the application potentials in the vision-based pavement inspection.
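
    The Plücker-matrix modelling of the laser projection points can be sketched with the standard projective identities: the line through homogeneous points A and B has Plücker matrix L = A*B^T - B*A^T, and its intersection with a plane pi is the point X = L*pi. The example points and pavement plane below are placeholders.

        # Intersect a reconstructed laser line (Plücker matrix) with the pavement plane.
        import numpy as np

        def plucker_matrix(A, B):
            """Plücker matrix of the line through homogeneous 4-vectors A and B."""
            A, B = np.asarray(A, float), np.asarray(B, float)
            return np.outer(A, B) - np.outer(B, A)

        def line_plane_intersection(L, plane):
            """Inhomogeneous intersection point of line L with plane [a, b, c, d]."""
            X = L @ np.asarray(plane, float)
            return X[:3] / X[3]

        # Two reconstructed points on a laser line (camera coordinates, metres).
        A = np.array([0.10, 0.05, 0.60, 1.0])
        B = np.array([0.30, 0.20, 1.50, 1.0])
        pavement = np.array([0.0, -1.0, 0.5, 0.8])   # placeholder pavement plane

        L = plucker_matrix(A, B)
        print("laser point on the pavement:", line_plane_intersection(L, pavement))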

  1. What you see is what you get: webcam placement influences perception and social coordination

    PubMed Central

    Thomas, Laura E.; Pemstein, Daniel

    2015-01-01

    Building on a well-established link between elevation and social power, we demonstrate that—when perceptual information is limited—subtle visual cues can shape people’s representations of others and, in turn, alter strategic social behavior. A cue to elevation (unrelated to physical size) provided by the placement of web cameras in a video chat biased individuals’ perceptions of a partner’s height (Experiment 1) and shaped the extent to which they made decisions in their own self-interest: participants tended to coordinate their behavior in a manner that benefitted the preferences of a partner pictured from a low camera angle during a game of asymmetric coordination (Experiment 2). Our results suggest that people are vulnerable to the influence of a limited viewpoint when forming representations of others in a manner that shapes their strategic choices. PMID:25852620

  2. Cartography of the Luna-21 landing site and Lunokhod-2 traverse area based on Lunar Reconnaissance Orbiter Camera images and surface archive TV-panoramas

    NASA Astrophysics Data System (ADS)

    Karachevtseva, I. P.; Kozlova, N. A.; Kokhanov, A. A.; Zubarev, A. E.; Nadezhdina, I. E.; Patratiy, V. D.; Konopikhin, A. A.; Basilevsky, A. T.; Abdrakhimov, A. M.; Oberst, J.; Haase, I.; Jolliff, B. L.; Plescia, J. B.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) system consists of a Wide Angle Camera (WAC) and Narrow Angle Camera (NAC). NAC images (∼0.5 to 1.7 m/pixel) reveal details of the Luna-21 landing site and Lunokhod-2 traverse area. We derived a Digital Elevation Model (DEM) and an orthomosaic for the study region using photogrammetric stereo processing techniques with NAC images. The DEM and mosaic allowed us to analyze the topography and morphology of the landing site area and to map the Lunokhod-2 rover route. The total range of topographic elevation along the traverse was found to be less than 144 m; and the rover encountered slopes of up to 20°. With the orthomosaic tied to the lunar reference frame, we derived coordinates of the Lunokhod-2 landing module and overnight stop points. We identified the exact rover route by following its tracks and determined its total length as 39.16 km, more than was estimated during the mission (37 km), which until recently was a distance record for planetary robotic rovers held for more than 40 years.

  3. Tracking multiple surgical instruments in a near-infrared optical system.

    PubMed

    Cai, Ken; Yang, Rongqian; Lin, Qinyong; Wang, Zhigang

    2016-12-01

    Surgical navigation systems can assist doctors in performing more precise and more efficient surgical procedures to avoid various accidents. The near-infrared optical system (NOS) is an important component of surgical navigation systems. However, several surgical instruments are used during surgery, and effectively tracking all of them is challenging. A stereo matching algorithm using two intersecting lines and surgical instrument codes is proposed in this paper. In our NOS, the markers on the surgical instruments can be captured by two near-infrared cameras. After automatically searching and extracting their subpixel coordinates in the left and right images, the coordinates of the real and pseudo markers are determined by the two intersecting lines. Finally, the pseudo markers are removed to achieve accurate stereo matching by summing the codes for the distances between a specific marker with the other two markers on the surgical instrument. Experimental results show that the markers on the different surgical instruments can be automatically and accurately recognized. The NOS can accurately track multiple surgical instruments.

  4. Inversion of the conical Radon transform with vertices on a surface of revolution arising in an application of a Compton camera

    NASA Astrophysics Data System (ADS)

    Moon, Sunghwan

    2017-06-01

    A Compton camera has been introduced for use in single photon emission computed tomography to improve the low efficiency of a conventional gamma camera. In general, a Compton camera brings about the conical Radon transform. Here we consider a conical Radon transform with the vertices on a rotation symmetric set with respect to a coordinate axis. We show that this conical Radon transform can be decomposed into two transforms: the spherical sectional transform and the weighted fan beam transform. After finding inversion formulas for these two transforms, we provide an inversion formula for the conical Radon transform.

  5. “Ipsilateral, high, single-hand, sideways”—Ruijin rule for camera assistant in uniportal video-assisted thoracoscopic surgery

    PubMed Central

    Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han

    2016-01-01

    The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as “ipsilateral, high, single-hand, sideways”, which largely improves the comfort and fluency of surgery. PMID:27867573

  6. Coregistration of high-resolution Mars orbital images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2015-04-01

    The systematic orbital imaging of the Martian surface started 4 decades ago from NASA's Viking Orbiter 1 & 2 missions, which were launched in August 1975, and acquired orbital images of the planet between 1976 and 1980. The result of this reconnaissance was the first medium-resolution (i.e. ≤ 300m/pixel) global map of Mars, as well as a variety of high-resolution images (reaching up to 8m/pixel) of special regions of interest. Over the last two decades NASA has sent 3 more spacecraft with onboard instruments for high-resolution orbital imaging: Mars Global Surveyor (MGS) having onboard the Mars Orbital Camera - Narrow Angle (MOC-NA), Mars Odyssey having onboard the Thermal Emission Imaging System - Visual (THEMIS-VIS) and the Mars Reconnaissance Orbiter (MRO) having on board two distinct high-resolution cameras, Context Camera (CTX) and High-Resolution Imaging Science Experiment (HiRISE). Moreover, ESA has the multispectral High resolution Stereo Camera (HRSC) onboard ESA's Mars Express with resolution up to 12.5m since 2004. Overall, this set of cameras has acquired more than 400,000 high-resolution images, i.e. with resolution better than 100m and as fine as 25 cm/pixel. Notwithstanding the high spatial resolution of the available NASA orbital products, their accuracy of areo-referencing is often very poor. As a matter of fact, due to pointing inconsistencies, usually from errors in roll attitude, the acquired products may actually image areas tens of kilometers away from the point that they are supposed to be looking at. On the other hand, since 2004, the ESA Mars Express has been acquiring stereo images through the High Resolution Stereo Camera (HRSC), with resolution that is usually 12.5-25 metres per pixel. The achieved coverage is more than 64% for images with resolution finer than 20 m/pixel, while for ~40% of Mars, Digital Terrain Models (DTMs) have been produced which are co-registered with MOLA [Gwinner et al., 2010]. The HRSC images and DTMs represent the best available 3D reference frame for Mars, showing co-registration with MOLA < 25 m (loc.cit.). In our work, the reference generated by HRSC terrain corrected orthorectified images is used as a common reference frame to co-register all available high-resolution orbital NASA products into a common 3D coordinate system, thus allowing the examination of the changes that happen on the surface of Mars over time (such as seasonal flows [McEwen et al., 2011] or new impact craters [Byrne, et al., 2009]). To avoid such a tedious manual task, we have developed an automatic co-registration pipeline that produces orthorectified versions of the NASA images in realistic time (i.e. from ~15 minutes to 10 hours per image depending on size). In the first step of this pipeline, tie-points are extracted from the target NASA image and the reference HRSC image or image mosaic. Subsequently, the HRSC areo-reference information is used to transform the HRSC tie-points' pixel coordinates into 3D "world" coordinates. This way, a correspondence between the pixel coordinates of the target NASA image and the 3D "world" coordinates is established for each tie-point. This set of correspondences is used to estimate a non-rigid, 3D to 2D transformation model, which transforms the target image into the HRSC reference coordinate system. Finally, correlation of the transformed target image and the HRSC image is employed to fine-tune the orthorectification results, thus generating results with sub-pixel accuracy.
This method, which has proven to be accurate, fast, robust to resolution differences, and reliable when dealing with partially degraded data, will be presented, along with some example co-registration results achieved by using it. Acknowledgements: The research leading to these results has received partial funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n° 607379. References: [1] K. F. Gwinner, et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters, 294, 506-519, doi:10.1016/j.epsl.2009.11.007. [2] A. McEwen, et al. (2011) Seasonal flows on warm Martian slopes. Science, 333(6043): 740-743. [3] S. Byrne, et al. (2009) Distribution of mid-latitude ground ice on Mars from new impact craters. Science, 325(5948): 1674-1676.
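
    The tie-point extraction and model-fitting stage of such a pipeline can be illustrated in a few lines. The sketch below uses ORB features and a RANSAC homography purely to show the matching-and-fitting pattern; the actual pipeline converts HRSC tie-point pixels to 3D world coordinates and fits a non-rigid 3D-to-2D model, and the file names here are placeholders.

        # Tie-point matching between a target image and a reference mosaic, then a
        # robust (RANSAC) transformation fit and a warp into the reference grid.
        import cv2
        import numpy as np

        target = cv2.imread("nasa_target_image.png", cv2.IMREAD_GRAYSCALE)
        reference = cv2.imread("hrsc_reference_mosaic.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=5000)
        kp_t, des_t = orb.detectAndCompute(target, None)
        kp_r, des_r = orb.detectAndCompute(reference, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_t, des_r), key=lambda m: m.distance)

        src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Robustly estimate the target-to-reference transformation from the tie-points.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

        # Resample the target image onto the reference (HRSC) pixel grid.
        registered = cv2.warpPerspective(target, H, (reference.shape[1], reference.shape[0]))
        cv2.imwrite("target_coregistered.png", registered)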

  7. A line-source method for aligning on-board and other pinhole SPECT systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-15

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.

  8. A line-source method for aligning on-board and other pinhole SPECT systems

    PubMed Central

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-01-01

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537

  9. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.

  10. Inter-joint coordination between hips and trunk during downswings: Effects on the clubhead speed.

    PubMed

    Choi, Ahnryul; Lee, In-Kwang; Choi, Mun-Taek; Mun, Joung Hwan

    2016-10-01

    Understanding of the inter-joint coordination between rotational movement of each hip and the trunk in golf would provide basic knowledge regarding how the neuromuscular system organises the related joints to perform a successful swing motion. In this study, we evaluated the inter-joint coordination characteristics between rotational movement of the hips and trunk during golf downswings. Twenty-one right-handed male professional golfers were recruited for this study. Infrared cameras were installed to capture the swing motion. The axial rotation angle, angular velocity and inter-joint coordination were calculated by the Euler angle, numerical difference method and continuous relative phase, respectively. A more typical inter-joint coordination was demonstrated in the leading hip/trunk than in the trailing hip/trunk. Three coordination characteristics of the leading hip/trunk showed a significant relationship with clubhead speed at impact (r < -0.5) in male professional golfers. The increased rotation difference between the leading hip and trunk in the overall downswing phase, as well as the faster rotation of the leading hip compared to that of the trunk in the early downswing, play important roles in increasing clubhead speed. These novel inter-joint coordination strategies have great potential to serve as a biomechanical guideline for improving the golf swing performance of unskilled golfers.
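
    The continuous relative phase computation used here can be sketched with the common phase-plane formulation: each joint's phase angle is taken from its normalized angle/angular-velocity portrait, and the CRP is the difference of the two phase angles. The synthetic signals and the normalization choice below are assumptions; published CRP variants differ in these details.

        # Continuous relative phase (CRP) between two joint rotation time series.
        import numpy as np

        def phase_angle(theta, dt):
            """Phase angle of one joint from its normalized phase portrait."""
            omega = np.gradient(theta, dt)                        # angular velocity
            theta_n = 2 * (theta - theta.min()) / (theta.max() - theta.min()) - 1
            omega_n = omega / np.abs(omega).max()
            return np.unwrap(np.arctan2(omega_n, theta_n))

        dt = 1.0 / 200.0                                          # 200 Hz motion capture
        t = np.arange(0.0, 0.8, dt)                               # one downswing
        hip = np.deg2rad(45) * np.sin(2 * np.pi * 1.2 * t)        # synthetic hip rotation
        trunk = np.deg2rad(60) * np.sin(2 * np.pi * 1.2 * t - 0.4)  # trunk lags the hip

        crp = np.rad2deg(phase_angle(hip, dt) - phase_angle(trunk, dt))
        print("mean CRP over the downswing: %.1f deg" % crp.mean())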

  11. Automated Wing Twist And Bending Measurements Under Aerodynamic Load

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Martinson, S. D.

    1996-01-01

    An automated system to measure the change in wing twist and bending under aerodynamic load in a wind tunnel is described. The basic instrumentation consists of a single CCD video camera and a frame grabber interfaced to a computer. The technique is based upon a single view photogrammetric determination of two dimensional coordinates of wing targets with a fixed (and known) third dimensional coordinate, namely the spanwise location. The measurement technique has been used successfully at the National Transonic Facility, the Transonic Dynamics Tunnel, and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The advantages and limitations (including targeting) of the technique are discussed. A major consideration in the development was that use of the technique must not appreciably reduce wind tunnel productivity.

  12. [Coordination patterns assessed by a continuous measure of joints coupling during upper limb repetitive movements].

    PubMed

    Draicchio, F; Silvetti, A; Ranavolo, A; Iavicoli, S

    2008-01-01

    We analyzed the coordination patterns between elbow, shoulder and trunk in a motor task consisting of reaching out, picking up a cylinder, and transporting it back by using the Dynamical Systems Theory and calculating the continuous relative phase (CRP), a continuous measure of the coupling between two interacting joints. We used an optoelectronic motion analysis system consisting of eight infra-red ray cameras to detect the movements of nine skin-mounted markers. We calculated the root square of the adjusted coefficient of determination, the coefficient of multiple correlation (CMC), in order to investigate the repeatability of the joints coordination. The data confirm that the CNS establishes both synergic (i.e. coupling between shoulder and trunk on the frontal plane) and hierarchical (i.e. coupling between elbow-shoulder-trunk on the horizontal plane) relationships among the available degrees of freedom to overcome the complexity due to motor redundancy. The present study describes a method to investigate the organization of the kinematic degrees of freedom during upper limb multi-joint motor tasks that can be useful to assess upper limb repetitive movements.

  13. A panning DLT procedure for three-dimensional videography.

    PubMed

    Yu, B; Koh, T J; Hay, J G

    1993-06-01

    The direct linear transformation (DLT) method [Abdel-Aziz and Karara, APS Symposium on Photogrammetry. American Society of Photogrammetry, Falls Church, VA (1971)] is widely used in biomechanics to obtain three-dimensional space coordinates from film and video records. This method has some major shortcomings when used to analyze events which take place over large areas. To overcome these shortcomings, a three-dimensional data collection method based on the DLT method, and making use of panning cameras, was developed. Several small single control volumes were combined to construct a large total control volume. For each single control volume, a regression equation (calibration equation) is developed to express each of the 11 DLT parameters as a function of camera orientation, so that the DLT parameters can then be estimated from arbitrary camera orientations. Once the DLT parameters are known for at least two cameras, and the associated two-dimensional film or video coordinates of the event are obtained, the desired three-dimensional space coordinates can be computed. In a laboratory test, five single control volumes (in a total control volume of 24.40 x 2.44 x 2.44 m3) were used to test the effect of the position of the single control volume on the accuracy of the computed three dimensional space coordinates. Linear and quadratic calibration equations were used to test the effect of the order of the equation on the accuracy of the computed three dimensional space coordinates. For four of the five single control volumes tested, the mean resultant errors associated with the use of the linear calibration equation were significantly larger than those associated with the use of the quadratic calibration equation. The position of the single control volume had no significant effect on the mean resultant errors in computed three dimensional coordinates when the quadratic calibration equation was used. Under the same data collection conditions, the mean resultant errors in the computed three dimensional coordinates associated with the panning and stationary DLT methods were 17 and 22 mm, respectively. The major advantages of the panning DLT method lie in the large image sizes obtained and in the ease with which the data can be collected. The method also has potential for use in a wide variety of contexts. The major shortcoming of the method is the large amount of digitizing necessary to calibrate the total control volume. Adaptations of the method to reduce the amount of digitizing required are being explored.
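
    The reconstruction step that both the stationary and panning variants share is the standard DLT solve: with the 11 parameters of each camera and the digitized 2D coordinates of a point, its 3D space coordinates follow from a small linear least-squares system. In the panning method the parameters would first be evaluated from the calibration equations at the current camera orientation; the parameter values below are placeholders.

        # 3D reconstruction from two cameras given their 11 DLT parameters each.
        import numpy as np

        def reconstruct_3d(dlt_params, image_points):
            """dlt_params: length-11 arrays L1..L11, one per camera;
            image_points: (u, v) observations of the same point, one per camera."""
            rows, rhs = [], []
            for L, (u, v) in zip(dlt_params, image_points):
                rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
                rhs.append(u - L[3])
                rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
                rhs.append(v - L[7])
            xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
            return xyz

        # Placeholder DLT parameters for two cameras and one digitized point in each.
        L_cam1 = np.array([50., 0., 5., 300., 0., -50., 5., 250., 0.001, 0.001, 0.002])
        L_cam2 = np.array([45., 10., 5., 280., 2., -48., 6., 240., 0.002, 0.001, 0.001])
        print(reconstruct_3d([L_cam1, L_cam2], [(310.0, 220.0), (295.0, 230.0)]))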

  14. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners often focuses mainly on static measurement. Little use has been made of dynamic measurement, which is appropriate for more problems and situations. In particular, a traditional laser scanner must be kept stable to scan and to measure the coordinate transformation parameters between different stations. In order to make the scanning measurement intelligent and rapid, in this paper we developed a new registration algorithm for a handheld laser scanner based on the position of targets, which realizes dynamic measurement with the handheld laser scanner without additional complex work. The two cameras on the laser scanner photograph artificial target points, designed by random coding, to obtain their three-dimensional coordinates. Then, a set of matched points is found among the control points to determine the orientation of the scanner by the least-squares common points transformation. After that, the two cameras can directly measure the laser point cloud on the surface of the object and obtain the point cloud data in a unified coordinate system. There are three major contributions in this paper. Firstly, a laser scanner based on binocular vision is designed with two cameras and one laser head, by which the real-time orientation of the laser scanner is realized and efficiency is improved. Secondly, a coding marker is introduced to solve the data matching problem, and a random coding method is proposed. Compared with other coding methods, the marker in this method is simple to match and avoids shading of the object. Finally, a recognition method for the coded markers is proposed that is more efficient through the use of distance recognition. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and airplanes, which strengthens its intelligence and efficiency. The results of experiments and theoretical analysis demonstrate that the proposed method can realize dynamic measurement with a handheld laser scanner and that the method is reasonable and efficient.
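
    The "least-squares common points transformation" used for scanner orientation can be sketched with the standard SVD solution for a rigid transform between matched point sets. The coded-target coordinates below are synthetic placeholders.

        # Rigid transform (R, t) between matched target points, via SVD (Kabsch/Horn).
        import numpy as np

        def rigid_transform(src, dst):
            """Least-squares R, t such that dst ~ R @ src + t (both N x 3)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = c_dst - R @ c_src
            return R, t

        # Matched coded-target coordinates: current scanner frame vs. global frame.
        rng = np.random.default_rng(1)
        global_pts = rng.uniform(-1.0, 1.0, size=(6, 3))
        R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
        scanner_pts = (global_pts - [0.2, 0.1, 0.5]) @ R_true.T          # synthetic pose

        R, t = rigid_transform(scanner_pts, global_pts)
        aligned = scanner_pts @ R.T + t     # point cloud unified into the global frame
        print("alignment residual:", np.abs(aligned - global_pts).max())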

  15. Use of the Remote Access Virtual Environment Network (RAVEN) for coordinated IVA-EVA astronaut training and evaluation.

    PubMed

    Cater, J P; Huffman, S D

    1995-01-01

    This paper presents a unique virtual reality training and assessment tool developed under a NASA grant, "Research in Human Factors Aspects of Enhanced Virtual Environments for Extravehicular Activity (EVA) Training and Simulation." The Remote Access Virtual Environment Network (RAVEN) was created to train and evaluate the verbal, mental and physical coordination required between the intravehicular (IVA) astronaut operating the Remote Manipulator System (RMS) arm and the EVA astronaut standing in foot restraints on the end of the RMS. The RAVEN system currently allows the EVA astronaut to approach the Hubble Space Telescope (HST) under control of the IVA astronaut and grasp, remove, and replace the Wide Field Planetary Camera drawer from its location in the HST. Two viewpoints, one stereoscopic and one monoscopic, were created all linked by Ethernet, that provided the two trainees with the appropriate training environments.

  16. Single exposure three-dimensional imaging of dusty plasma clusters.

    PubMed

    Hartmann, Peter; Donkó, István; Donkó, Zoltán

    2013-02-01

    We have worked out the details of a single camera, single exposure method to perform three-dimensional imaging of a finite particle cluster. The procedure is based on the plenoptic imaging principle and utilizes a commercial Lytro light field still camera. We demonstrate the capabilities of our technique on a single layer particle cluster in a dusty plasma, where the camera is aligned and inclined at a small angle to the particle layer. The reconstruction of the third coordinate (depth) is found to be accurate and even shadowing particles can be identified.

  17. Calibration of Low Cost Digital Camera Using Data from Simultaneous LIDAR and Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Debiasi, P.; Hainosz, F.; Centeno, J.

    2012-07-01

    Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters to perform photogrammetric applications. Camera calibration is a procedure applied to compute the interior orientation parameters (IOPs). Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same conditions as the photogrammetric survey. This paper presents the methodology and experimental results of in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was placed on the test field for use as check points or control points. The photogrammetric images and lidar dataset of the test field were acquired simultaneously. Four flight strips were used to obtain a cross layout, flown in opposite directions (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure stations were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or computed by an interpolation procedure using the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency of the group of interior and exterior orientation parameters. This linear dependency arises in the calibration procedure when vertical images and a flat test field are used. The mathematical correlations of the interior and exterior orientation parameters are analyzed and discussed, as are the accuracies of the calibration experiments.

  18. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of Adaptive Vector Quantizers or Self Organizing Maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest neighbor pattern recognition techniques.
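
    As a rough illustration of the RBSDM idea, the recall step can be sketched as a Gaussian-weighted blend of stored joint vectors, with the weights centred on previously learned stereo image-plane patterns. This is a toy simplification under assumed variable names, not the network evaluated in the report.

    ```python
    import numpy as np

    def rbsdm_recall(stored_inputs, stored_joints, query, sigma=0.1):
        """Blend stored joint coordinates with Gaussian weights.

        stored_inputs: (N, 4) target (u, v) coordinates on both image planes
        stored_joints: (N, 3) corresponding manipulator joint coordinates
        query        : (4,)   new stereo observation of the target
        """
        d2 = np.sum((stored_inputs - query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w /= w.sum()                      # assumes some stored pattern is nearby
        return w @ stored_joints          # interpolated joint coordinates
    ```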

  19. A Virtual Blind Cane Using a Line Laser-Based Vision System and an Inertial Measurement Unit

    PubMed Central

    Dang, Quoc Khanh; Chee, Youngjoon; Pham, Duy Duong; Suh, Young Soo

    2016-01-01

    A virtual blind cane system for indoor application, including a camera, a line laser and an inertial measurement unit (IMU), is proposed in this paper. Working as a blind cane, the proposed system helps a blind person find the type of obstacle and the distance to it. The distance from the user to the obstacle is estimated by extracting the laser coordinate points on the obstacle, as well as tracking the system pointing angle. The paper provides a simple method to classify the obstacle’s type by analyzing the laser intersection histogram. Real experimental results are presented to show the validity and accuracy of the proposed system. PMID:26771618

  20. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE

  1. ManPortable and UGV LIVAR: advances in sensor suite integration bring improvements to target observation and identification for the electronic battlefield

    NASA Astrophysics Data System (ADS)

    Lynam, Jeff R.

    2001-09-01

    A more highly integrated, electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced manportable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages and capabilities in lightweight, fieldable target location, ranging and imaging systems. The unit incorporates a wide field-of-view, 5° x 3°, uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are performed with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5 micron region, used in conjunction with a range-gated, electron-bombarded CCD digital camera to image the target objective in a narrower, 0.3°, field-of-view. Target range is determined using the integrated LRF, and a target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank and corrected magnetic azimuth. Range-gate timing and coordinated receiver optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal, rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper describes the flash laser illumination technology, the EBCCD camera technology with the flash laser detection system, and image resolution improvement through frame averaging.

  2. Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1995-01-01

    The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
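
    The spot-extraction and ranging steps can be pictured with a minimal sketch: difference the ambient and laser-illuminated frames, take the intensity-weighted centroid, and triangulate from the shift relative to the infinite-range reference point. This assumes a simple pinhole geometry and hypothetical parameter names; it is not the patented implementation.

    ```python
    import numpy as np

    def laser_spot_centroid(ambient, illuminated, thresh=30):
        """Locate the laser spot by differencing two co-registered frames."""
        diff = illuminated.astype(np.int32) - ambient.astype(np.int32)
        mask = diff > thresh
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None                     # no spot detected
        w = diff[mask].astype(np.float64)
        return np.average(xs, weights=w), np.average(ys, weights=w)

    def range_from_disparity(spot_x, ref_x, baseline_m, focal_px):
        """Range from the spot's shift against the infinite-range reference."""
        disparity = abs(spot_x - ref_x)     # pixels
        return np.inf if disparity == 0 else baseline_m * focal_px / disparity
    ```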

  3. Rigorous accuracy assessment for 3D reconstruction using time-series Dual Fluoroscopy (DF) image pairs

    NASA Astrophysics Data System (ADS)

    Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet

    2017-06-01

    High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, however with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. A reconstruction experiment with a rotating planar object showed an average positional error of 0.44 ± 0.2 mm in the derived 3D coordinates (minimum 0.05 mm, maximum 1.2 mm).

  4. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles (rovers) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a template in processing terrain images. During operation on terrain, the images acquired by the left and right cameras are analyzed. The analysis includes (1) computation of the horizontal and vertical dimensions and the aspect ratios of rectangles that bound the circle images and (2) comparison of these aspect ratios with those of the template. Coordinates of distortions of the circles are used to identify and locate objects. If the analysis leads to identification of an object of significant size, then stereoscopic-vision algorithms are used to estimate the distance to the object. The time taken in performing this analysis on a single pair of images acquired by the left and right cameras in this system is a fraction of the time taken in processing the many pairs of images acquired in a sweep of the laser stripe across the field of view in the prior system. The results of the analysis include data on sizes and shapes of, and distances and directions to, objects. Coordinates of objects are updated as the vehicle moves so that intelligent decisions regarding speed and direction can be made. The results of the analysis are utilized in a computational decision-making process that generates obstacle-avoidance data and feeds those data to the control system of the robotic vehicle.

  5. Behavioral Dynamics in Swimming: The Appropriate Use of Inertial Measurement Units.

    PubMed

    Guignard, Brice; Rouard, Annie; Chollet, Didier; Seifert, Ludovic

    2017-01-01

    Motor control in swimming can be analyzed using low- and high-order parameters of behavior. Low-order parameters generally refer to the superficial aspects of movement (i.e., position, velocity, acceleration), whereas high-order parameters capture the dynamics of movement coordination. To assess human aquatic behavior, both types have usually been investigated with multi-camera systems, as they offer high three-dimensional spatial accuracy. Research in ecological dynamics has shown that movement system variability can be viewed as a functional property of skilled performers, helping them adapt their movements to the surrounding constraints. Yet to determine the variability of swimming behavior, a large number of stroke cycles (i.e., inter-cyclic variability) has to be analyzed, which is impossible with camera-based systems as they simply record behaviors over restricted volumes of water. Inertial measurement units (IMUs) were designed to explore the parameters and variability of coordination dynamics. These light, transportable and easy-to-use devices offer new perspectives for swimming research because they can record low- to high-order behavioral parameters over long periods. We first review how the low-order behavioral parameters (i.e., speed, stroke length, stroke rate) of human aquatic locomotion and their variability can be assessed using IMUs. We then review the way high-order parameters are assessed and the adaptive role of movement and coordination variability in swimming. We give special focus to the circumstances in which determining the variability between stroke cycles provides insight into how behavior oscillates between stable and flexible states to functionally respond to environmental and task constraints. The last section of the review is dedicated to practical recommendations for coaches on using IMUs to monitor swimming performance. We therefore highlight the need for rigor in dealing with these sensors appropriately in water. We explain the fundamental and mandatory steps to follow for accurate results with IMUs, from data acquisition (e.g., waterproofing procedures) to interpretation (e.g., drift correction).

  6. Behavioral Dynamics in Swimming: The Appropriate Use of Inertial Measurement Units

    PubMed Central

    Guignard, Brice; Rouard, Annie; Chollet, Didier; Seifert, Ludovic

    2017-01-01

    Motor control in swimming can be analyzed using low- and high-order parameters of behavior. Low-order parameters generally refer to the superficial aspects of movement (i.e., position, velocity, acceleration), whereas high-order parameters capture the dynamics of movement coordination. To assess human aquatic behavior, both types have usually been investigated with multi-camera systems, as they offer high three-dimensional spatial accuracy. Research in ecological dynamics has shown that movement system variability can be viewed as a functional property of skilled performers, helping them adapt their movements to the surrounding constraints. Yet to determine the variability of swimming behavior, a large number of stroke cycles (i.e., inter-cyclic variability) has to be analyzed, which is impossible with camera-based systems as they simply record behaviors over restricted volumes of water. Inertial measurement units (IMUs) were designed to explore the parameters and variability of coordination dynamics. These light, transportable and easy-to-use devices offer new perspectives for swimming research because they can record low- to high-order behavioral parameters over long periods. We first review how the low-order behavioral parameters (i.e., speed, stroke length, stroke rate) of human aquatic locomotion and their variability can be assessed using IMUs. We then review the way high-order parameters are assessed and the adaptive role of movement and coordination variability in swimming. We give special focus to the circumstances in which determining the variability between stroke cycles provides insight into how behavior oscillates between stable and flexible states to functionally respond to environmental and task constraints. The last section of the review is dedicated to practical recommendations for coaches on using IMUs to monitor swimming performance. We therefore highlight the need for rigor in dealing with these sensors appropriately in water. We explain the fundamental and mandatory steps to follow for accurate results with IMUs, from data acquisition (e.g., waterproofing procedures) to interpretation (e.g., drift correction). PMID:28352243

  7. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Technology for the Creation of Virtual objects in the Real World

    DTIC Science & Technology

    2005-12-14

    Fragmentary excerpt (only partial content is recoverable): the described system comprises units for control of the position/orientation of mobile TV cameras, a force-interaction system, helmet-mounted displays (two displays built into the helmet), a robot-like drive device, a master arm whose joint coordinates are tracked by the virtual manipulator, and a special device for simulating the tactile-kinaesthetic effect of immersion; when the virtual body is a manipulator, it comprises a master arm with 6

  8. Using Stars to Align a Steered Laser System for Cosmic Ray Simulation

    NASA Astrophysics Data System (ADS)

    Krantz, Harry; Wiencke, Lawrence

    2016-03-01

    Ultra high energy cosmic rays (UHECRs) are the highest energy cosmic particles, with kinetic energy above 10^18 eV. UHECRs are detected from the air shower of secondary particles and UV fluorescence that results from interaction with the atmosphere. A high power UV laser beam can be used to simulate the optical signature of a UHECR air shower. The Global Light System (GLS) is a planned network of ground-based light sources, including lasers, to support the planned space-based Extreme Universe Space Observatory (EUSO). A portable prototype GLS laser station has been constructed at the Colorado School of Mines. Currently the laser system uses reference targets on the ground, but stars can be used to better align the beam by providing a complete hemisphere of targets. In this work, a CCD camera is used to capture images of known stars through the steering head optics. The images are analyzed to find the steering head coordinates of the target star. The true coordinates of the star are calculated from the location and time of observation. A universal adjustment for the steering head is determined from the differences between the two pairs of coordinates across multiple stars. This laser system prototype will also be used for preflight tests of the EUSO Super Pressure Balloon mission.
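
    The universal adjustment derived from star sightings can be approximated, in the simplest case, by a constant offset averaged over several stars; the sketch below makes that simplifying assumption and uses illustrative names.

    ```python
    import numpy as np

    def pointing_offset(measured, true):
        """Mean (azimuth, elevation) correction from multiple star sightings.

        measured : (N, 2) steering-head (az, el) found from the CCD images, degrees
        true     : (N, 2) catalogue (az, el) computed from site and time, degrees
        """
        diff = np.asarray(true, float) - np.asarray(measured, float)
        # wrap azimuth differences into [-180, 180) before averaging
        diff[:, 0] = (diff[:, 0] + 180.0) % 360.0 - 180.0
        return diff.mean(axis=0)            # add to commanded steering angles
    ```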

  9. Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.

    2002-04-01

    An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual 3D model generation to fully automated reverse engineering systems. The progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate 3D data source, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurement and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurement is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X implements the complete 3D reconstruction workflow for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurements and the methods providing metric characteristics of the reconstructed 3D model. Results of applying the system to 3D reconstruction of complex industrial objects are also presented.

  10. Instrumentation of LOTIS: Livermore Optical Transient Imaging System; a fully automated wide field of view telescope system searching for simultaneous optical counterparts of gamma ray bursts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, H.S.; Ables, E.; Barthelmy, S.D.

    LOTIS is a rapidly slewing wide-field-of-view telescope which was designed and constructed to search for simultaneous gamma-ray burst (GRB) optical counterparts. This experiment requires a rapidly slewing (< 10 s), wide-field-of-view (> 15°), automatic and dedicated telescope. LOTIS utilizes commercial tele-photo lenses and custom 2048 x 2048 CCD cameras to view a 17.6° x 17.6° field of view. It can point to any part of the sky within 5 s and is fully automated. It is connected via an Internet socket to the GRB coordinate distribution network, which analyzes telemetry from the satellite and delivers GRB coordinate information in real time. LOTIS started routine operation in Oct. 1996. In the idle time between GRB triggers, LOTIS systematically surveys the entire available sky every night for new optical transients. This paper will describe the system design and performance.

  11. Modular optical topometric sensor for 3D acquisition of human body surfaces and long-term monitoring of variations.

    PubMed

    Bischoff, Guido; Böröcz, Zoltan; Proll, Christian; Kleinheinz, Johannes; von Bally, Gert; Dirksen, Dieter

    2007-08-01

    Optical topometric 3D sensors such as laser scanners and fringe projection systems allow detailed digital acquisition of human body surfaces. For many medical applications, however, not only the current shape is important, but also its changes, e.g., in the course of surgical treatment. In such cases, time delays of several months between subsequent measurements frequently occur. A modular 3D coordinate measuring system based on the fringe projection technique is presented that allows 3D coordinate acquisition including calibrated color information, as well as the detection and visualization of deviations between subsequent measurements. In addition, parameters describing the symmetry of body structures are determined. The quantitative results of the analysis may be used as a basis for objective documentation of surgical therapy. The system is designed in a modular way, and thus, depending on the object of investigation, two or three cameras with different capabilities in terms of resolution and color reproduction can be utilized to optimize the set-up.

  12. Photogrammetry Toolbox Reference Manual

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Burner, Alpheus W.

    2014-01-01

    Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.

  13. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches. In map matching, the measured position is projected onto the road links (centerlines), which reduces its lateral error. With advances in data acquisition, high-definition maps which contain extra information, such as road lanes, are generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera and the ground truth is provided by using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position. The error in the measured GPS position, with average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with average and standard deviation of 6.725 and 5.899 meters.
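
    The homography between image and global road-plane coordinates can be estimated with standard tools once matched boundary points are available; the snippet below is a minimal OpenCV sketch with hypothetical point values, not the authors' pipeline.

    ```python
    import cv2
    import numpy as np

    # Hypothetical matched points: lane-boundary pixels and the same points in
    # the map frame (metres on the road plane).
    img_pts = np.array([[412, 655], [880, 650], [430, 540], [860, 538]], np.float32)
    map_pts = np.array([[0.0, 0.0], [3.5, 0.0], [0.0, 12.0], [3.5, 12.0]], np.float32)

    # Planar homography mapping image pixels to ground-plane map coordinates
    H, _ = cv2.findHomography(img_pts, map_pts)

    # Project any image point (e.g. the bottom-centre pixel) onto the map
    pt = cv2.perspectiveTransform(np.array([[[640.0, 720.0]]], np.float32), H)
    print("map coordinates of the projected pixel:", pt.ravel())
    ```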

  14. Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter

    NASA Astrophysics Data System (ADS)

    Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.

    1991-06-01

    We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible- light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar, x-ray fluoroscopy are presented.

  15. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
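
    The cloud-to-cloud deviation computed in CloudCompare can be emulated, in its simplest nearest-neighbour form, with a KD-tree query; the sketch below assumes both clouds are already georeferenced and is only a rough stand-in for the software's comparison tools.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_cloud_deviation(camera_cloud, tls_cloud):
        """Nearest-neighbour distance of each photogrammetric point to the TLS
        reference cloud (both given as (N, 3) arrays in metres)."""
        tree = cKDTree(tls_cloud)
        d, _ = tree.query(camera_cloud, k=1)
        return d        # summarise e.g. with d.mean() or np.percentile(d, 95)
    ```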

  16. Construct and face validity of a virtual reality-based camera navigation curriculum.

    PubMed

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Monocular depth perception using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek

    2011-10-01

    This paper exploits some of the more obscure but inherent properties of the camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method involves the use of a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. To achieve this, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, by using a set of derived spatial geometrical relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image processing steps and then exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is modified with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to successively optimize each new run. Using the above procedure, a series of experiments and trials are carried out to prove the concept and its efficacy.
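
    The pixel-to-real-space mapping obtained from the calibrated ground plane amounts to intersecting each viewing ray with that plane. The sketch below assumes known intrinsics and a plane expressed in the camera frame; it is an illustration, not the paper's learned correction.

    ```python
    import numpy as np

    def pixel_to_ground(u, v, K, n, d):
        """Intersect the viewing ray of pixel (u, v) with the ground plane.

        K    : 3x3 camera intrinsic matrix
        n, d : plane in camera coordinates, n . X = d (n a unit normal)
        Returns the 3D point on the plane, in camera coordinates.
        """
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction
        t = d / (n @ ray)                                # scale along the ray
        return t * ray                                   # point on the plane
    ```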

  18. SU-F-BRB-05: Collision Avoidance Mapping Using Consumer 3D Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardan, R; Popple, R

    2015-06-15

    Purpose: To develop a fast and economical method of scanning a patient’s full body contour for use in collision avoidance mapping without the use of ionizing radiation. Methods: Two consumer-level 3D cameras used in electronic gaming were placed in a CT simulator room to scan a phantom patient set up in a high collision-probability position. A registration pattern and computer vision algorithms were used to transform the scan into the appropriate coordinate systems. The cameras were then used to scan the surface of a gantry in the treatment vault. Each scan was converted into a polygon mesh for collision testing in a general-purpose polygon interference algorithm. All clinically relevant transforms were applied to the gantry and patient support to create a map of all possible collisions. The map was then tested for accuracy by physically testing the collisions with the phantom in the vault. Results: The scanning fidelity of both the gantry and patient was sufficient to produce a collision prediction accuracy of 97.1% with 64620 geometry states tested in 11.5 s. The total scanning time including computation, transformation, and generation was 22.3 seconds. Conclusion: Our results demonstrate an economical system to generate collision avoidance maps. Future work includes testing the speed of the framework in real-time collision avoidance scenarios. Research partially supported by a grant from Varian Medical Systems.

  19. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.

  20. A probabilistic model of overt visual attention for cognitive robots.

    PubMed

    Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G

    2010-10-01

    Visual attention is one of the major requirements for a robot to serve as a cognitive companion for human. The robotic visual attention is mostly concerned with overt attention which accompanies head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and the image coordinate systems, change of content of the visual field, and partial appearance of the stimuli. All of these events contribute to the reduction in probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in the classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution for the overt visual attention problem. The proposed model, while taking inspiration from the primates visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.

  1. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    PubMed Central

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954

  2. Optimization of Close Range Photogrammetry Network Design Applying Fuzzy Computation

    NASA Astrophysics Data System (ADS)

    Aminia, A. S.

    2017-09-01

    Measuring object 3D coordinates with optimum accuracy is one of the most important issues in close range photogrammetry. In this context, network design plays an important role in determining the optimum positions of imaging stations. This is, however, not a trivial task due to various geometric and radiometric constraints affecting the quality of the measurement network. As a result, most camera stations in the network are defined on a trial-and-error basis, relying on the user's experience and a generic network concept. In this paper, we propose a post-processing task to investigate the quality of camera positions right after image capture to achieve the best result. To do this, a new fuzzy reasoning approach is adopted, in which the constraints affecting the network design are all modeled. As a result, the positions of all camera stations are assessed based on fuzzy rules and inappropriate stations are identified. The experiments carried out show that after determination and elimination of the inappropriate images using the proposed fuzzy reasoning system, the accuracy of measurements is improved by about 17% for the resulting network.

  3. A panoramic coded aperture gamma camera for radioactive hotspots localization

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Amgarou, K.; Blanc De Lanaute, N.; Schoepff, V.; Amoyal, G.; Mahe, C.; Beltramello, O.; Liénard, E.

    2017-11-01

    A known disadvantage of the coded aperture imaging approach is its limited field-of-view (FOV), which often proves insufficient when analysing complex dismantling scenes such as post-accidental scenarios, where multiple measurements are needed to fully characterize the scene. In order to overcome this limitation, a panoramic coded aperture γ-camera prototype has been developed. The system is based on a 1 mm thick CdTe detector directly bump-bonded to a Timepix readout chip, developed by the Medipix2 collaboration (256 × 256 pixels, 55 μm pitch, 14.08 × 14.08 mm2 sensitive area). A MURA-pattern coded aperture is used, allowing for background subtraction without the use of heavy shielding. This system is then combined with a USB color camera. The output of each measurement is a semi-spherical image covering a FOV of 360 degrees horizontally and 80 degrees vertically, rendered in spherical coordinates (θ, φ). The geometrical shapes of the radiation-emitting objects are preserved by first registering and stitching the optical images captured by the prototype, and subsequently applying the same transformations to the corresponding radiation images. Panoramic gamma images generated using the technique proposed in this paper are described and discussed, along with the main experimental results obtained in laboratory campaigns.

  4. Multiple-Agent Air/Ground Autonomous Exploration Systems

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.

    2007-01-01

    Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.

  5. Active learning in camera calibration through vision measurement application

    NASA Astrophysics Data System (ADS)

    Li, Xiaoqin; Guo, Jierong; Wang, Xianchun; Liu, Changqing; Cao, Binfang

    2017-08-01

    Since cameras are increasingly used in scientific applications as well as in applications requiring precise visual information, effective calibration of such cameras is becoming more important. There are many reasons why the measurements of objects are not accurate; the largest is lens distortion. Another detrimental influence on the evaluation accuracy is caused by perspective distortions in the image, which happen whenever the camera cannot be mounted perpendicularly to the objects to be measured. Overall, it is very important for students to understand how to correct lens distortions, that is, how to calibrate a camera. If the camera is calibrated, the images can be rectified and undistorted measurements can then be obtained in world coordinates. This paper presents how students can develop a sense of active learning for the mathematical camera model in addition to the theoretical scientific basics. The authors present theoretical and practical lectures whose goal is to deepen students' understanding of the mathematical models of area-scan cameras and to have them build a practical vision measurement process by themselves.
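
    For the practical part of such a lecture, lens distortion correction is commonly demonstrated with OpenCV's chessboard calibration; the sketch below is a generic example (folder and file names are placeholders), not the authors' course material.

    ```python
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner chessboard corners
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob("calib_images/*.png"):      # placeholder image folder
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix K and distortion coefficients, then image rectification
    # (assumes at least one chessboard image was found)
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    undistorted = cv2.undistort(cv2.imread("calib_images/test.png"), K, dist)
    ```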

  6. Nuclear medicine image registration by spatially noncoherent interferometry.

    PubMed

    Scheiber, C; Malet, Y; Sirat, G; Grucker, D

    2000-02-01

    This article introduces a technique for obtaining high-resolution body contour data in the same coordinate frame as that of a rotating gamma camera, using a miniature range finder, the conoscope, mounted on the camera gantry. One potential application of the technique is accurate coregistration in longitudinal brain SPECT studies, using the face of the patient (or "mask"), instead of SPECT slices, to coregister subsequent acquisitions involving the brain. Conoscopic holography is an interferometry technique that relies on spatially incoherent light interference in birefringent crystals. In this study, the conoscope was used to measure the absolute distance (Z) between a light source reflected from the skin and its observation plane. This light was emitted by a 0.2-mW laser diode. A scanning system was used to image the face during SPECT acquisition. The system consisted of a motor-driven mirror (Y axis) and the gamma-camera gantry (1 profile was obtained for each rotation step, X axis). The system was calibrated to place the conoscopic measurements and SPECT slices in the same coordinate frame. Through a simple and robust calibration of the system, the SE for measurements performed on geometric shapes was less than 2 mm, i.e., less than the actual pixel size of the SPECT data. Biometric measurements of an anthropomorphic brain phantom were within 3%-5% of actual values. The mask data were used to register images of a brain phantom and of a volunteer's brain, respectively. The rigid transformation that allowed the merging of masks by visual inspection was applied to the 2 sets of SPECT slices to perform the fusion of the data. At the cost of an additional low-cost setup integrated into the gamma-camera gantry, real-time data about the surface of the head were obtained. As in all other surface-based techniques (as opposed to volume-based techniques), this method allows the match of data independently from the dataset of interest and facilitates further registration of data from any other source. The main advantage of this technique compared with other optically based methods is the robustness of the calibration procedure and the compactness of the sensor as a result of the colinearity of the projected beam and the reflected (diffused) beams of the conoscope. Taking into account the experimental nature of this preliminary work, significant improvements in the accuracy and speed of measurements (up to 1000 points/s) are expected.

  7. The Video Collaborative Localization of a Miner's Lamp Based on Wireless Multimedia Sensor Networks for Underground Coal Mines.

    PubMed

    You, Kaiming; Yang, Wei; Han, Ruisong

    2015-09-29

    Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner's lamp video collaborative localization algorithm was proposed to locate miners under the insufficient illumination and bifurcated tunnel structures of underground mines. In a bifurcation area, several camera nodes are deployed along the longitudinal direction of the tunnels, forming a wireless collaborative cluster to monitor and locate miners. Cap-lamps are taken as the identifying feature of miners under the insufficient illumination of underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner's lamp projects mapping points onto the image planes of the collaborative cameras, and the coordinates of these mapping points are calculated by the collaborative cameras. Multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are then established. To find the three-dimensional (3D) coordinates of the miner's lamp, a least-squares method is proposed to obtain the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, showing that the proposed miner's lamp video collaborative localization algorithm has good effectiveness, robustness and localization accuracy in real-world conditions of underground tunnels.
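
    The least-squares intersection of the camera-to-lamp lines has a simple closed form; the sketch below is a generic solution under assumed inputs (camera positions and direction vectors), not the authors' implementation.

    ```python
    import numpy as np

    def intersect_lines(centers, directions):
        """Point minimising the summed squared distance to a set of 3D lines.

        centers    : (N, 3) camera positions in the tunnel frame
        directions : (N, 3) vectors from each camera toward its image of the lamp
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(centers, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)     # projector orthogonal to the line
            A += P
            b += P @ c
        return np.linalg.solve(A, b)           # estimated 3D lamp position
    ```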

  8. Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera

    NASA Astrophysics Data System (ADS)

    Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.

    2017-05-01

    A novel calibration process for the RICOH THETA, a full-view fisheye camera, is proposed; the camera has numerous applications as a low-cost sensor in disciplines such as photogrammetry, robotics and machine vision. Ricoh developed this camera, which consists of two lenses and is able to capture the whole surrounding environment in one shot, in 2014. In this research, each lens is calibrated separately and the interior/relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network using the central and side images captured by the two lenses. Accordingly, the designed calibration network is treated as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After these corrections, the image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from their EOPs. Our experiments show that, by applying a 3 × 3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
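
    The transformation of corrected image coordinates to the unit sphere can be pictured with an ideal equidistant fisheye model (r = f·θ); this is a simplification of the paper's grid-corrected model, with assumed parameter names.

    ```python
    import numpy as np

    def fisheye_to_sphere(u, v, cx, cy, f):
        """Map an image point to spherical coordinates on the unit sphere,
        assuming an ideal equidistant projection r = f * theta."""
        x, y = u - cx, v - cy
        r = np.hypot(x, y)
        theta = r / f                    # polar angle from the optical axis
        phi = np.arctan2(y, x)           # azimuth around the optical axis
        return theta, phi
    ```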

  9. Calibration method for video and radiation imagers

    DOEpatents

    Cunningham, Mark F [Oak Ridge, TN; Fabris, Lorenzo [Knoxville, TN; Gee, Timothy F [Oak Ridge, TN; Goddard, Jr., James S.; Karnowski, Thomas P [Knoxville, TN; Ziock, Klaus-peter [Clinton, TN

    2011-07-05

    The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high-energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
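
    The calibration fit described in the patent is, at its core, a one-dimensional least-squares line between pixel and real-world x-coordinates; the sketch below uses invented marker values purely for illustration.

    ```python
    import numpy as np

    # Hypothetical calibration markers: imager pixel columns and surveyed
    # real-world x-coordinates (metres) of the same markers.
    pixel_x = np.array([42.0, 118.0, 205.0, 290.0, 361.0])
    world_x = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])

    # Least-squares line world_x ~= a * pixel_x + b
    a, b = np.polyfit(pixel_x, world_x, 1)

    def world_to_pixel(x_world):
        """Expected imager pixel column for a vehicle at real-world x_world."""
        return (x_world - b) / a
    ```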

  10. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection and based on an action camera. These cameras offer high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second and adjusting the walking speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion monitoring. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which, together with the image collection, results in very fast field work. If improved accuracy is needed, it can be achieved by surveying the control points with a total station, since the image resolution is 1/4 cm, although the field work time then increases.

  11. Field test studies of our infrared-based human temperature screening system embedded with a parallel measurement approach

    NASA Astrophysics Data System (ADS)

    Sumriddetchkajorn, Sarun; Chaitavon, Kosom

    2009-07-01

    This paper introduces a parallel measurement approach for fast infrared-based human temperature screening suitable for use in large public areas. Our key idea is based on the combination of simple image processing algorithms, infrared technology, and human flow management. With this multidisciplinary concept, we arrange as many people as possible in a two-dimensional space in front of a thermal imaging camera and then highlight all human facial areas through simple image filtering, morphological, and particle analysis processes. In this way, each individual's face in the live thermal image can be located and the maximum facial skin temperature can be monitored and displayed. Our experiment shows a measured 1 ms processing time in highlighting all human face areas. With a thermal imaging camera having an FOV lens of 24° × 18° and 320 × 240 active pixels, the maximum facial skin temperatures of three people's faces located 1.3 m from the camera can be simultaneously monitored and displayed at a measured rate of 31 fps, limited by the looping process in determining the coordinates of all faces. For our 3-day test under ambient temperatures of 24-30 °C, 57-72% relative humidity, and weak wind from outside the hospital building, hyperthermic patients could be identified with 100% sensitivity and 36.4% specificity when the temperature threshold level and the offset temperature value were appropriately chosen. Locating our system away from the building doors, air conditioners and electric fans, in order to eliminate wind blowing toward the camera lens, can significantly help improve the system's specificity.
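    A minimal sketch of that processing chain, assuming an 8-bit thermal frame in which faces are the warmest regions; the threshold, kernel size and minimum blob area are illustrative values, not the authors' settings.

```python
import cv2
import numpy as np

def face_hotspots(thermal_frame, threshold=200, min_area=150):
    """Return (centroid, hottest grey level) for each face-sized hot region in an 8-bit thermal frame."""
    _, mask = cv2.threshold(thermal_frame, threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # remove isolated hot pixels
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    faces = []
    for i in range(1, n):                                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            region_max = int(thermal_frame[labels == i].max())   # hottest pixel in the blob
            faces.append((tuple(centroids[i]), region_max))
    return faces
```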

  12. TH-AB-202-11: Spatial and Rotational Quality Assurance of 6DOF Patient Tracking Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belcher, AH; Liu, X; Grelewicz, Z

    2016-06-15

    Purpose: External tracking systems used for patient positioning and motion monitoring during radiotherapy are now capable of detecting both translations and rotations (6DOF). In this work, we develop a novel technique to evaluate the 6DOF performance of external motion tracking systems. We apply this methodology to an infrared (IR) marker tracking system and two 3D optical surface mapping systems in a common tumor 6DOF workspace. Methods: An in-house designed and built 6DOF parallel kinematics robotic motion phantom was used to follow input trajectories with sub-millimeter and sub-degree accuracy. The 6DOF positions of the robotic system were then tracked and recorded independently by three optical camera systems. A calibration methodology which associates the motion phantom and camera coordinate frames was first employed, followed by a comprehensive 6DOF trajectory evaluation, which spanned a full range of positions and orientations in a 20×20×16 mm and 5×5×5 degree workspace. The intended input motions were compared to the calibrated 6DOF measured points. Results: The technique found the accuracy of the IR marker tracking system to have maximal root mean square error (RMSE) values of 0.25 mm translationally and 0.09 degrees rotationally, in any one axis, comparing intended 6DOF positions to positions measured by the IR camera. The 6DOF RMSE discrepancy for the first 3D optical surface tracking unit yielded maximal values of 0.60 mm and 0.11 degrees over the same 6DOF volume. An earlier generation 3D optical surface tracker was observed to have worse tracking capabilities than both the IR camera unit and the newer 3D surface tracking system, with maximal RMSE of 0.74 mm and 0.28 degrees within the same 6DOF evaluation space. Conclusion: The proposed technique was effective at evaluating the performance of 6DOF patient tracking systems. All systems examined exhibited tracking capabilities at the sub-millimeter and sub-degree level within a 6DOF workspace.
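    The per-axis RMSE comparison is straightforward to reproduce. The sketch below uses hypothetical N×6 trajectories (three translations and three rotation angles) spanning the stated 20×20×16 mm / 5×5×5 degree workspace; the simulated noise levels are made up.

```python
import numpy as np

def per_axis_rmse(intended, measured):
    """Root mean square error between intended and measured 6DOF positions, one value per axis."""
    diff = np.asarray(measured) - np.asarray(intended)
    return np.sqrt(np.mean(diff ** 2, axis=0))

rng = np.random.default_rng(0)
intended = rng.uniform([-10, -10, -8, -2.5, -2.5, -2.5],
                       [10, 10, 8, 2.5, 2.5, 2.5], size=(200, 6))       # x, y, z [mm]; angles [deg]
measured = intended + rng.normal(0.0, [0.2, 0.2, 0.2, 0.07, 0.07, 0.07], size=(200, 6))
print(per_axis_rmse(intended, measured))
```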

  13. Estimating the coordinates of pillars and posts in the parking lots for intelligent parking assist system

    NASA Astrophysics Data System (ADS)

    Choi, Jae Hyung; Kuk, Jung Gap; Kim, Young Il; Cho, Nam Ik

    2012-01-01

    This paper proposes an algorithm for the detection of pillars and posts in video captured by a single camera mounted on the fore side of the room mirror in a car. The main purpose of this algorithm is to complement the weakness of current ultrasonic parking assist systems, which do not reliably find the exact position of pillars or do not recognize narrow posts. The proposed algorithm consists of three steps: straight line detection, line tracking, and estimation of the 3D position of pillars. In the first step, strong lines are found by the Hough transform. The second step is a combination of detection and tracking, and the third is the calculation of the 3D position of each line from the trajectory of relative positions and the camera parameters. Experiments on synthetic and real images show that the proposed method successfully locates and tracks the position of pillars, which helps the ultrasonic system to correctly locate the edges of pillars. It is believed that the proposed algorithm can also be employed as a basic element of a vision-based autonomous driving system.
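    The first step (finding strong straight lines with the Hough transform) can be sketched with OpenCV as below; the Canny and accumulator thresholds and the file name are illustrative assumptions, not the authors' values.

```python
import cv2
import numpy as np

def detect_strong_lines(gray):
    """Return (rho, theta) parameters of strong straight lines found by the standard Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    return [] if lines is None else [tuple(l[0]) for l in lines]

frame = cv2.imread("parking_frame.png", cv2.IMREAD_GRAYSCALE)            # hypothetical input frame
if frame is not None:
    for rho, theta in detect_strong_lines(frame):
        print(f"line: rho={rho:.1f}px, theta={np.degrees(theta):.1f}deg")
```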

  14. Online, offline, realtime: recent developments in industrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Boesemann, Werner

    2003-01-01

    In recent years industrial photogrammetry has emerged from a highly specialized niche technology to a well established tool in industrial coordinate measurement applications, with numerous installations in a significantly growing market of flexible and portable optical measurement systems. This is due to the development of powerful but affordable video and computer technology. The increasing industrial requirements for accuracy, speed, robustness and ease of use of these systems, together with a demand for the highest possible degree of automation, have forced universities and system manufacturers to develop hard- and software solutions to meet these requirements. The presentation will show the latest trends in hardware development, especially new-generation digital and/or intelligent cameras, aspects of image engineering like the use of controlled illumination or projection technologies, and algorithmic and software aspects like automation strategies or new camera models. The basic qualities of digital photogrammetry - portability and flexibility on the one hand and fully automated quality control on the other - sometimes lead to certain conflicts in the design of measurement systems for different online, offline, or real-time solutions. The presentation will further show how these tools and methods are combined in different configurations to cover the still growing demands of industrial end-users.

  15. Photogrammetry in the line: recent developments in industrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Boesemann, Werner

    2003-05-01

    In recent years industrial photogrammetry has emerged from a highly specialized niche technology to a well established tool in industrial coordinate measurement applications, with numerous installations in a significantly growing market of flexible and portable optical measurement systems. This is due to the development of powerful but affordable video and computer technology. The increasing industrial requirements for accuracy, speed, robustness and ease of use of these systems, together with a demand for the highest possible degree of automation, have forced universities and system manufacturers to develop hard- and software solutions to meet these requirements. The presentation will show the latest trends in hardware development, especially new-generation digital and/or intelligent cameras, aspects of image engineering like the use of controlled illumination or projection technologies, and algorithmic and software aspects like automation strategies or new camera models. The basic qualities of digital photogrammetry - portability and flexibility on the one hand and fully automated quality control on the other - sometimes lead to certain conflicts in the design of measurement systems for different online, offline or real-time solutions. The presentation will further show how these tools and methods are combined in different configurations to cover the still growing demands of industrial end-users.

  16. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers.

    PubMed

    Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P

    2017-01-01

    Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247 cm), 2) the distance from the grid to the patient tracker device (range 20 to 40 cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120 mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point to the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.

  17. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks in machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is more physically meaningful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
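    A simplified sketch of that back-projection residual is given below (lens distortion is omitted and the parameter packing is an assumption): each extracted corner is projected back onto the checkerboard plane (Z = 0 in the target frame) and compared with the ideal corner coordinates, and scipy.optimize.least_squares can refine the packed parameters over this residual.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def back_project_to_board(uv, K, R, t):
    """Intersect the viewing ray of pixel (u, v) with the board plane Z = 0 in the target frame."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])   # viewing ray in the camera frame
    ray_board = R.T @ ray_cam                                    # rotate the ray into the target frame
    origin = -R.T @ t                                            # camera centre in the target frame
    s = -origin[2] / ray_board[2]                                # scale so the point lies on Z = 0
    return (origin + s * ray_board)[:2]                          # (X, Y) on the board

def residuals(params, image_pts, board_pts):
    """params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz]; returns stacked 3D (in-plane) errors."""
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    t = params[7:10]
    K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
    return np.concatenate([back_project_to_board(uv, K, R, t) - np.asarray(xy)
                           for uv, xy in zip(image_pts, board_pts)])

# Refinement step (sketch): scipy.optimize.least_squares(residuals, x0, args=(image_pts, board_pts))
```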

  18. Automatic concrete cracks detection and mapping of terrestrial laser scan data

    NASA Astrophysics Data System (ADS)

    Rabah, Mostafa; Elhattab, Ahmed; Fayad, Atef

    2013-12-01

    Terrestrial laser scanning has become one of the standard technologies for object acquisition in surveying engineering. The high spatial resolution of imaging and the excellent capability of measuring 3D space by laser scanning have great potential when combined for both data acquisition and data compilation. Automatic crack detection from concrete surface images is very effective for non-destructive testing. The crack information can be used to decide the appropriate rehabilitation method to fix cracked structures and prevent catastrophic failure. In practice, cracks on concrete surfaces are traced manually for diagnosis, so automatic crack detection is highly desirable for efficient and objective crack assessment. This paper presents a method for automatic detection and mapping of concrete cracks from data obtained during a laser scanning survey. Crack detection and mapping is achieved in three steps: shading correction of the original image, crack detection, and crack mapping and processing. The detected crack is defined in a pixel coordinate system. To remap the crack into the referred coordinate system, a reverse-engineering approach is used, based on a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.

  19. Airborne multicamera system for geo-spatial applications

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic; Kulkarni, Rahul R.; Lyle, Stacey; Steidley, Carl W.

    2003-08-01

    Airborne remote sensing has many applications that include vegetation detection, oceanography, marine biology, geographical information systems, and environmental coastal science analysis. Remotely sensed images, for example, can be used to study the aftermath of episodic events such as the hurricanes and floods that occur year round in the coastal bend area of Corpus Christi. This paper describes an Airborne Multi-Spectral Imaging System that uses digital cameras to provide high resolution at very high rates. The software is based on Delphi 5.0 and IC Imaging Control's ActiveX controls. Both the time and the GPS coordinates are recorded. Three successful test flights have been conducted so far. The paper presents flight test results and discusses the issues being addressed to fully develop the system.

  20. VizieR Online Data Catalog: Spectral types of stars in CoRoT fields (Sebastian+, 2012)

    NASA Astrophysics Data System (ADS)

    Sebastian, D.; Guenther, E. W.; Schaffenroth, V.; Gandolfi, D.; Geier, S.; Heber, U.; Deleuil, M.; Moutou, C.

    2012-03-01

    Spectroscopic classification for 2950 O-, B-, and A-type stars in the CoRoT fields IRa01, LRa01, and LRa02. Stars are named by their CoRoT identifier, and coordinates are given. The visual magnitudes were obtained with the Wide Field Camera filter system of the Isaac Newton Telescope at Roque de los Muchachos Observatory on La Palma and can be converted into Landolt standards, as shown in Deleuil et al. (2009AJ....138..649D). (1 data file).

  1. Detailed analysis of an optimized FPP-based 3D imaging system

    NASA Astrophysics Data System (ADS)

    Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges

    2016-05-01

    In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency and multi-phase-shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, compensation of the phase error caused by the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques, and the tradeoff between processing speed and high accuracy is discussed in detail. Fourth, generalized camera and system calibration are developed for phase-to-real-world coordinate transformation. The calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights, employing a nonlinear least-squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D photo to the 3D height map. The last step is to perform 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full field of view reconstruction. The system is experimentally constructed using compact, portable, and low-cost off-the-shelf components. A MATLAB® based GUI is developed to control and synchronize the whole system.
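    The first step relies on phase-shifted sinusoidal fringes. The sketch below is the textbook N-step phase-shifting estimator for the wrapped phase, not necessarily the authors' exact multi-frequency scheme.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N fringe images I_n = A + B*cos(phi + 2*pi*n/N), equal shifts of 2*pi/N."""
    images = np.asarray(images, dtype=float)        # shape (N, H, W)
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(deltas), images, axes=1)   # sum_n I_n * sin(delta_n), per pixel
    den = np.tensordot(np.cos(deltas), images, axes=1)   # sum_n I_n * cos(delta_n), per pixel
    return -np.arctan2(num, den)                          # wrapped phase in (-pi, pi]
```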

  2. Development and calibration of an accurate 6-degree-of-freedom measurement system with total station

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui

    2016-12-01

    To meet the demand for high-accuracy, long-range and portable pose measurement in large-scale metrology, this paper develops a 6-degree-of-freedom (6-DOF) measurement system based on a total station, exploiting its advantages of long range and relatively high accuracy. The cooperative target sensor, which is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer, is designed to be portable in use. Subsequently, a precise mathematical model is proposed from the input variables observed by the total station, imaging system and inclinometer to the six output pose variables. The model must be calibrated at two levels: the intrinsic parameters of the imaging system, and the rotation matrix between the coordinate systems of the camera and the inclinometer. Corresponding approaches are presented for both. For the first level, we introduce a precise two-axis rotary table as a calibration reference, and for the second level we propose a calibration method based on varying the pose of a rigid body carrying the target sensor and a reference prism. Finally, through simulations and various experiments, the feasibility of the measurement model and calibration methods is validated and the measurement accuracy of the system is evaluated.

  3. Coordination and variability in the elite female tennis serve.

    PubMed

    Whiteside, David; Elliott, Bruce Clifford; Lay, Brendan; Reid, Machar

    2015-01-01

    Enhancing the understanding of coordination and variability in the tennis serve may be of interest to coaches as they work with players to improve performance. The current study examined coordinated joint rotations and variability in the lower limbs, trunk, serving arm and ball location in the elite female tennis serve. Pre-pubescent, pubescent and adult players performed maximal effort flat serves while a 22-camera 500 Hz motion analysis system captured three-dimensional body kinematics. Coordinated joint rotations in the lower limbs and trunk appeared most consistent at the time players left the ground, suggesting that they coordinate the proximal elements of the kinematic chain to ensure that they leave the ground at a consistent time, in a consistent posture. Variability in the two degrees of freedom at the elbow became significantly greater closer to impact in adults, possibly illustrating the mechanical adjustments (compensation) these players employed to manage the changing impact location from serve to serve. Despite the variable ball toss, the temporal composition of the serve was highly consistent and supports previous assertions that players use the location of the ball to regulate their movement. Future work should consider these associations in other populations, while coaches may use the current findings to improve female serve performance.

  4. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of the camera correctly, we must determine accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points that have high curvature and lie at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to retrieve the gray-difference threshold adaptively. That makes it possible to pick up the correct inner chessboard corners under all kinds of gray contrast. Experimental results based on this method show it to be feasible.
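    One plausible way to choose the SUSAN gray-difference threshold adaptively for a chessboard image is sketched below; deriving it from the Otsu separation between the dark and bright intensity modes is an assumption of this sketch, not necessarily the authors' rule.

```python
import cv2
import numpy as np

def adaptive_susan_threshold(gray, fraction=0.25):
    """Pick the SUSAN gray-difference threshold t from the contrast of an 8-bit chessboard image."""
    otsu_level, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dark_mean = gray[gray <= otsu_level].mean()      # mean grey level of the dark squares
    bright_mean = gray[gray > otsu_level].mean()     # mean grey level of the bright squares
    return fraction * (bright_mean - dark_mean)      # threshold t scales with the local contrast
```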

  5. Bias Reduction and Filter Convergence for Long Range Stereo

    NASA Technical Reports Server (NTRS)

    Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav

    2005-01-01

    We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
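    The range-dependent bias mentioned above can be illustrated numerically: with Z = f·B/d, zero-mean noise on the disparity d inflates the expected triangulated range because E[1/d] > 1/E[d]. The focal length, baseline and noise level below are made-up values.

```python
import numpy as np

f_pix, baseline, true_range = 800.0, 0.3, 40.0            # focal length [px], baseline [m], range [m]
true_disp = f_pix * baseline / true_range                  # ideal disparity [px]
rng = np.random.default_rng(1)
noisy_disp = true_disp + rng.normal(0.0, 0.25, 100_000)    # 0.25 px matching noise
mean_est_range = np.mean(f_pix * baseline / noisy_disp)    # Monte Carlo mean of triangulated range
series_approx = true_range * (1 + 0.25**2 / true_disp**2)  # second-order series expansion of the bias
print(true_range, mean_est_range, series_approx)           # triangulated range over-estimates the truth
```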

  6. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765

  7. Dual multispectral and 3D structured light laparoscope

    NASA Astrophysics Data System (ADS)

    Clancy, Neil T.; Lin, Jianyu; Arya, Shobhit; Hanna, George B.; Elson, Daniel S.

    2015-03-01

    Intraoperative feedback on tissue function, such as blood volume and oxygenation, would be useful to the surgeon in cases where current clinical practice relies on subjective measures, such as identification of ischaemic bowel or tissue viability during anastomosis formation. Also, tissue surface profiling may be used to detect and identify certain pathologies, as well as to assess aspects of tissue health such as gut motility. In this paper a dual-modality laparoscopic system is presented that combines multispectral reflectance and 3D surface imaging. White light illumination from a xenon source is detected by a laparoscope-mounted fast filter wheel camera to assemble a multispectral image (MSI) cube. Surface shape is then calculated using a spectrally encoded structured light (SL) pattern detected by the same camera and triangulated using an active stereo technique. Images of porcine small bowel were acquired during open surgery. Tissue reflectance spectra were acquired and blood volume was calculated at each spatial pixel across the bowel wall and mesentery. SL features were segmented and identified using a 'normalised cut' algorithm and the colour vector of each spot. Using the 3D geometry defined by the camera coordinate system, the multispectral data could be overlaid onto the surface mesh. Dual MSI and SL imaging has the potential to provide augmented views to the surgeon, supplying diagnostic information related to blood supply health and organ function. Future work on this system will include filter optimisation to reduce noise in tissue optical property measurement and to minimise spot identification errors in the SL pattern.

  8. Virtual Reality Glasses and "Eye-Hands Blind Technique" for Microsurgical Training in Neurosurgery.

    PubMed

    Choque-Velasquez, Joham; Colasanti, Roberto; Collan, Juhani; Kinnunen, Riina; Rezai Jahromi, Behnam; Hernesniemi, Juha

    2018-04-01

    Microsurgical skills and eye-hand coordination need continuous training to be developed and refined. However, well-equipped microsurgical laboratories are not so widespread as their setup is expensive. Herein, we present a novel microsurgical training system that requires a high-resolution personal computer screen, smartphones, and virtual reality glasses. A smartphone placed on a holder at a height of about 15-20 cm from the surgical target field is used as the webcam of the computer. A specific software is used to duplicate the video camera image. The video may be transferred from the computer to another smartphone, which may be connected to virtual reality glasses. Using the previously described training model, we progressively performed more and more complex microsurgical exercises. It did not take long to set up our system, thus saving time for the training sessions. Our proposed training model may represent an affordable and efficient system to improve eye-hand coordination and dexterity in using not only the operating microscope but also endoscopes and exoscopes. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Motion video analysis using planar parallax

    NASA Astrophysics Data System (ADS)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.

  10. Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition

    DTIC Science & Technology

    1992-09-01

    Only fragments of the abstract are legible; they concern recovering projective structure from two uncalibrated views, in which a reference plane is fixed with respect to the camera coordinate frame, a rigid camera motion relates the two views, and four object points Pj, j = 1, ..., 4, define a second reference plane (Maybank 1990; Faugeras, Luong and Maybank 1992; Rieger-Lawton 1985; Faugeras and Maybank 1990; Hildreth 1991).

  11. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    PubMed

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

    Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on opto-electronic measuring techniques is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform an image pixel coordinate to a space coordinate. Images of the wheel sets were captured as the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.

  12. The Video Collaborative Localization of a Miner’s Lamp Based on Wireless Multimedia Sensor Networks for Underground Coal Mines

    PubMed Central

    You, Kaiming; Yang, Wei; Han, Ruisong

    2015-01-01

    Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner's lamp video collaborative localization algorithm was proposed to locate miners in scenes of insufficient illumination and bifurcated structures of underground tunnels. In bifurcation areas, several camera nodes are deployed along the longitudinal direction of the tunnels, forming a wireless collaborative cluster to monitor and locate miners. Cap-lamps are regarded as the distinguishing feature of miners under the insufficient illumination of underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner's lamp projects mapping points onto the imaging planes of the collaborative cameras, and the coordinates of these mapping points are calculated by the collaborative cameras. Then, multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are established. To find the three-dimensional (3D) coordinates of the miner's lamp, a least-squares method is proposed to compute the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, and they show that the proposed algorithm has good effectiveness, robustness and localization accuracy in real-world conditions of underground tunnels. PMID:26426023

  13. The integration of astro-geodetic data observed with ACSYS to the local geoid models Istanbul-Turkey

    NASA Astrophysics Data System (ADS)

    Halicioglu, Kerem; Ozludemir, M. Tevfik; Deniz, Rasim; Ozener, Haluk; Albayrak, Muge; Ulug, Rasit; Basoglu, Burak

    2017-04-01

    Astro-geodetic deflections of the vertical components provide accurate and valuable information on the Earth's gravity field. Conventional methods require considerable effort and time, whereas new methods, namely digital zenith camera systems (DZCS), have been designed to eliminate drawbacks of the conventional methods, such as observer-dependent errors and long observation times, and to improve the observation accuracy. The observation principle is based on capturing star images near the zenithal direction to determine the astronomical coordinates of the station point using an integration of CCD, telescope, tiltmeters, and GNSS devices. In Turkey a new DZCS has been designed and tested on a control network located in Istanbul, for which the geoid height differences were known with an accuracy of ±3.5 cm. The Astro-geodetic Camera System (ACSYS) was used to determine the deflections of the vertical components with an accuracy of ±0.1-0.3 arc seconds, and the results were compared with geoid height differences using the astronomical levelling procedure. The results have also been compared with those calculated from global geopotential models. In this study the recent results of the first digital zenith camera system of Turkey are presented and future studies are introduced, such as current developments of the system, including hardware and software upgrades, as well as the new observation strategy of the ACSYS. We also discuss the contribution and integration of the astro-geodetic deflections of the vertical components to geoid determination studies in the light of current ongoing projects being operated in Turkey.

  14. A design of optical modulation system with pixel-level modulation accuracy

    NASA Astrophysics Data System (ADS)

    Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu

    2018-01-01

    Vision measurement has been widely used in the fields of dimensional measurement and surface metrology. However, traditional vision measurement methods have many limitations, such as low dynamic range and poor reconfigurability. Optical modulation before image formation offers high dynamic range, high accuracy and more flexibility, and the modulation accuracy is the key parameter that determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of the digital micromirror device, a CCD camera and a lens. First, we achieved accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels by means of moiré fringes and an image processing step of sampling and interpolation. Then we built three coordinate systems and calculated the mathematical relationship between the coordinates of the digital micro-mirrors and the CCD pixels using a checkerboard pattern. A verification experiment proves that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of the system meets the requirements of modulation. Furthermore, the highly reflecting edge of a metal circular piece can be detected using the system, which proves the effectiveness of the optical modulation system.

  15. Development and application of 3-D foot-shape measurement system under different loads

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-03-01

    A 3-D foot-shape measurement system for different loads, based on the laser-line-scanning principle, was designed and a model of the measurement system was developed. 3-D foot-shape measurements without blind areas under different loads, and automatic extraction of foot parameters, are achieved with the system. A global calibration method for the CCD cameras, using a one-axis motion unit in the measurement system and specialized calibration kits, is presented. Errors caused by the nonlinearity of the CCD cameras and other devices, and by the installation of the one-axis motion platform, the laser plane and the toughened glass plane, can be eliminated by using a nonlinear coordinate mapping function and the Powell optimization method in calibration. Foot measurements under different loads were conducted for 170 participants, and the statistical foot parameter measurement results for male and female participants under the non-weight condition, as well as the changes of foot parameters under the half-body-weight, full-body-weight and over-body-weight conditions compared with the non-weight condition, are presented. 3-D foot-shape measurement under different loads makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers and athletes.

  16. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist that use electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State-of-the-art methods are based on a complex series of error-prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the amount of calibration steps to a single calibration procedure, we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy while additionally reducing the overall calibration complexity.

  17. Approach for Improving the Integrated Sensor Orientation

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Ercolin Filho, L.; Graça, N.; Centeno, J.

    2016-06-01

    The direct determination of exterior orientation parameters (EOP) of aerial images via integration of an Inertial Measurement Unit (IMU) and GPS is often used in photogrammetric mapping nowadays. The accuracy of the EOP depends on accurate knowledge of the sensor mounting parameters at the time the job is performed (offsets of the IMU relative to the projection centre and the boresight misalignment angles between the IMU and the photogrammetric coordinate system). In principle, when the EOP values do not achieve the required accuracies for the photogrammetric application, the approach known as Integrated Sensor Orientation (ISO) is used to refine the direct EOP. The ISO approach requires accurate Interior Orientation Parameters (IOP) and the standard deviations of the EOP under flight conditions. This paper investigates the feasibility of using in situ camera calibration to obtain these requirements. The camera calibration uses a small sub-block of images extracted from the entire block. A digital Vexcel UltraCam XP camera connected to an APPLANIX POS AVTM system was used to acquire two small blocks of images that were used in this study. The blocks have different flight heights and opposite flight directions. The proposed methodology significantly improved the vertical and horizontal accuracies of the 3D point intersection. Using a minimum set of control points, the horizontal and vertical accuracies achieved nearly one image pixel of resolution on the ground (GSD). The experimental results are shown and discussed.

  18. The Kinect as an interventional tracking system

    NASA Astrophysics Data System (ADS)

    Wang, Xiang L.; Stolka, Philipp J.; Boctor, Emad; Hager, Gregory; Choti, Michael

    2012-02-01

    This work explores the suitability of low-cost sensors for "serious" medical applications, such as tracking of interventional tools in the OR, for simulation, and for education. Although such tracking - i.e. the acquisition of pose data e.g. for ultrasound probes, tissue manipulation tools, needles, but also tissue, bone etc. - is well established, it relies mostly on external devices such as optical or electromagnetic trackers, both of which mandate the use of special markers or sensors attached to each single entity whose pose is to be recorded, and also require their calibration to the tracked entity, i.e. the determination of the geometric relationship between the marker's and the object's intrinsic coordinate frames. The Microsoft Kinect sensor is a recently introduced device for full-body tracking in the gaming market, but it was quickly hacked - due to its wide range of tightly integrated sensors (RGB camera, IR depth and greyscale camera, microphones, accelerometers, and basic actuation) - and used beyond this area. As its field of view and its accuracy are within reasonable usability limits, we describe a medical needle-tracking system for interventional applications based on the Kinect sensor, standard biopsy needles, and no necessary attachments, thus saving both cost and time. Its twin cameras are used as a stereo pair to detect needle-shaped objects, reconstruct their pose in four degrees of freedom, and provide information about the most likely candidate.

  19. A verification and errors analysis of the model for object positioning based on binocular stereo vision for airport surface surveillance

    NASA Astrophysics Data System (ADS)

    Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun

    2014-12-01

    A test environment was established to obtain experimental data for verifying the positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts are given comparing the positions of objects measured with DGPS (with a measurement accuracy of 10 centimeters), taken as the reference, and those measured with the positioning model. Sources of error in the visual measurement model are analyzed, and the effects of errors in the camera and system parameters on the accuracy of the positioning model are examined based on error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (Multilateration).

  20. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes to improving the calibration accuracy in two ways. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
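    A condensed sketch of the objective being minimised follows: LiDAR feature points are transformed by candidate extrinsics (R, t), projected with the camera intrinsics K, and compared against the corresponding image features, with per-correspondence weights standing in for the accuracy weighting described above (a robust loss would play the role of the penalizing function). The array shapes and the weighting scheme are assumptions of this sketch.

```python
import numpy as np

def weighted_projection_residuals(lidar_pts, image_pts, weights, K, R, t):
    """lidar_pts: (N, 3) target features in the LiDAR frame; image_pts: (N, 2) pixel correspondences."""
    cam_pts = (R @ lidar_pts.T).T + t              # LiDAR frame -> camera frame
    proj = (K @ cam_pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]              # perspective division to pixel coordinates
    return (weights[:, None] * (proj - image_pts)).ravel()   # weighted reprojection residuals
```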

  1. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms allow localizing the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extensions, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their image detections onto the point cloud. The method is tested with synthetic image sequences of a drive with a forward-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
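    The abstract states that several individual scale values are fused but does not give the fusion rule; the sketch below assumes a simple inverse-variance weighted average, with illustrative numbers.

```python
import numpy as np

def fuse_scales(scales, sigmas):
    """Inverse-variance weighted fusion of individual metric scale estimates."""
    w = 1.0 / np.asarray(sigmas) ** 2
    fused = np.sum(w * np.asarray(scales)) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))
    return fused, fused_sigma

# Hypothetical scale values from lane width, room height and a traffic sign, with their uncertainties
print(fuse_scales([0.052, 0.049, 0.055], [0.004, 0.002, 0.006]))
```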

  2. Topographic Map of Pathfinder Landing Site

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Topographic map of the landing site, to a distance of 60 meters from the lander, in the LSC coordinate system. The lander is shown schematically in the center; a 2.5-meter-radius circle (black) centered on the camera was not mapped. Gentle relief [root mean square (rms) elevation variation 0.5 m; rms adirectional slope 4°] and the organization of topography into northwest- and northeast-trending ridges about 20 meters apart are apparent. Roughly 30% of the illustrated area is hidden from the camera behind these ridges. Contours (0.2 m interval) and color coding of elevations were generated from a digital terrain model, which was interpolated by kriging from approximately 700 measured points. Angular and parallax point coordinates were measured manually on a large (5 m length) anaglyphic uncontrolled mosaic and used to calculate Cartesian (LSC) coordinates. Errors in azimuth on the order of 1° are therefore likely; elevation errors were minimized by referencing elevations to the local horizon. The uncertainty in range measurements increases quadratically with range. Given a measurement error of 1/2 pixel, the expected precision in range is 0.3 meter at 10 meter range, and 10 meters at 60 meter range. Repeated measurements were made, compared, and edited for consistency to improve the range precision. Systematic errors undoubtedly remain and will be corrected in future maps compiled digitally from geometrically controlled images. Cartographic processing by U.S. Geological Survey.

    NOTE: original caption as published in Science Magazine

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).

  3. The Photogrammetry Cube

    NASA Technical Reports Server (NTRS)

    2008-01-01

    We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs, the ability to process any number of reference targets and any number of camera images, user-friendly editing features (including zoom in/out, translate, and load/unload), routines to help mark reference points with a Find function while comparing them with the reference point database file, and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV, using for example a survey theodolite or a FaroArm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.

  4. Current status of Polish Fireball Network

    NASA Astrophysics Data System (ADS)

    Wiśniewski, M.; Żołądek, P.; Olech, A.; Tyminski, Z.; Maciejewski, M.; Fietkiewicz, K.; Rudawska, R.; Gozdalski, M.; Gawroński, M. P.; Suchodolski, T.; Myszkiewicz, M.; Stolarz, M.; Polakowski, K.

    2017-09-01

    The Polish Fireball Network (PFN) is a project to monitor regularly the sky over Poland in order to detect bright fireballs. In 2016 the PFN consisted of 36 continuously active stations with 57 sensitive analogue video cameras and 7 high resolution digital cameras. In our observations we also use spectroscopic and radio techniques. A PyFN software package for trajectory and orbit determination was developed. The PFN project is an example of successful participation of amateur astronomers who can provide valuable scientific data. The network is coordinated by astronomers from Copernicus Astronomical Centre in Warsaw, Poland. In 2011-2015 the PFN cameras recorded 214,936 meteor events. Using the PFN data and the UFOOrbit software 34,609 trajectories and orbits were calculated. In the following years we are planning intensive modernization of the PFN network including installation of dozens of new digital cameras.

  5. Pilot implementation of a technologically advanced system for the optimization of pre-hospital, trauma patient care.

    PubMed

    Vagianos, Constantine E; Dimopoulou, Efi; Tsiftsis, Dimitrios; Spyropoulos, Charalambos; Spyrakopoulos, Panagiotis; Vagenas, Konstantinos

    2010-07-01

    Cooperation between medical informatics, wireless communication and pre-hospital emergency services is essential for the optimal pre-hospital patient treatment. The use of technological innovations improves medical care in the pre-hospital setting with regard to the organization of an integrated center, which coordinates all parties involved for the patient's best interest. A dispatch center was developed in the city of Patras, in southwestern Greece, equipped with a Geographic Information System (GIS), which immediately points out the location of emergency vehicles (EVs) on a digital map depicting the city plan. Additionally, three ambulances of the National Center of Immediate Aid (NCIA) were equipped with a decentralized traffic management system for the vehicle's traffic priority at signaled junctions. The system consisted of a cellular-based (GSM) telemedicine module, a Global Positioning System (GPS) and a web camera system in the vehicle cabin. The aforementioned system provided considerable assistance to the pre-hospital treatment first by selecting the ambulance closest to the accident's location and then by pinpointing the optimum route to the hospital, thus significantly reducing the overall transportation time. The project's objective to coordinate emergency hospital departments involved in the treatment of trauma patients with other emergency services by utilizing high technology was achieved within this interdisciplinary effort.

  6. Line of sight pointing technology for laser communication system between aircrafts

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Liu, Yunqing; Song, Yansong

    2017-12-01

    In space optical communications, it is important to obtain the most efficient performance from the line of sight (LOS) pointing system. Errors in position (latitude, longitude, and altitude), attitude angles (pitch, yaw, and roll), and installation angles among the different coordinate systems are usually unavoidable when assembling and operating an aircraft optical communication terminal. These errors lead to pointing errors and make it difficult for the LOS system to point at the opposite terminal and establish a communication link. The LOS pointing technology of an aircraft optical communication system has been researched using a transformation matrix between the coordinate systems of two aircraft terminals. A method of LOS calibration has been proposed to reduce the pointing error. In a flight test, a successful 144-km link was established between two aircraft. The position and attitude angles of the aircraft, provided by a double-antenna GPS/INS system, were used to calculate the pointing angles in azimuth and elevation. The size of the field of uncertainty (FOU) and the pointing accuracy are analyzed based on error theory, and they have also been measured using an observation camera installed next to the optical LOS. Our results show that the FOU of aircraft optical communications is 10 mrad without a filter, which provides the foundation for the acquisition strategy and scanning time.
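    The pointing-angle step can be illustrated with a minimal sketch: assuming both terminals' positions are already expressed in a common local level frame (e.g. north-east-down) and the attitude angles come from the GPS/INS, the line-of-sight vector is rotated into the body frame and converted to azimuth and elevation. Function and variable names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rot_level_to_body(yaw, pitch, roll):
    """Rotation from the local level frame to the body frame (Z-Y-X Euler order)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return (Rz @ Ry @ Rx).T  # transpose of body-to-level

def pointing_angles(own_pos, partner_pos, yaw, pitch, roll):
    """Azimuth/elevation (radians) of the partner terminal in the body frame.

    own_pos, partner_pos : positions in a common local north-east-down frame, metres.
    """
    los_level = partner_pos - own_pos
    los_body = rot_level_to_body(yaw, pitch, roll) @ los_level
    az = np.arctan2(los_body[1], los_body[0])
    el = np.arctan2(-los_body[2], np.hypot(los_body[0], los_body[1]))
    return az, el
```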

  7. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in the cases of vehicle movement, sport biomechanics, and the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 machine vision cameras to acquire video sequences of object motion. All cameras work synchronously at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system has been used in a number of different application fields and demonstrated high accuracy and a high level of automation.

  8. Automatic detection system of shaft part surface defect based on machine vision

    NASA Astrophysics Data System (ADS)

    Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang

    2015-05-01

    Detection of physical surface damage is an important part of shaft part quality inspection, and traditional detection methods rely mostly on human visual inspection, which suffers from low efficiency and poor reliability. In order to improve the automation level of the quality inspection of shaft parts and establish a relevant industry quality standard, a machine vision inspection system connected with an MCU was designed to realize surface detection of shaft parts. The system adopts a monochrome line-scan digital camera and uses dark-field, forward illumination to acquire images with high contrast; after image filtering and enhancement, the images are segmented into binary images using the maximum between-class variance (Otsu) method; the main contours are then extracted based on aspect-ratio and area criteria; the coordinates of the centre of gravity of each defect area, i.e. the locating point coordinates, are then calculated; finally, the locations of the defect areas are marked by a coding pen communicating with the MCU. Experiments showed that no defect was missed and the false alarm rate was lower than 5%, demonstrating that the designed system meets the demands of on-line, real-time detection of shaft parts.
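    A hedged sketch of the described processing chain, written with OpenCV rather than the authors' implementation: median filtering, Otsu (maximum between-class variance) binarisation, contour screening by area and aspect ratio, and centroid (locating point) computation. The screening thresholds are placeholders.

```python
import cv2

def locate_defects(gray, min_area=50.0, max_aspect=10.0):
    """Return centroid coordinates of candidate defect regions.

    gray : 8-bit single-channel image from the line-scan camera.
    min_area, max_aspect : placeholder screening thresholds.
    """
    blurred = cv2.medianBlur(gray, 5)                        # noise filtering
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu segmentation
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        area = cv2.contourArea(c)
        x, y, w, h = cv2.boundingRect(c)
        aspect = max(w, h) / max(min(w, h), 1)
        if area < min_area or aspect > max_aspect:           # screen by area and aspect ratio
            continue
        m = cv2.moments(c)
        if m["m00"] > 0:                                     # centre of gravity of the defect
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```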

  9. Real time three dimensional sensing system

    DOEpatents

    Gordon, S.J.

    1996-12-31

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.
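    The epipolar-constraint step described above can be illustrated as follows (a sketch, not the patented implementation): given a fundamental matrix F relating the two cameras, the epipolar line of a selected stripe pixel is computed in the second image and candidate stripe pixels near that line are retained; the final disambiguation against a known light plane is assumed to happen afterwards.

```python
import numpy as np

def epipolar_line(F, x1):
    """Epipolar line l = F @ x1 in the second image (a*u + b*v + c = 0)."""
    x1h = np.array([x1[0], x1[1], 1.0])
    l = F @ x1h
    return l / np.hypot(l[0], l[1])     # normalise so |(a, b)| = 1

def candidates_on_epipolar_line(F, x1, stripe_pixels, max_dist=1.0):
    """Among stripe pixels in image 2, keep those lying close to the epipolar
    line of x1; the final choice is the candidate whose triangulated point
    lies nearest a known light plane (not shown here)."""
    l = epipolar_line(F, x1)
    keep = []
    for (u, v) in stripe_pixels:
        d = abs(l[0] * u + l[1] * v + l[2])   # point-to-line distance in pixels
        if d <= max_dist:
            keep.append((u, v))
    return keep
```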

  10. Dual Oculometer System for Aviation Crew Assessment

    NASA Technical Reports Server (NTRS)

    Latorella, Kara; Ellis, Kyle K.; Lynn, William A.; Frasca, Dennis; Burdette, Daniel W.; Feigh, Charles T.; Douglas, Alan L.

    2010-01-01

    Oculometers, or eye trackers, are a useful tool for ascertaining the manner in which pilots deploy visual attentional resources, and for assessing the degree to which stimuli capture attention exogenously. The aim of this effort was to obtain oculometer data comfortably, unobtrusively, reliably, and with good spatial resolution over a standard B757-like flight deck for both individuals in a crew. We chose to implement two remote, 5-camera Smarteye systems, which were crafted for this purpose to operate harmoniously. We present here the results of validation exercises, lessons learned for improving data quality, and initial thoughts on the use of paired oculometer data to reflect crew workload, coordination, and situation awareness in the aggregate.

  11. Real time three dimensional sensing system

    DOEpatents

    Gordon, Steven J.

    1996-01-01

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.

  12. Analysis of straw row in the image to control the trajectory of the agricultural combine harvester

    NASA Astrophysics Data System (ADS)

    Shkanaev, Aleksandr Yurievich; Polevoy, Dmitry Valerevich; Panchenko, Aleksei Vladimirovich; Krokhina, Darya Alekseevna; Nailevish, Sadekov Rinat

    2018-04-01

    The paper proposes a solution for the automatic operation of a combine harvester along straw rows using images from a camera installed in the cab of the harvester. A U-Net is used to recognize straw rows in the image. The edges of the row are approximated in the segmented image by curved lines and then converted into the harvester coordinate system for the automatic operating system. The new network architecture and the approaches to row approximation have improved the quality of the recognition task and the frame processing speed to 96% and 7.5 fps, respectively.
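    A minimal sketch of the post-segmentation steps, assuming the U-Net mask has already been reduced to edge pixels and that a ground-plane homography from the extrinsic calibration is available; names and the polynomial degree are illustrative, not from the paper.

```python
import numpy as np

def fit_row_edge(edge_pixels, degree=2):
    """Fit a curve u = f(v) to segmented row-edge pixels (u: column, v: row)."""
    u = edge_pixels[:, 0].astype(float)
    v = edge_pixels[:, 1].astype(float)
    return np.polyfit(v, u, degree)

def edge_to_harvester_frame(coeffs, v_samples, H_img_to_ground):
    """Sample the fitted edge and map it into the harvester (ground) frame
    through a planar homography obtained from the extrinsic calibration."""
    u = np.polyval(coeffs, v_samples)
    pts = np.stack([u, v_samples, np.ones_like(u)])   # homogeneous pixel coordinates
    ground = H_img_to_ground @ pts
    return (ground[:2] / ground[2]).T                 # (x, y) on the ground plane
```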

  13. A CAD/CAE analysis of photographic and engineering data

    NASA Technical Reports Server (NTRS)

    Goza, S. Michael; Peterson, Wayne L.

    1987-01-01

    In the investigation of the STS 51L accident, NASA engineers were given the task of visual analysis of photographic data extracted from the tracking cameras located at the launch pad. An analysis of the rotations associated with the right Solid Rocket Booster (SRB) was also performed. The visual analysis involved pinpointing coordinates of specific areas on the photographs. The objective of the analysis on the right SRB was to duplicate the rotations provided by the SRB rate gyros and to determine the effects of the rotations on the launch configuration. To accomplish the objectives, computer aided design and engineering was employed. The solid modeler, GEOMOD, inside the Structural Dynamics Research Corp. I-DEAS package, proved invaluable. The problem areas that were encountered and the corresponding solutions that were obtained are discussed. A brief description detailing the construction of the computer generated solid model of the STS launch configuration is given. A discussion of the coordinate systems used in the analysis is provided for the purpose of positioning the model in coordinate space. The techniques and theory used in the model analysis are described.

  14. A fiber optic probe coupled low-cost CMOS-camera-based system for simultaneous measurement of oxy-, deoxyhemoglobin, and blood flow

    NASA Astrophysics Data System (ADS)

    Seong, Myeongsu; Phillips, Zephaniah; Mai, Phuong M.; Yeo, Chaebeom; Song, Cheol; Lee, Kijoon; Kim, Jae G.

    2015-07-01

    Appropriate oxygen supply and blood flow are important for coordinating body functions and maintaining life. To measure both oxygen supply and blood flow simultaneously, we developed a system that combines near-infrared spectroscopy (NIRS) and diffuse speckle contrast analysis (DSCA). Our system is more cost-effective and compact than combined systems such as diffuse correlation spectroscopy (DCS)-NIRS or a DCS flow oximeter, and offers the same quantitative information. In this article, we present the configuration of DSCA-NIRS and preliminary data from an arm cuff occlusion and a repeated gripping exercise. With further investigation, we believe that DSCA-NIRS can be a useful tool for the fields of neuroscience, muscle physiology, and metabolic diseases such as diabetes.

  15. 3D Scanning of Live Pigs System and its Application in Body Measurements

    NASA Astrophysics Data System (ADS)

    Guo, H.; Wang, K.; Su, W.; Zhu, D. H.; Liu, W. L.; Xing, Ch.; Chen, Z. R.

    2017-09-01

    The shape of a live pig is an important indicator of its health and value, whether for breeding or for carcass quality. This paper implements a prototype system for 3D scanning of the body surface of a single live pig based on two consumer depth cameras, utilizing 3D point cloud data. The cameras are calibrated in advance to have a common coordinate system. The live 3D point cloud stream of a moving pig is obtained simultaneously from different viewpoints by two Xtion Pro Live sensors. A novel detection method is proposed and applied to automatically detect, from the point cloud stream, the frames containing pigs with the correct posture, according to the geometric characteristics of the pig's shape. The proposed method is incorporated in a hybrid scheme that serves as the preprocessing step in a body measurement framework for pigs. Experimental results show the portability of our scanning system and the effectiveness of our detection method. Furthermore, the updated point cloud preprocessing software for livestock body measurements can be downloaded freely from https://github.com/LiveStockShapeAnalysis by the livestock industry and research community and can be used for monitoring livestock growth status.
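    A minimal sketch of how two pre-calibrated depth cameras' point clouds can be merged into the common coordinate system, assuming the extrinsic rotation R2 and translation t2 of the second camera relative to the first are known from calibration; this is not the authors' released software.

```python
import numpy as np

def to_common_frame(points, R, t):
    """Transform an N x 3 point cloud from a sensor frame into the common
    coordinate frame using the extrinsic calibration (R, t)."""
    return points @ R.T + t

def merge_views(points_cam1, points_cam2, R2, t2):
    """Camera 1 defines the common frame; camera 2 points are re-expressed
    in it and the two clouds are concatenated into one scan."""
    return np.vstack([points_cam1, to_common_frame(points_cam2, R2, t2)])
```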

  16. A Robotic Platform for Corn Seedling Morphological Traits Characterization

    PubMed Central

    Lu, Hang; Tang, Lie; Whitham, Steven A.; Mei, Yu

    2017-01-01

    Crop breeding plays an important role in modern agriculture, improving plant performance and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. PMID:28895892
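    The camera-to-arm transfer can be sketched as a composition of homogeneous transforms: the arm-reported end-effector pose is combined with the hand-eye (camera-to-flange) calibration result and applied to each ToF point cloud. Matrix names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def points_to_arm_frame(points_cam, T_base_flange, T_flange_cam):
    """Express ToF points in the robot arm (base) coordinate frame.

    points_cam     : N x 3 points in the camera frame.
    T_base_flange  : 4x4 end-effector pose reported by the arm at this viewpoint.
    T_flange_cam   : 4x4 camera-to-flange transform from hand-eye calibration.
    """
    T_base_cam = T_base_flange @ T_flange_cam              # compose the two poses
    homog = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (homog @ T_base_cam.T)[:, :3]                   # back to Cartesian coordinates
```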

  17. A Robotic Platform for Corn Seedling Morphological Traits Characterization.

    PubMed

    Lu, Hang; Tang, Lie; Whitham, Steven A; Mei, Yu

    2017-09-12

    Crop breeding plays an important role in modern agriculture, improving plant performance and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping.

  18. A single camera roentgen stereophotogrammetry method for static displacement analysis.

    PubMed

    Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G

    2000-06-01

    A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlying tissue, and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Coordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these coordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.

  19. Accuracy and Precision of a Surgical Navigation System: Effect of Camera and Patient Tracker Position and Number of Active Markers

    PubMed Central

    Gundle, Kenneth R.; White, Jedediah K.; Conrad, Ernest U.; Ching, Randal P.

    2017-01-01

    Introduction: Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Materials and Methods: Accuracy and precision were assessed using a computer-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247cm), 2) the distance from the grid to the patient tracker device (range 20 to 40cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance from each measured point to the mean three-dimensional coordinate of the six points for each cluster. Results: Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). Conclusion: In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system. PMID:28694888

  20. Determination of Exterior Orientation Parameters Through Direct Geo-Referencing in a Real-Time Aerial Monitoring System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Lee, J.; Choi, K.; Lee, I.

    2012-07-01

    Rapid responses to emergency situations such as natural disasters or accidents often require geo-spatial information describing the on-going status of the affected area. Such geo-spatial information can be promptly acquired by a manned or unmanned aerial vehicle based multi-sensor system that can monitor the emergent situation in near real-time from the air using several kinds of sensors. Thus, we are in the process of developing such a real-time aerial monitoring system (RAMS) consisting of both aerial and ground segments. The aerial segment acquires the sensory data about the target areas with a low-altitude helicopter system equipped with sensors such as a digital camera and a GPS/IMU system and transmits them to the ground segment through an RF link in real-time. The ground segment, which is a deployable ground station installed on a truck, receives the sensory data and rapidly processes them to generate ortho-images, DEMs, etc. In this system, in order to generate geo-spatial information, the exterior orientation parameters (EOP) of the acquired images are obtained through direct geo-referencing, because it is difficult to acquire coordinates of ground points in a disaster area. The main process, from the data acquisition stage to the measurement of the EOP, is as follows. First, at the time of data acquisition, the image acquisition time synchronized to GPS time is recorded as part of the image file name. Second, the acquired data are transmitted to the ground segment in real-time. Third, the processing software of the ground segment calculates the positions/attitudes of the acquired images through linear interpolation using the GPS time of the received position/attitude data and of the images. Finally, the EOP of the images are obtained from the position/attitude data by deriving the relationship between the camera coordinate system and the GPS/IMU coordinate system. In this study, we evaluated the accuracy of the EOP determined by direct geo-referencing in our system. To do this, we used EOP precisely calculated with a digital photogrammetry workstation (DPW) as reference data. The results of the evaluation indicate that the accuracy of the EOP acquired by our system is reasonable in comparison with the performance of the GPS/IMU system, and that our system can acquire precise multi-sensory data for generating geo-spatial information in emergency situations. In the near future, we plan to complete the development of the rapid generation system of the ground segment. Our system is expected to be able to acquire ortho-images and DEMs of the damaged area in near real-time. Its performance, along with the accuracy of the generated geo-spatial information, will be evaluated and reported in future work.
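    A hedged sketch of the direct geo-referencing step: the platform position and attitude are linearly interpolated at the GPS-stamped image time, and the camera EOP follow from an assumed lever-arm offset and boresight rotation. Variable names and the simple component-wise angle interpolation are assumptions made for illustration.

```python
import numpy as np

def interpolate_pose(t_img, t_nav, positions, attitudes):
    """Linear interpolation of GPS/IMU position and attitude at the image time.

    t_nav     : increasing array of navigation time stamps (GPS time).
    positions : N x 3 platform positions.
    attitudes : N x 3 roll/pitch/yaw angles in radians (slow rotation assumed,
                so component-wise interpolation is acceptable here).
    """
    pos = np.array([np.interp(t_img, t_nav, positions[:, i]) for i in range(3)])
    att = np.array([np.interp(t_img, t_nav, attitudes[:, i]) for i in range(3)])
    return pos, att

def camera_eop(pos, R_body_to_map, lever_arm, R_boresight):
    """Exterior orientation of the camera: projection centre and rotation.

    lever_arm   : camera offset from the IMU, expressed in the body frame.
    R_boresight : camera-to-body rotation from a boresight calibration.
    """
    X0 = pos + R_body_to_map @ lever_arm          # perspective centre in the map frame
    R_cam_to_map = R_body_to_map @ R_boresight    # camera attitude in the map frame
    return X0, R_cam_to_map
```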

  1. The collision forces and lower-extremity inter-joint coordination during running.

    PubMed

    Wang, Li-I; Gu, Chin-Yi; Wang, I-Lin; Siao, Sheng-Wun; Chen, Szu-Ting

    2018-06-01

    The purpose of this study was to compare the lower extremity inter-joint coordination of different collision forces runners during running braking phase. A dynamical system approach was used to analyse the inter-joint coordination parameters. Data were collected with six infra-red cameras and two force plates. According to the impact peak of the vertical ground reaction force, twenty habitually rearfoot-strike runners were categorised into three groups: high collision forces runners (HF group, n = 8), medium collision forces runners (MF group, n = 5), and low collision forces runners (LF group, n = 7). There were no significant differences among the three groups in the ankle and knee joint angle upon landing and in the running velocity (p > 0.05). The HF group produced significantly smaller deviation phase (DP) of the hip flexion/extension-knee flexion/extension during the braking phase compared with the MF and LF groups (p < 0.05). The DP of the hip flexion/extension-knee flexion/extension during the braking phase correlated negatively with the collision force (p < 0.05). The disparities regarding the flexibility of lower extremity inter-joint coordination were found in high collision forces runners. The efforts of the inter-joint coordination and the risk of running injuries need to be clarified further.

  2. Characterization of a small CsI(Na)-WSF-SiPM gamma camera prototype using 99mTc

    NASA Astrophysics Data System (ADS)

    Castro, I. F.; Soares, A. J.; Moutinho, L. M.; Ferreira, M. A.; Ferreira, R.; Combo, A.; Muchacho, F.; Veloso, J. F. C. A.

    2013-03-01

    A small field of view gamma camera is being developed, aiming for applications in scintimammography, sentinel lymph node detection or small animal imaging and research. The proposed wavelength-shifting fibre (WSF) gamma camera consists of two perpendicular sets of WSFs covering both sides of a CsI(Na) crystal, such that the fibres positioned at the bottom of the crystal provide the x coordinate and the ones on top the y coordinate of the gamma photon interaction point. The 2D position is given by highly sensitive photodetectors reading out each WSF and the energy information is provided by PMTs that cover the full detector area. This concept has the advantage of using N+N instead of N × N photodetectors to cover an identical imaging area, and is being applied using for the first time SiPMs. Previous studies carried out with 57Co have proved the feasibility of this concept using SiPM readout. In this work, we present experimental results from true 2D image acquisitions with a 10+10 SiPMs prototype, i.e. 10 × 10 mm2, using a parallel-hole collimator and different samples filled with 99mTc solution. The performance of the small prototype in these conditions is evaluated through the characterization of different gamma camera parameters, such as energy and spatial resolution. Ongoing advances towards a larger prototype of 100+100 SiPMs (10 × 10 cm2) are also presented.

  3. How to model moon signals using 2-dimensional Gaussian function: Classroom activity for measuring nighttime cloud cover

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Lagrosas, N.

    2016-12-01

    Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent the probability density function of a normal distribution. It is described by its mean m and standard deviation s. A smaller standard deviation implies less spread from the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean square weighted deviations based on the sums of the pixel values of all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image is produced by a commercial camera (Canon Powershot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity when operated at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix. This last conversion is performed so that the Gaussian model and the pixel distribution of the raw signals share the same scale. Subtraction of the Gaussian model from the raw data produces a moonless image, as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
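    A minimal sketch of the classroom procedure in Python, assuming the grayscale image has already been scaled to double precision in [0, 1]; in practice the sky background would bias the width estimates and may need to be removed before the standard deviations are computed. The 0.07 cloud threshold is the one quoted above.

```python
import numpy as np

def gaussian2d(shape, x0, y0, sx, sy, amplitude=1.0):
    """Sample a 2-D Gaussian surface on a pixel grid of the given shape."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return amplitude * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

def remove_moon(img):
    """Model the moon as a 2-D Gaussian centred on the brightest pixel,
    subtract it, and report the cloud cover fraction (pixels > 0.07).

    img : grayscale image scaled to [0, 1] (double precision).
    """
    y0, x0 = np.unravel_index(np.argmax(img), img.shape)   # mean = brightest pixel
    col_sum, row_sum = img.sum(axis=0), img.sum(axis=1)    # column/row totals
    xs, ys = np.arange(img.shape[1]), np.arange(img.shape[0])
    sx = np.sqrt(np.sum(col_sum * (xs - x0) ** 2) / col_sum.sum())  # weighted deviations
    sy = np.sqrt(np.sum(row_sum * (ys - y0) ** 2) / row_sum.sum())
    moonless = img - gaussian2d(img.shape, x0, y0, sx, sy, amplitude=img[y0, x0])
    cloud_cover = np.mean(moonless > 0.07)                 # fraction of "cloudy" pixels
    return moonless, cloud_cover
```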

  4. Recent results in visual servoing

    NASA Astrophysics Data System (ADS)

    Chaumette, François

    2008-06-01

    Visual servoing techniques consist in using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile target tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing control of the desired degrees of freedom. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, etc.) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties with respect to stability, robustness to noise or to calibration errors, robot 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field within the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.

  5. International Space Station Instruments Collect Imagery of Natural Disasters

    NASA Technical Reports Server (NTRS)

    Evans, C. A.; Stefanov, W. L.

    2013-01-01

    A new focus for utilization of the International Space Station (ISS) is conducting basic and applied research that directly benefits Earth's citizenry. In the Earth Sciences, one such activity is collecting remotely sensed imagery of disaster areas and making those data immediately available through the USGS Hazards Data Distribution System, especially in response to activations of the International Charter for Space and Major Disasters (known informally as the "International Disaster Charter", or IDC). The ISS, together with other NASA orbital sensor assets, responds to IDC activations following notification by the USGS. Most of the activations are due to natural hazard events, including large floods, impacts of tropical systems, major fires, and volcanic eruptions and earthquakes. Through the ISS Program Science Office, we coordinate with ISS instrument teams for image acquisition using several imaging systems. As of 1 August 2013, we have successfully contributed imagery data in support of 14 Disaster Charter Activations, including regions in both Haiti and the east coast of the US impacted by Hurricane Sandy; flooding events in Russia, Mozambique, India, Germany and western Africa; and forest fires in Algeria and Ecuador. ISS-based sensors contributing data include the Hyperspectral Imager for the Coastal Ocean (HICO), the ISERV (ISS SERVIR Environmental Research and Visualization System) Pathfinder camera mounted in the US Window Observational Research Facility (WORF), the ISS Agricultural Camera (ISSAC), formerly operating from the WORF, and high resolution handheld camera photography collected by crew members (Crew Earth Observations). When orbital parameters and operations support data collection, ISS-based imagery adds to the resources available to disaster response teams and contributes to the public-domain record of these events for later analyses.

  6. VizieR Online Data Catalog: PHAT X. UV-IR photometry of M31 stars (Williams+, 2014)

    NASA Astrophysics Data System (ADS)

    Williams, B. F.; Lang, D.; Dalcanton, J. J.; Dolphin, A. E.; Weisz, D. R.; Bell, E. F.; Bianchi, L.; Byler, N.; Gilbert, K. M.; Girardi, L.; Gordon, K.; Gregersen, D.; Johnson, L. C.; Kalirai, J.; Lauer, T. R.; Monachesi, A.; Rosenfield, P.; Seth, A.; Skillman, E.

    2015-01-01

    The data for the Panchromatic Hubble Andromeda Treasury (PHAT) survey were obtained from 2010 July 12 to 2013 October 12 using the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC), the Wide Field Camera 3 (WFC3) IR (infrared) channel, and the WFC3 UVIS (ultraviolet-optical) channel. The observing strategy is described in detail in Dalcanton et al. (2012ApJS..200...18D). A list of the target names, observing dates, coordinates, orientations, instruments, exposure times, and filters is given in Table 1. Using the ACS and WFC3 cameras aboard HST, we have photometered 414 contiguous WFC3/IR footprints covering 0.5deg2 of the M31 star-forming disk. (4 data files).

  7. Plasma crystal dynamics measured with a three-dimensional plenoptic camera

    NASA Astrophysics Data System (ADS)

    Jambor, M.; Nosenko, V.; Zhdanov, S. K.; Thomas, H. M.

    2016-03-01

    Three-dimensional (3D) imaging of a single-layer plasma crystal was performed using a commercial plenoptic camera. To enhance the out-of-plane oscillations of particles in the crystal, the mode-coupling instability (MCI) was triggered in it by lowering the discharge power below a threshold. 3D coordinates of all particles in the crystal were extracted from the recorded videos. All three fundamental wave modes of the plasma crystal were calculated from these data. In the out-of-plane spectrum, only the MCI-induced hot spots (corresponding to the unstable hybrid mode) were resolved. The results are in agreement with theory and show that plenoptic cameras can be used to measure the 3D dynamics of plasma crystals.

  8. Plasma crystal dynamics measured with a three-dimensional plenoptic camera.

    PubMed

    Jambor, M; Nosenko, V; Zhdanov, S K; Thomas, H M

    2016-03-01

    Three-dimensional (3D) imaging of a single-layer plasma crystal was performed using a commercial plenoptic camera. To enhance the out-of-plane oscillations of particles in the crystal, the mode-coupling instability (MCI) was triggered in it by lowering the discharge power below a threshold. 3D coordinates of all particles in the crystal were extracted from the recorded videos. All three fundamental wave modes of the plasma crystal were calculated from these data. In the out-of-plane spectrum, only the MCI-induced hot spots (corresponding to the unstable hybrid mode) were resolved. The results are in agreement with theory and show that plenoptic cameras can be used to measure the 3D dynamics of plasma crystals.

  9. Investigation of solar active regions at high resolution by balloon flights of the solar optical universal polarimeter, extended definition phase

    NASA Technical Reports Server (NTRS)

    Tarbell, Theodore D.

    1993-01-01

    Technical studies of the feasibility of balloon flights of the former Spacelab instrument, the Solar Optical Universal Polarimeter, with a modern charge-coupled device (CCD) camera, to study the structure and evolution of solar active regions at high resolution, are reviewed. In particular, different CCD cameras were used at ground-based solar observatories with the SOUP filter, to evaluate their performance and collect high resolution images. High resolution movies of the photosphere and chromosphere were successfully obtained using four different CCD cameras. Some of this data was collected in coordinated observations with the Yohkoh satellite during May-July, 1992, and they are being analyzed scientifically along with simultaneous X-ray observations.

  10. Photogrammetry and ballistic analysis of a high-flying projectile in the STS-124 space shuttle launch

    NASA Astrophysics Data System (ADS)

    Metzger, Philip T.; Lane, John E.; Carilli, Robert A.; Long, Jason M.; Shawn, Kathy L.

    2010-07-01

    A method combining photogrammetry with ballistic analysis is demonstrated to identify flying debris in a rocket launch environment. Debris traveling near the STS-124 Space Shuttle was captured on cameras viewing the launch pad within the first few seconds after launch. One particular piece of debris caught the attention of investigators studying the release of flame trench fire bricks because its high trajectory could indicate a flight risk to the Space Shuttle. Digitized images from two pad perimeter high-speed 16-mm film cameras were processed using photogrammetry software based on a multi-parameter optimization technique. Reference points in the image were found from 3D CAD models of the launch pad and from surveyed points on the pad. The three-dimensional reference points were matched to the equivalent two-dimensional camera projections by optimizing the camera model parameters using a gradient search optimization technique. Using this method of solving the triangulation problem, the xyz position of the object's path relative to the reference point coordinate system was found for every set of synchronized images. This trajectory was then compared to a predicted trajectory while performing regression analysis on the ballistic coefficient and other parameters. This identified, with a high degree of confidence, the object's material density and thus its probable origin within the launch pad environment. Future extensions of this methodology may make it possible to diagnose the underlying causes of debris-releasing events in near-real time, thus improving flight safety.

  11. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets in the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, corresponding pixel determination between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate, by simulation, the effects of camera placement on the flare trajectory estimation performance. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image plane coordinates of the flare in both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we use two sources of error. One models the uncertainty in the determination of the camera view vectors, i.e., the orientations of the cameras are measured with noise. The second noise source models the imperfections of the corresponding pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated by triangulation using the corresponding pixel indices, the view vectors, and the FOV of the cameras. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.

  12. SU-F-T-235: Optical Scan Based Collision Avoidance Using Multiple Stereotactic Cameras During Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardan, R; Popple, R; Dobelbower, M

    Purpose: To demonstrate the ability to quickly generate an accurate collision avoidance map using multiple stereotactic cameras during simulation. Methods: Three Kinect stereotactic cameras were placed in the CT simulation room and optically calibrated to the DICOM isocenter. Immediately before scanning, the patient was optically imaged to generate a 3D polygon mesh, which was used to calculate the collision avoidance area using our previously developed framework. The mesh was visually compared to the CT scan body contour to ensure accurate coordinate alignment. To test the accuracy of the collision calculation, the patient and machine were physically maneuvered in the treatment room to the calculated collision boundaries. Results: The optical scan and collision calculation took 38.0 seconds and 2.5 seconds to complete, respectively. The collision prediction accuracy was determined using a receiver operating curve (ROC) analysis, where the true positive, true negative, false positive and false negative values were 837, 821, 43, and 79 points, respectively. The ROC accuracy was 93.1% over the sampled collision space. Conclusion: We have demonstrated a framework which is fast and accurate for predicting collision avoidance for treatment and which can be applied during the normal simulation process. Because of its speed, the system could be used to add a layer of safety with a negligible impact on the normal patient simulation experience. This information could be used during treatment planning to explore the feasible geometries when optimizing plans. Research supported by Varian Medical Systems.

  13. Constraint-based semi-autonomy for unmanned ground vehicles using local sensing

    NASA Astrophysics Data System (ADS)

    Anderson, Sterling J.; Karumanchi, Sisir B.; Johnson, Bryan; Perlin, Victor; Rohde, Mitchell; Iagnemma, Karl

    2012-06-01

    Teleoperated vehicles are playing an increasingly important role in a variety of military functions. While advantageous in many respects over their manned counterparts, these vehicles also pose unique challenges when it comes to safely avoiding obstacles. Not only must operators cope with difficulties inherent to the manned driving task, but they must also perform many of the same functions with a restricted field of view, limited depth perception, potentially disorienting camera viewpoints, and significant time delays. In this work, a constraint-based method for enhancing operator performance by seamlessly coordinating human and controller commands is presented. This method uses onboard LIDAR sensing to identify environmental hazards, designs a collision-free path homotopy traversing that environment, and coordinates the control commands of a driver and an onboard controller to ensure that the vehicle trajectory remains within a safe homotopy. This system's performance is demonstrated via off-road teleoperation of a Kawasaki Mule in an open field among obstacles. In these tests, the system safely avoids collisions and maintains vehicle stability even in the presence of "routine" operator error, loss of operator attention, and complete loss of communications.

  14. Earth Observation taken during the 41G mission

    NASA Image and Video Library

    2009-06-25

    41G-120-056 (October 1984) --- Parts of Israel, Lebanon, Palestine, Syria and Jordan and part of the Mediterranean Sea are seen in this nearly-vertical, large format camera's view from the Earth-orbiting Space Shuttle Challenger. The Sea of Galilee is at center frame and the Dead Sea at bottom center. The frame's center coordinates are 32.5 degrees north latitude and 35.5 degrees east longitude. A Linhof camera, using 4" x 5" film, was used to expose the frame through one of the windows on Challenger's aft flight deck.

  15. Wing Twist Measurements at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W.; Wahls, Richard A.; Goad, William K.

    1996-01-01

    A technique for measuring wing twist currently in use at the National Transonic Facility is described. The technique is based upon a single camera photogrammetric determination of two dimensional coordinates with a fixed (and known) third dimensional coordinate. The wing twist is found from a conformal transformation between wind-on and wind-off 2-D coordinates in the plane of rotation. The advantages and limitations of the technique as well as the rationale for selection of this particular technique are discussed. Examples are presented to illustrate run-to-run and test-to-test repeatability of the technique in air mode. Examples of wing twist in cryogenic nitrogen mode are also presented.
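    The conformal transformation between wind-off and wind-on target coordinates can be illustrated with a least-squares 2-D similarity fit whose rotation angle is read off as the twist; this is a generic sketch, not the NTF data-reduction code.

```python
import numpy as np

def conformal_fit(p_off, p_on):
    """Least-squares conformal (similarity) transform p_on ~ s*R(theta)*p_off + t.

    p_off, p_on : N x 2 target coordinates at wind-off and wind-on conditions.
    Returns the rotation angle theta in degrees (the wing twist), scale s and shift t.
    """
    c_off = p_off - p_off.mean(axis=0)
    c_on = p_on - p_on.mean(axis=0)
    # complex-number formulation: z_on = (s * e^{i*theta}) * z_off
    z_off = c_off[:, 0] + 1j * c_off[:, 1]
    z_on = c_on[:, 0] + 1j * c_on[:, 1]
    coeff = np.vdot(z_off, z_on) / np.vdot(z_off, z_off)   # least-squares complex scale
    theta = np.angle(coeff)
    scale = np.abs(coeff)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = p_on.mean(axis=0) - scale * (R @ p_off.mean(axis=0))
    return np.degrees(theta), scale, t
```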

  16. A novel verification method using a plastic scintillator imagining system for assessment of gantry sag in radiotherapy.

    PubMed

    Tsuneda, Masato; Nishio, Teiji; Saito, Akito; Tanaka, Sodai; Suzuki, Tatsuhiko; Kawahara, Daisuke; Matsushita, Keiichiro; Nishio, Aya; Ozawa, Shuichi; Karasawa, Kumiko; Nagata, Yasushi

    2018-06-01

    High accuracy of the beam-irradiated position is required for high-precision radiation therapy such as stereotactic body radiation therapy (SBRT), volumetric modulated arc therapy (VMAT), and intensity modulated radiation therapy (IMRT). Users generally perform the verification of the mechanical and radiation isocenters using the star shot test and the Winston-Lutz test, which allow evaluation of the displacement at the isocenter. However, these methods are unable to evaluate directly and quantitatively the sagging angle that is caused by the weight of the gantry itself along the gantry rotation axis. In addition, verification of the central axis of the irradiated beam that does not depend on the isocenter is needed for the mechanical quality assurance of a nonisocentric irradiation technique. In this study, we have developed a prototype system for the verification of three-dimensional (3D) beam alignment and we have verified the system concept for 3D isocentricity. Our system allows detection of the central axis in 3D coordinates and evaluation of the irradiated oblique angle to the gantry rotation axis, i.e., the sagging angle. In order to measure the central axis of the irradiated beam in 3D coordinates, we constructed the prototype verification system consisting of a column-shaped plastic scintillator (CoPS), a truncated cone-shaped mirror (TCsM), and a cooled charge-coupled device (CCD) camera. This verification system was irradiated with 6-MV photon beams and the scintillation light was measured using the CCD camera. The central axis on the axial plane (two-dimensional (2D) central axis) was acquired from the integration of the scintillation light along the major axis of the CoPS, and the central axis in 3D coordinates (3D central axis) was acquired from two curve-shaped profiles which were reflected by the TCsM. We verified the calculation accuracy of the rotation angle about the gantry rotation axis, θz. Additionally, we calculated the 3D central axis and the sagging angle at each gantry angle. We acquired the measurement images composed of the 2D central axis and the two curve-shaped profiles. The relationship between the irradiated and measured angles with respect to the gantry rotation axis had good linearity. The mean and standard deviation of the difference between the irradiated and measured angles were 0.012 and 0.078 degrees, respectively. The sizes of the 2D and 3D radiation isocenters were 0.470 and 0.652 mm on the axial plane and in 3D coordinates, respectively. The sagging angles were -0.31, 0.39, and 0.38 degrees at the gantry angles of 0, 180, and 180E degrees, respectively. We developed a novel verification system, designated as the "kompeito shot test system," to verify the 3D beam alignment. This system concept works for both verification of the 3D isocentricity and the direct evaluation of the sagging angle. Next, we want to improve aspects of this system, such as the shape and the type of scintillator, to increase the system accuracy and nonisocentric beam alignment performance. © 2018 American Association of Physicists in Medicine.

  17. An overview of the stereo correlation and triangulation formulations used in DICe.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Daniel Z.

    This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three dimensional motion in space given the image coordinates and camera calibration parameters.

  18. Tsunami: India

    Atmospheric Science Data Center

    2013-04-16

    ... Animation At 00:58:53 UTC (Coordinated Universal Time) on December 26, 2004, a magnitude 9.0 earthquake occurred off the west ... UTC, when the tide gauge indicated the arrival of another series of waves. Because MISR's nine cameras imaged the coast over a time span ...

  19. Providing Internet Access to High-Resolution Mars Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  20. Multi-target detection and positioning in crowds using multiple camera surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng

    2018-04-01

    In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinates system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between each pixel in the region of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
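    The correspondence step can be illustrated with the equivalent assignment-problem formulation (a sketch, not the authors' similar-permutation-matrix solver): the three constraint terms are combined into a single cost matrix and solved with the Hungarian algorithm from SciPy. The weights are placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correspond(cost_distance, cost_gray, cost_height, w=(1.0, 1.0, 1.0)):
    """Solve the pixel correspondence as a linear assignment problem.

    Each cost matrix is M x N (pixels in view 1 vs. candidates in view 2):
    distance between lines of sight, grayscale difference, and height
    difference in the world coordinate system.
    """
    cost = w[0] * cost_distance + w[1] * cost_gray + w[2] * cost_height
    rows, cols = linear_sum_assignment(cost)    # optimal one-to-one matching
    return list(zip(rows.tolist(), cols.tolist()))
```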

  1. BrachyView: Combining LDR seed positions with transrectal ultrasound imaging in a prostate gel phantom.

    PubMed

    Alnaghy, S; Cutajar, D L; Bucci, J A; Enari, K; Safavi-Naeini, M; Favoino, M; Tartaglia, M; Carriero, F; Jakubek, J; Pospisil, S; Lerch, M; Rosenfeld, A B; Petasecca, M

    2017-02-01

    BrachyView is a novel in-body imaging system which aims to provide LDR brachytherapy seed position reconstruction within the prostate in real-time. The first prototype is presented in this study: the probe consists of a gamma camera featuring three single-cone pinhole collimators embedded in a tungsten tube, above three high-resolution pixelated detectors (Timepix). The prostate was imaged with a TRUS system using a sagittal crystal with a 2.5 mm slice thickness. Eleven needles containing a total of thirty 0.508 U 125I seeds were implanted under ultrasound guidance. A CT scan was used to localise the seed positions, as well as provide a reference when performing the image co-registration between the BrachyView coordinate system and the TRUS coordinate system. An in-house visualisation software interface was developed to provide a quantitative 3D reconstructed prostate based on the TRUS images and co-registered with the LDR seeds in situ. A rigid body image registration was performed between the BrachyView and TRUS systems, with the BrachyView and CT-derived source locations compared. The reconstructed seed positions determined by the BrachyView probe showed a maximum discrepancy of 1.78 mm, with 75% of the seeds reconstructed within 1 mm of their nominal location. An accurate co-registration between the BrachyView and TRUS coordinate systems was established. The BrachyView system has shown its ability to reconstruct all implanted LDR seeds within a tissue equivalent prostate gel phantom, providing both anatomical and seed position information in a single interface. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
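    The rigid-body co-registration between the two coordinate systems can be sketched with a generic least-squares (Kabsch) fit over corresponding fiducial or seed coordinates; this illustrates the registration step in general terms and is not the authors' software.

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points.

    src, dst : N x 3 corresponding coordinates in the two frames
               (e.g. BrachyView-derived and TRUS/CT-derived positions).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)                        # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])      # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```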

  2. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from the alignment it is expected to be at, TRAC may give incorrect feedback for the control of the robot. A simple example is if the robot operator thinks the camera is right side up but the camera is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.

  3. System for measuring the coordinates of tire surfaces in transient conditions when rolling over obstacles: description of the system and performance analysis.

    PubMed

    Castellini, Paolo; Di Giuseppe, Andrea

    2008-06-01

    This paper describes the development of a system for measuring surface coordinates (commonly known as "shape measurements") which is able to give the temporal evolution of the position of the tire sidewall in transient conditions (such as during braking, when there are potholes or when the road surface is uneven) which may or may not be reproducible. The system is based on the well-known technique of projecting and observing structured light using a digital camera with an optical axis which is slanted with respect to the axis of the projector. The transient nature of the phenomenon has led to the development of specific innovative solutions as regards image processing algorithms. This paper briefly describes the components which make up the measuring system and presents the results of the measurements carried out on the drum bench. It then analyses the performance of the measuring system and the sources of uncertainty which led to the development of the system for a specific dynamic application: impact with an obstacle (cleat test). The measuring system guaranteed a measurement uncertainty of 0.28 mm along the Z axis (the axial direction of the tire) with a measurement range of 250 (X) x 80 (Y) x 25 (Z) mm³, with the tire rolling at a speed of up to 30 km/h.

  4. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring. High-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. Such an effort was also motivated by the appearance of the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform a continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations involves several issues with respect to nocturnal systems that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) in May 2007. Of course, fireball association is unequivocal only in those cases when two or more stations record the fireball and the geocentric radiant can consequently be determined with accuracy. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of Centro Internacional de Estudios y Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), in the environment of Doñana Natural Park (Huelva province). In this way, both stations, which are separated by a distance of 75 km, will work as a double video station system in order to provide trajectory and orbit information of major bolides and, thus, increase the chance of meteorite recovery in the Iberian Peninsula. The new diurnal SPMN video stations are endowed with different models of Mintron cameras (Mintron Enterprise Co., LTD). These are high-sensitivity devices that employ a colour 1/2" Sony interline-transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. However, the use of fast lenses is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal length ranges from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station. [Figure 1: a daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 ±0.1 s UT.] The way our diurnal video cameras work is similar to the operation of our nocturnal systems [1]. Thus, diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images, taken at 25 fps with a resolution of 720×576 pixels, are continuously sent to PC computers through a video capture device.
    The computers run software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. Besides, before the signal from the cameras reaches the computers, a video time inserter that employs a GPS device (KIWI-OSD, by PFD Systems) inserts time information on every video frame. This allows us to measure time precisely (to about 0.01 s) along the whole fireball path. However, one of the issues with respect to nocturnal observing stations is the high number of false detections as a consequence of several factors: higher activity of birds and insects, reflection of sunlight on planes and helicopters, etc. Sometimes these false events follow a pattern very similar to fireball trails, which makes the use of a second station absolutely necessary in order to discriminate between them. Another key issue is the passage of the Sun through the field of view of some of the cameras. In fact, special care is necessary here to avoid any damage to the CCD sensor. Besides, depending on atmospheric conditions (dust or moisture, for instance), the Sun may saturate most of the video frame. To solve this, our automated system determines which camera is pointing towards the Sun at a given moment and disconnects it. As the cameras are endowed with auto-iris lenses, their disconnection means that the optics is fully closed and, so, the CCD sensor is protected. This, of course, means that while this happens the atmospheric volume covered by the corresponding camera is not monitored. It must also be taken into account that, in general, operating temperatures are higher for diurnal cameras. This results in higher thermal noise and, so, poses some difficulties for the detection software. To minimize this effect, it is necessary to employ CCD video cameras with an adequate signal-to-noise ratio. Cooling of the CCD sensor, for instance with a Peltier system, can also be considered. The astrometric reduction procedure is also somewhat different for daytime events: it requires that reference objects be located within the field of view of every camera in order to calibrate the corresponding images. This is done by allowing every camera to capture distant buildings that, by means of said calibration, allow us to obtain the equatorial coordinates of the fireball along its path by measuring its corresponding X and Y positions on every video frame. Such calibration can be performed from star positions measured on nocturnal images taken with the same cameras. Once made, if the cameras are not moved it is possible to estimate the equatorial coordinates of any future fireball event. We do not use any software for automatic astrometry of the images: this crucial step is made via direct measurement of the pixel positions, as in all our previous work. Then, from these astrometric measurements, our software estimates the atmospheric trajectory and radiant for each fireball ([10] to [13]). During 2007 and 2008 the SPMN has also set up other diurnal stations based on 1/3" progressive-scan CMOS sensors attached to modified wide-field lenses covering a 120×80 degree FOV. They are placed in Andalusia: El Arenosillo (Huelva), La Mayora (Málaga) and Murtas (Granada).
    These stations also have night sensitivity thanks to a removable infrared cut filter (ICR), which enables the cameras to perform well in both high- and low-light conditions in colour, as well as to provide IR-sensitive black-and-white video at night. Conclusions. The first detections of daylight fireballs by CCD video cameras are being achieved within the SPMN framework. Future expansion and the setup of new observing stations are currently being planned. The future establishment of additional diurnal SPMN stations will allow an increase in the number of daytime fireballs detected. This will also increase our chance of meteorite recovery.

  5. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
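
    The least-squares fitting of a panorama station position described above can be sketched, under assumptions, as a small nonlinear problem: find the ground position (and an azimuth zero-point) that best reproduces the directions measured in the panorama to features identified in the orthoimage. Feature coordinates and measured azimuths below are hypothetical.

```python
# Fit a panorama station position: find the ground position (and azimuth zero-point) whose
# directions to orthoimage features best match the azimuths measured in the panorama.
import numpy as np
from scipy.optimize import least_squares

features_xy = np.array([[120.0, 40.0], [80.0, -35.0], [-60.0, 90.0], [-15.0, -75.0]])  # metres, hypothetical
measured_az = np.radians([71.6, 113.6, -33.7, -168.7])   # hypothetical panorama azimuths (rad)

def residuals(params):
    x, y, offset = params                       # station position and azimuth zero-point
    dx = features_xy[:, 0] - x
    dy = features_xy[:, 1] - y
    predicted = np.arctan2(dx, dy) + offset     # azimuth from the +y ("north") axis, hypothetical convention
    d = predicted - measured_az
    return np.arctan2(np.sin(d), np.cos(d))     # wrap angular differences to [-pi, pi]

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0])
print("station x, y (m) and azimuth offset (rad):", fit.x)
```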

  6. The Magsat three axis arc second precision attitude transfer system

    NASA Technical Reports Server (NTRS)

    Schenkel, F. W.; Heins, R. J.

    1981-01-01

    The Magsat Attitude Transfer System (ATS), which provides attitude determination in pitch, yaw, and roll, is described. A remote vector magnetometer extends from Magsat on a 20 ft boom, requiring vector orientation by reference to coordinate axes determined by a set of star-mapping cameras. The ATS was designed to perform in a solar-illuminated environment by using an optically narrow bandwidth with synchronous demodulation at 9300 Å. The pitch/yaw optical design, the electro-optics, and signal and switching diagrams are provided. Simple mirrors with no moving parts are placed on the magnetometer to reflect a collimated beam from the ATS for attitude indication, which is accurate to one part in 96. Alignment was completed within 24 hr after launch.

  7. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    PubMed

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies with the wavelength of light, so incident light of different wavelengths is refracted along different paths. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
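
    As a point of reference, the four-step phase-shifting computation mentioned above is commonly written as phi = arctan2(I4 - I2, I1 - I3) for frames shifted by quarter periods; the sketch below shows only this wrapped-phase step, not the authors' unwrapping or optimum fringe number selection.

```python
# Standard four-step phase-shifting computation for frames shifted by 0, pi/2, pi, 3*pi/2;
# the phase unwrapping and optimum fringe-number selection used by the authors are not shown.
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase (radians) from four pi/2-shifted fringe images (float arrays)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: one fringe period sampled at four pixels.
phi_true = np.linspace(0, 2 * np.pi, 4, endpoint=False)
frames = [128 + 100 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
print(wrapped_phase(*frames))   # recovers phi_true up to wrapping into (-pi, pi]
```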

  8. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.

  9. 3D reconstruction of hollow parts analyzing images acquired by a fiberscope

    NASA Astrophysics Data System (ADS)

    Icasio-Hernández, Octavio; Gonzalez-Barbosa, José-Joel; Hurtado-Ramos, Juan B.; Viliesid-Alonso, Miguel

    2014-07-01

    A modified fiberscope used to reconstruct difficult-to-reach inner structures is presented. By substituting the fiberscope’s original illumination system, we can project a profile-revealing light line inside the object of study. The light line is obtained using a sandwiched power light-emitting diode (LED) attached to an extension arm on the tip of the fiberscope. Profile images from the interior of the object are then captured by a camera attached to the fiberscope’s eyepiece. Using a series of those images at different positions, the system is capable of generating a 3D reconstruction of the object with submillimeter accuracy. Also proposed is the use of a combination of known filters to remove the honeycomb structures produced by the fiberscope and the use of ring gages to obtain the extrinsic parameters of the camera attached to the fiberscope and the metrological traceability of the system. Several standard ring diameter measurements were compared against their certified values to improve the accuracy of the system. To exemplify an application, a 3D reconstruction of the interior of a refrigerator duct was conducted. This reconstruction includes accuracy assessment by comparing the measurements of the system to a coordinate measuring machine. The system, as described, is capable of 3D reconstruction of the interior of objects with uniform and non-uniform profiles from 10 to 60 mm in transversal dimensions and a depth of 1000 mm if the material of the walls of the object is translucent and allows the detection of the power LED light from the exterior through the wall. If this is not possible, we propose the use of a magnetic scale which reduces the working depth to 170 mm. The assessed accuracy is around ±0.15 mm in 2D cross-section reconstructions and ±1.3 mm in 1D position using a magnetic scale, and ±0.5 mm using a CCD camera.

  10. A data reduction technique and associated computer program for obtaining vehicle attitudes with a single onboard camera

    NASA Technical Reports Server (NTRS)

    Bendura, R. J.; Renfroe, P. G.

    1974-01-01

    A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in FORTRAN 4 language for the Control Data 6000 series digital computer.

  11. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
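
    A minimal sketch of the described calibration step, assuming OpenCV is used for the 5-point essential matrix estimation and pose recovery; the intrinsic matrix and the synthetic correspondences are placeholders standing in for real intrinsic calibration and feature matching.

```python
# Synthetic two-camera sketch of the calibration step: generate correspondences from a known
# relative pose, then recover it with OpenCV's 5-point essential-matrix solver and RANSAC.
import numpy as np
import cv2

K = np.array([[1400.0, 0, 960], [0, 1400.0, 540], [0, 0, 1.0]])   # assumed HD intrinsics

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-5, 5, 60), rng.uniform(-3, 3, 60), rng.uniform(8, 20, 60)])
R_true = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))[0]        # slight yaw between the cameras
t_true = np.array([[1.0], [0.0], [0.0]])                          # sideways baseline

def project(Xw, R, t):
    Xc = (R @ Xw.T + t).T                                         # world -> camera coordinates
    xn = Xc[:, :2] / Xc[:, 2:3]                                   # normalised image coordinates
    return (xn @ K[:2, :2].T + K[:2, 2]).astype(np.float32)       # pixel coordinates

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_true, t_true)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("recovered rotation:\n", R)
print("recovered translation direction (up to scale):", t.ravel())
```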

  12. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    IR cameras are currently widely used in electro-optical tracking, electro-optical measurement, fire control and electro-optical countermeasure applications, but the output timing of most IR cameras applied in practice is complex, and the timing documentation supplied by the manufacturer is often not detailed. Because continuous image transmission and image processing systems require the detailed timing of the IR camera, a sequence (timing) measurement system for IR cameras was designed and a detailed measurement procedure for the applied IR camera was carried out. FPGA programming combined with SignalTap online observation is applied in the sequence measurement system, the precise timing of the IR camera's output signal is obtained, and detailed timing documentation is supplied to the continuous image transmission system, image processing system, etc. The sequence measurement system consists of a CameraLink input interface, an LVDS input interface, the FPGA, a CameraLink output interface and other parts, of which the FPGA is the key component. The system accepts video signals in both CameraLink and LVDS formats, and because image processing and image memory cards commonly use CameraLink as their input interface, the output of the sequence measurement system is also designed as a CameraLink interface. The system therefore performs the timing measurement of the IR camera and at the same time provides interface conversion for some cameras. Inside the FPGA, the sequence measurement program, the pixel clock adjustment, the SignalTap file configuration and the SignalTap online observation are integrated to achieve a precise measurement of the IR camera. The sequence measurement program, written in Verilog and combined with the SignalTap online observation tool, counts the number of lines in one frame and the number of pixels in one line, and also measures the line offset and row offset of the image. Faced with the complex timing of the IR camera's output signal, the sequence measurement system accurately measures the timing of the camera applied in the project, supplies a detailed timing document to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters of fval, lval, pixclk, line offset and row offset. Experiments show that the sequence measurement system obtains precise timing measurements and works stably, laying a foundation for the downstream systems.

  13. Telescope pointing for GOPEX

    NASA Technical Reports Server (NTRS)

    Owen, W. M., Jr.

    1993-01-01

    In order for photons emitted by the GOPEX lasers to be detected by Galileo's camera, the telescopes at Table Mountain Observatory and Starfire Optical Range had to be pointed in the right direction within a tolerance less than the beam divergence. At both sites nearby stars were used as pointing references. The technical challenge was to ensure that the transmission direction and the star positions were specified in exactly the same coordinate system; given this assurance, neither the uncertainty in the star catalog positions nor the difficulty in offset pointing was expected to exceed the pointing error budget. The correctness of the pointing scheme was verified by the success of GOPEX.

  14. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium underwater can in turn counteract the lens distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 Black). The tests included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7 MP and 12 MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, the largest rms value of only 0.450 mm and the largest maximum residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.

  15. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of the image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct the image distortions. Second, shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed. A polynomial fitting method is applied to correct the disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., the initial vision model and the residual compensation model. We derive the initial vision model from the analysis of the direct mapping relationship between object and image points. The residual compensation model is derived based on the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two kinds of models have a similar reconstruction precision for X coordinates. However, the traditional pinhole camera model has a lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision.

  16. On-Demand Calibration and Evaluation for Electromagnetically Tracked Laparoscope in Augmented Reality Visualization

    PubMed Central

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Purpose: Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods: We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results: We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggested that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s – 22.7 s). Conclusions: We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand. PMID:27250853

  17. On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization.

    PubMed

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2016-06-01

    Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor to an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to be performed in the OR on demand.
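
    fCalib itself is not reproduced here, but the single-image idea it builds on can be sketched generically: with intrinsics known, one view of a few known target points yields the lens pose via a PnP solver, which can then be chained with the tracked sensor pose. All coordinates below are hypothetical.

```python
# Generic single-image extrinsic estimation (not the fCalib implementation): with intrinsics
# known, one view of a few known target points gives the lens pose via a PnP solver.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])          # assumed intrinsics
dist = np.zeros(5)                                                      # assumed negligible distortion

# Known target-frame coordinates (mm) of four pattern corners and their detected pixels (hypothetical).
object_pts = np.array([[0, 0, 0], [30, 0, 0], [30, 30, 0], [0, 30, 0]], dtype=np.float64)
image_pts = np.array([[310, 228], [415, 231], [412, 338], [305, 334]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
T_cam_target = np.eye(4)
T_cam_target[:3, :3] = R
T_cam_target[:3, 3] = tvec.ravel()
print(T_cam_target)   # target pose in the lens frame; chaining with the tracked sensor pose gives lens-to-sensor
```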

  18. Combination of optically measured coordinates and displacements for quantitative investigation of complex objects

    NASA Astrophysics Data System (ADS)

    Andrae, Peter; Beeck, Manfred-Andreas; Jueptner, Werner P. O.; Nadeborn, Werner; Osten, Wolfgang

    1996-09-01

    Holographic interferometry makes it possible to measure high precision displacement data in the range of the wavelength of the used laser light. However, the determination of 3D- displacement vectors of objects with complex surfaces requires the measurement of 3D-object coordinates not only to consider local sensitivities but to distinguish between in-plane deformation, i.e. strains, and out-of-plane components, i.e. shears, too. To this purpose both the surface displacement and coordinates have to be combined and it is advantageous to make the data available for CAE- systems. The object surface has to be approximated analytically from the measured point cloud to generate a surface mesh. The displacement vectors can be assigned to the nodes of this surface mesh for visualization of the deformation of the object under test. They also can be compared to the results of FEM-calculations or can be used as boundary conditions for further numerical investigations. Here the 3D-object coordinates are measured in a separate topometric set-up using a modified fringe projection technique to acquire absolute phase values and a sophisticated geometrical model to map these phase data onto coordinates precisely. The determination of 3D-displacement vectors requires the measurement of several interference phase distributions for at least three independent sensitivity directions depending on the observation and illumination directions as well as the 3D-position of each measuring point. These geometric quantities have to be transformed into a reference coordinate system of the interferometric set-up in order to calculate the geometric matrix. The necessary transformation can be realized by means of a detection of object features in both data sets and a subsequent determination of the external camera orientation. This paper presents a consistent solution for the measurement and combination of shape and displacement data including their transformation into simulation systems. The described procedure will be demonstrated on an automotive component. Thus more accurate and effective measurement techniques make it possible to bring experimental and numerical displacement analysis closer.
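
    The final step described above, recovering a 3D displacement vector from interference phases measured along at least three independent sensitivity directions, reduces to a small linear system; the sketch below uses hypothetical sensitivity vectors, phases and wavelength.

```python
# Recover the 3D displacement at one surface point from three interference phases measured
# with independent sensitivity directions; all numerical values are hypothetical.
import numpy as np

lam = 532e-9                                   # assumed laser wavelength (m)
S = np.array([[ 1.2,  0.1,  1.6],              # rows: sensitivity vectors from the set-up geometry
              [-0.3,  1.1,  1.7],
              [ 0.8, -0.9,  1.5]])
phase = np.array([4.7, 2.1, 3.3])              # unwrapped interference phases (rad)

# Each phase obeys phase_i = (2*pi/lam) * S[i] . d, so the displacement d solves a 3x3 system.
d = np.linalg.solve(S, phase * lam / (2 * np.pi))
print("displacement vector (m):", d)           # components on the order of the wavelength
```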

  19. Initial Performance of the Aspect System on the Chandra Observatory: Post-Facto Aspect Reconstruction

    NASA Technical Reports Server (NTRS)

    Aldcroft, T.; Karovska, M.; Cresitello-Dittmar, M.; Cameron, R.

    2000-01-01

    The aspect system of the Chandra Observatory plays a key role in realizing the full potential of Chandra's x-ray optics and detectors. To achieve the highest spatial and spectral resolution (for grating observations), an accurate post-facto time history of the spacecraft attitude and internal alignment is needed. The CXC has developed a suite of tools which process sensor data from the aspect camera assembly and gyroscopes, and produce the spacecraft aspect solution. In this poster, the design of the aspect pipeline software is briefly described, followed by details of aspect system performance during the first eight months of flight. The two key metrics of aspect performance are: image reconstruction accuracy, which measures the x-ray image blurring introduced by aspect; and celestial location, which is the accuracy of detected source positions in absolute sky coordinates.

  20. Effects of rubber shock absorber on the flywheel micro vibration in the satellite imaging system

    NASA Astrophysics Data System (ADS)

    Deng, Changcheng; Mu, Deqiang; Jia, Xuezhi; Li, Zongxuan

    2016-12-01

    When a satellite is in orbit, its flywheel will generate micro vibration and affect the imaging quality of the camera. In order to reduce this effect, a rubber shock absorber is used, and a numerical model and an experimental setup are developed in this study to investigate its effect on the micro vibration. An integrated model is developed for the system, and a ray tracing method is used in the modeling. The spot coordinates and displacements of the image plane are obtained, and the modulation transfer function (MTF) of the system is calculated. A satellite including a rubber shock absorber is designed, and the experiments are carried out. Both simulation and experimental results show that the MTF increases by almost 10%, suggesting the rubber shock absorber is useful in decreasing the effect of the flywheel vibration.

  1. Catchment-Scale Terrain Modelling with Structure-from-Motion Photogrammetry: a replacement for airborne lidar?

    NASA Astrophysics Data System (ADS)

    Brasington, J.

    2015-12-01

    Over the last five years, Structure-from-Motion photogrammetry has dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, have led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales where relaxed logistics permit the use of dense ground control and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km² where it could offer a competitive alternative to landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection. Results indicate that camera network design is a key determinant of model quality, and that standard aerial networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high quality, large-scale terrain products that are suitable for precision fluvial change detection.
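
    The a-posteriori datum transformation mentioned above (a 3D similarity, or Helmert, transform estimated from ground control) can be sketched with the closed-form Umeyama/Procrustes solution; the ground-control coordinates below are placeholders.

```python
# Similarity (Helmert) transform from model coordinates to the survey datum, estimated from
# ground-control points with the closed-form Umeyama/Procrustes solution. Coordinates are placeholders.
import numpy as np

def similarity_transform(src, dst):
    """Scale s, rotation R and translation t minimising ||dst - (s R src + t)||."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    H = A.T @ B / len(src)                                  # cross-covariance of the two point sets
    U, sing, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(sing) @ D) / A.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Bundle-adjustment (model) coordinates of the GCPs and their surveyed datum coordinates.
model = [[0, 0, 0], [10, 0, 0], [0, 12, 0.5], [9, 11, 1.0]]
datum = [[1000.0, 2000.0, 50.0], [1020.0, 2000.5, 50.2], [999.5, 2024.0, 51.0], [1017.8, 2022.5, 52.1]]
s, R, t = similarity_transform(model, datum)
print("scale:", s, "\nrotation:\n", R, "\ntranslation:", t)
```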

  2. Catchment-Scale Terrain Modelling with Structure-from-Motion Photogrammetry: a replacement for airborne lidar?

    NASA Astrophysics Data System (ADS)

    Brasington, James; James, Joe; Cook, Simon; Cox, Simon; Lotsari, Eliisa; McColl, Sam; Lehane, Niall; Williams, Richard; Vericat, Damia

    2016-04-01

    In recent years, 3D terrain reconstructions based on Structure-from-Motion photogrammetry have dramatically democratized the availability of high quality topographic data. This approach involves the use of a non-linear bundle adjustment to estimate simultaneously camera position, pose, distortion and 3D model coordinates. In contrast to traditional aerial photogrammetry, the bundle adjustment is typically solved without external constraints and instead ground control is used a posteriori to transform the modelled coordinates to an established datum using a similarity transformation. The limited data requirements, coupled with the ability to self-calibrate compact cameras, have led to a burgeoning of applications using low-cost imagery acquired terrestrially or from low-altitude platforms. To date, most applications have focused on relatively small spatial scales (0.1-5 ha), where relaxed logistics permit the use of dense ground control networks and high resolution, close-range photography. It is less clear whether this low-cost approach can be successfully upscaled to tackle larger, watershed-scale projects extending over 10²-10³ km² where it could offer a competitive alternative to established landscape modelling with airborne lidar. At such scales, compromises over the density of ground control, the speed and height of sensor platform and related image properties are inevitable. In this presentation we provide a systematic assessment of the quality of large-scale SfM terrain products derived for over 80 km² of the braided Dart River and its catchment in the Southern Alps of NZ. Reference data in the form of airborne and terrestrial lidar are used to quantify the quality of 3D reconstructions derived from helicopter photography and used to establish baseline uncertainty models for geomorphic change detection. Results indicate that camera network design is a key determinant of model quality, and that standard aerial photogrammetric networks based on strips of nadir photography can lead to unstable camera calibration and systematic errors that are difficult to model with sparse ground control. We demonstrate how a low cost multi-camera platform providing both nadir and oblique imagery can support robust camera calibration, enabling the generation of high quality, large-scale terrain products that are suitable for precision fluvial change detection.

  3. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946

  4. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.
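
    The in-process solver itself is not shown here; the sketch below only illustrates, in compressed form, the bundle-adjustment objective the abstract refers to, i.e. joint refinement of camera poses and 3D coordinates by minimising reprojection error. A real portable-photogrammetry bundle would use many images, sparse Jacobians and self-calibration; this toy uses two views, fixed intrinsics and synthetic data.

```python
# Toy bundle adjustment: camera poses and 3D coordinates are refined jointly by minimising
# the reprojection error over synthetic observations of a small target field.
import numpy as np
import cv2
from scipy.optimize import least_squares

K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1.0]])
n_pts, n_cams = 20, 2
rng = np.random.default_rng(1)
X_true = np.column_stack([rng.uniform(-1, 1, n_pts), rng.uniform(-1, 1, n_pts), rng.uniform(4, 6, n_pts)])
poses_true = [np.zeros(6), np.concatenate([[0.0, 0.2, 0.0], [-0.8, 0.0, 0.0]])]  # (rvec, tvec) per view

def project(X, pose):
    img, _ = cv2.projectPoints(X, pose[:3].reshape(3, 1), pose[3:].reshape(3, 1), K, None)
    return img.reshape(-1, 2)

# Noisy synthetic observations of the targets in both views.
observations = [project(X_true, p) + rng.normal(0, 0.3, (n_pts, 2)) for p in poses_true]

def residuals(params):
    poses = params[:6 * n_cams].reshape(n_cams, 6)
    X = params[6 * n_cams:].reshape(n_pts, 3)
    return np.concatenate([(project(X, p) - obs).ravel() for p, obs in zip(poses, observations)])

# Start from slightly perturbed poses/points and minimise the reprojection error.
x0 = np.concatenate([np.asarray(poses_true).ravel() + 0.01, (X_true + 0.05).ravel()])
fit = least_squares(residuals, x0)
print("RMS reprojection error (px):", np.sqrt(np.mean(fit.fun ** 2)))
```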

  5. Generation of high-dynamic range image from digital photo

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han

    2016-10-01

    A number of modern applications, such as medical imaging, remote sensing satellite imaging and virtual prototyping, use High Dynamic Range Images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera must be calibrated. The article proposes a camera calibration method that uses the clear sky as a standard light source, taking the sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article describes the basic algorithms for recovering real luminance values from an ordinary digital image and the corresponding software implementation of those algorithms. Moreover, examples of HDRI reconstructed from ordinary images illustrate the article.

  6. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics is truly portable, fitting in a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP. Telemedical applications are a perfect match for this camera system, exploiting both of its advantages: the portability as well as the digital image.

  7. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    In this paper is presented a method to capture the motion of the human body from multi image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self calibration methods are applied to gain exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A full automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence, thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the last case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to the conventional motion capture systems: 3-D trajectories of key points which can be afterwards analyzed and used for animation or medical purposes.
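
    The forward ray intersection step described above can be sketched, under assumptions, with OpenCV's triangulation of corresponding points from two oriented views; the projection matrices and matched pixel coordinates below are placeholders for the calibrated three-camera set-up.

```python
# Triangulation of matched points from two oriented views; projection matrices and
# pixel coordinates are placeholders for the calibrated camera triplet.
import numpy as np
import cv2

K = np.array([[1000.0, 0, 512], [0, 1000.0, 384], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                     # first camera at the origin
R2 = cv2.Rodrigues(np.array([[0.0], [0.15], [0.0]]))[0]               # second camera, slight yaw
P2 = K @ np.hstack([R2, np.array([[-0.5], [0.0], [0.0]])])            # and a 0.5 m baseline

pts1 = np.array([[512.0, 384.0], [600.0, 300.0]]).T                   # matched pixels, view 1 (2xN)
pts2 = np.array([[455.0, 384.0], [540.0, 301.0]]).T                   # matched pixels, view 2 (2xN)

Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)                        # homogeneous 4xN result
X = (Xh[:3] / Xh[3]).T
print(X)                                                              # 3D coordinates in the first camera frame
```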

  8. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) system has been one of the major interests among researchers in the field of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human computer interfacing and surveillance system for monitoring human behaviour as well as analysis of biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA, its taxonomy, including camera types, camera calibration and camera configuration. The review focused on evaluating the camera system consideration of the HMA system specifically for biomedical applications. This review is important as it provides guidelines and recommendation for researchers and practitioners in selecting a camera system of the HMA system for biomedical applications.

  9. Selection of optical model of stereophotography experiment for determination the cloud base height as a problem of testing of statistical hypotheses

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2017-10-01

    Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely a transformation of the real frame into the result that would be obtained if all the axes were perfectly adjusted. For such adjustment, images of stars as infinitely distant objects were used: on perfectly aligned cameras, the images on both the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial lens approximation (without lens aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order, and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, i.e. the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would be if measured with an ideally aligned setup, and then the distance to the cloud is calculated. The results were compared with data from a laser range finder.
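
    The two distortion models compared above can be written as radial polynomial corrections of the frame coordinates; the sketch below applies a 3rd-order and a 3rd-plus-5th-order model with placeholder coefficients to show that they diverge mainly towards the frame edges.

```python
# Radial distortion models of 3rd order (k1 only) and 3rd+5th order (k1, k2) applied to
# normalised frame coordinates; coefficients are placeholders.
import numpy as np

def radial_model(x, y, k1, k2=0.0):
    """Distorted coordinates for r' = r * (1 + k1*r**2 + k2*r**4)."""
    r2 = x ** 2 + y ** 2
    factor = 1 + k1 * r2 + k2 * r2 ** 2
    return x * factor, y * factor

# A point near the frame centre and one near the edge (normalised coordinates).
for x, y in [(0.05, 0.02), (0.45, 0.30)]:
    x3, y3 = radial_model(x, y, k1=-0.12)              # 3rd-order model
    x5, y5 = radial_model(x, y, k1=-0.12, k2=0.03)     # 3rd- and 5th-order model
    print(f"point ({x:.2f}, {y:.2f}): models differ by {np.hypot(x5 - x3, y5 - y3):.5f}")
```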

  10. Tight coordination of aerial flight maneuvers and sonar call production in insectivorous bats

    PubMed Central

    Falk, Benjamin; Kasnadi, Joseph; Moss, Cynthia F.

    2015-01-01

    Echolocating bats face the challenge of coordinating flight kinematics with the production of echolocation signals used to guide navigation. Previous studies of bat flight have focused on kinematics of fruit- and nectar-feeding bats, often in wind tunnels with limited maneuvering, and without analysis of echolocation behavior. In this study, we engaged insectivorous big brown bats in a task requiring simultaneous turning and climbing flight, and used synchronized high-speed motion-tracking cameras and audio recordings to quantify the animals' coordination of wing kinematics and echolocation. Bats varied flight speed, turn rate, climb rate and wingbeat rate as they navigated around obstacles, and they adapted their sonar signals in patterning, duration and frequency in relation to the timing of flight maneuvers. We found that bats timed the emission of sonar calls with the upstroke phase of the wingbeat cycle in straight flight, and that this relationship changed when bats turned to navigate obstacles. We also characterized the unsteadiness of climbing and turning flight, as well as the relationship between speed and kinematic parameters. Adaptations in the bats' echolocation call frequency suggest changes in beam width and sonar field of view in relation to obstacles and flight behavior. By characterizing flight and sonar behaviors in an insectivorous bat species, we find evidence of exquisitely tight coordination of sensory and motor systems for obstacle navigation and insect capture. PMID:26582935

  11. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents') implemented in the Brahms language run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  12. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
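
    The USGS control scripts are not reproduced here; the sketch below only illustrates the kind of Python time-lapse control described, using the Raspberry Pi picamera library, and assumes that GPS-accurate timing is already applied to the system clock (e.g. via gpsd/ntp), so frames are simply named with UTC timestamps. Paths and cadence are placeholders.

```python
# Minimal time-lapse sketch (not the USGS deployment code) using the Raspberry Pi picamera
# library; image timestamps rely on a GPS-disciplined system clock, which is assumed here.
import time
from datetime import datetime, timezone
from picamera import PiCamera

INTERVAL_S = 300                       # assumed cadence: one frame every 5 minutes
OUT_DIR = "/home/pi/images"            # assumed storage path for later field retrieval

camera = PiCamera(resolution=(2592, 1944))
time.sleep(2)                          # let exposure and white balance settle

try:
    while True:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        camera.capture(f"{OUT_DIR}/kilauea_{stamp}.jpg")
        time.sleep(INTERVAL_S)
finally:
    camera.close()
```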

  13. A new towed platform for the unobtrusive surveying of benthic habitats and organisms

    USGS Publications Warehouse

    Zawada, David G.; Thompson, P.R.; Butcher, J.

    2008-01-01

    Maps of coral ecosystems are needed to support many conservation and management objectives, as well as research activities. Examples include ground-truthing aerial and satellite imagery, characterizing essential habitat, assessing changes, and monitoring the progress of restoration efforts. To address some of these needs, the U.S. Geological Survey developed the Along-Track Reef-Imaging System (ATRIS), a boat-based sensor package for mapping shallow-water benthic environments. ATRIS consists of a digital still camera, a video camera, and an acoustic depth sounder affixed to a moveable pole. This design, however, restricts its deployment to clear waters less than 10 m deep. To overcome this limitation, a towed version has been developed, referred to as Deep ATRIS. The system is based on a light-weight, computer-controlled, towed vehicle that is capable of following a programmed diving profile. The vehicle is 1.3 m long with a 63-cm wing span and can carry a wide variety of research instruments, including CTDs, fluorometers, transmissometers, and cameras. Deep ATRIS is currently equipped with a high-speed (20 frames · s⁻¹) digital camera, custom-built light-emitting-diode lights, a compass, a 3-axis orientation sensor, and a nadir-looking altimeter. The vehicle dynamically adjusts its altitude to maintain a fixed height above the seafloor. The camera has a 29° x 22° field-of-view and captures color images that are 1360 x 1024 pixels in size. GPS coordinates are recorded for each image. A gigabit ethernet connection enables the images to be displayed and archived in real time on the surface computer. Deep ATRIS has a maximum tow speed of 2.6 m · s⁻¹ and a theoretical operating tow-depth limit of 27 m. With an improved tow cable, the operating depth can be extended to 90 m. Here, we present results from the initial sea trials in the Gulf of Mexico and Biscayne National Park, Florida, USA, and discuss the utility of Deep ATRIS for mapping coral reef habitats. Several example mosaics illustrate the high-quality imagery that can be obtained with this system. The images also reveal the potential for unobtrusive animal observations; fish and sea turtles are unperturbed by the presence of Deep ATRIS.

  14. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we are using in practice is a combination of Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions including MSER, Harris Affine, and Hessian Affine in the acquired images. These features are an alternative to popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used because the view point can change a lot between consecutive frames.
Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes the feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low frame rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling as suggested in to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Previous work suggested generating models by randomized sampling as in RANSAC but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in that work, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC requires for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In we have introduced a technique for measuring the size of camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, such as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
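
    The two-parameter radial model quoted above, θ = ar/(1 + br²), is simple enough to sketch in code. The following is a minimal illustration of backward projection (image radius to ray angle) and its closed-form inverse; the calibration values a and b are made up, not taken from the paper.

    ```python
    import numpy as np

    # Two-parameter radial model for an omnidirectional lens:
    #   theta = a * r / (1 + b * r**2)
    # r is the image-point radius (pixels from the distortion centre),
    # theta is the angle of the corresponding ray w.r.t. the optical axis.
    A, B = 1.3e-3, 2.0e-8  # illustrative calibration values, not from the paper

    def backproject_angle(r):
        """Backward projection: image radius -> ray angle (radians)."""
        return A * r / (1.0 + B * r**2)

    def project_radius(theta):
        """Forward projection: ray angle -> image radius.

        Solving theta*(1 + b*r^2) = a*r gives a quadratic in r:
            theta*b*r^2 - a*r + theta = 0
        The smaller positive root is the physically meaningful one (the
        discriminant is positive for angles within the model's range).
        """
        if theta == 0.0:
            return 0.0
        disc = A**2 - 4.0 * theta**2 * B
        return (A - np.sqrt(disc)) / (2.0 * theta * B)

    r = 600.0
    theta = backproject_angle(r)
    print(theta, project_radius(theta))  # round trip should recover r ~ 600
    ```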

  15. Continuous Mapping of Tunnel Walls in a GNSS-Denied Environment

    NASA Astrophysics Data System (ADS)

    Chapman, Michael A.; Min, Cao; Zhang, Deijin

    2016-06-01

    The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has increased, and as the aging and subsequent deterioration of these structures have introduced structural degradations and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued with various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging or visual odometry to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and lack of salient features. The sensors employed include forward-looking high resolution digital frame cameras coupled with auxiliary light sources. In addition, a high frequency lidar system and a thermal imager are included to offer three-dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless precise tunnel maps.
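
    The paper does not spell out its image-bridging algorithm; as a rough, generic illustration of visual odometry between consecutive frames, a relative pose can be estimated from matched features via an essential matrix and then chained along the sequence. The OpenCV-based sketch below assumes a hypothetical intrinsic matrix and leaves the translation scale to an external source such as the lidar.

    ```python
    import cv2
    import numpy as np

    # Assumed pinhole intrinsics of a forward-looking camera (not from the paper).
    K = np.array([[1200.0, 0.0, 960.0],
                  [0.0, 1200.0, 540.0],
                  [0.0, 0.0, 1.0]])

    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def relative_pose(img_prev, img_curr):
        """Estimate rotation and unit-scale translation between two grey frames."""
        kp1, des1 = orb.detectAndCompute(img_prev, None)
        kp2, des2 = orb.detectAndCompute(img_curr, None)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t  # translation is known only up to scale from images alone

    def chain_poses(frames):
        """Chain relative poses into a trajectory (sketch; sign/frame conventions
        depend on the recoverPose convention, and scale must come from elsewhere)."""
        R_w, t_w = np.eye(3), np.zeros((3, 1))
        trajectory = [t_w.copy()]
        for prev, curr in zip(frames[:-1], frames[1:]):
            R, t = relative_pose(prev, curr)
            t_w = t_w + R_w @ t
            R_w = R_w @ R
            trajectory.append(t_w.copy())
        return trajectory
    ```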

  16. Role of quality of service metrics in visual target acquisition and tracking in resource constrained environments

    NASA Astrophysics Data System (ADS)

    Anderson, Monica; David, Phillip

    2007-04-01

    Implementation of intelligent, automated target acquisition and tracking systems alleviates the need for operators to monitor video continuously. Such a system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides current system performance feedback to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.

  17. A Support System for Mouse Operations Using Eye-Gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Nakayama, Yasuhiro; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. Our conventional eye-gaze input system can detect horizontal eye-gaze with a high degree of accuracy. However, it can only classify vertical eye-gaze into 3 directions (up, middle and down). In this paper, we propose a new method for vertical eye-gaze detection that utilizes limbus tracking. Our new eye-gaze input system can therefore detect the two-dimensional coordinates of the user's gazing point. Using this method, we developed a new support system for mouse operations. This system can move the mouse cursor to the user's gazing point.
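
    The cursor-control step itself is not described in the record; conceptually, it maps the estimated two-dimensional gaze point to a screen position. A minimal sketch of that final step, assuming normalized gaze coordinates and using the generic pyautogui library (not necessarily what the authors used), is:

    ```python
    import pyautogui  # cross-platform mouse control

    # Assumed linear mapping from normalized gaze coordinates (0..1 in each
    # axis, as produced by some gaze estimator) to screen pixels.
    SCREEN_W, SCREEN_H = pyautogui.size()

    def move_cursor_to_gaze(gaze_x, gaze_y):
        """Move the mouse cursor to the estimated gaze point.

        gaze_x, gaze_y: normalized horizontal/vertical gaze estimates in [0, 1].
        """
        x = min(max(gaze_x, 0.0), 1.0) * SCREEN_W
        y = min(max(gaze_y, 0.0), 1.0) * SCREEN_H
        pyautogui.moveTo(x, y, duration=0.1)  # short duration smooths the motion

    # Example: gaze estimated at the centre-left of the screen.
    move_cursor_to_gaze(0.25, 0.5)
    ```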

  18. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.

  19. Three-dimensional shape measurement and calibration for fringe projection by considering unequal height of the projector and the camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu Feipeng; Shi Hongjian; Bai Pengxiang

    In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of the object surface is described in detail. By formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method with linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
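
    The reported linear relationship between the out-of-plane calibration coefficient and the y coordinate can be fitted by ordinary linear least squares. A minimal sketch with hypothetical sample values is:

    ```python
    import numpy as np

    # Hypothetical calibration samples: y image coordinate (pixels) and the
    # out-of-plane calibration coefficient measured at that y.
    y = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
    coeff = np.array([0.0410, 0.0422, 0.0435, 0.0447, 0.0461])

    # Linear least-squares fit: coeff(y) = slope * y + intercept.
    slope, intercept = np.polyfit(y, coeff, 1)

    def calibration_coefficient(y_pixel):
        """Return the fitted out-of-plane coefficient at a given y coordinate."""
        return slope * y_pixel + intercept

    print(calibration_coefficient(600.0))
    ```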

  20. Laser guide star pointing camera for ESO LGS Facilities

    NASA Astrophysics Data System (ADS)

    Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.

    2014-08-01

    Every observatory using LGS-AO routinely experiences the long time needed to bring the laser guide star into the wavefront sensor field of view and acquire it. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer in the case of multiple LGS systems. In this framework the optimization of the LGS system's absolute pointing accuracy is relevant to boost the time efficiency of both science and technical observations. In this paper we show the rationale, the design and the feasibility tests of an LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS, while the VLT is doing the initial active optics cycles to adjust its own optics on a natural star target, after a preset. The LPC minimizes the accuracy needed for LGS pointing model calibrations, while allowing sub-arcsec LGS absolute pointing accuracy to be reached. This considerably reduces the LGS acquisition time and observation operation overheads. The LPC is a smart CCD camera, fed by a 150 mm diameter aperture Maksutov telescope, mounted on the top ring of the VLT UT4, running Linux and acting as a server for the 4LGSF client. The smart camera is able to recognize the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request it returns the offsets to apply to the LGS to position them at the required sky coordinates. As a byproduct goal, once calibrated, the LPC can calculate upon request, for each LGS, its return flux, its FWHM and the uplink beam scattering levels.

  1. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
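
    The patent abstract stops at determining a change in position from differences between corresponding 3D feature positions. One standard way to turn two sets of corresponding 3D points into a rigid-motion estimate is the SVD-based (Kabsch) fit sketched below; this is a generic technique offered for illustration, not necessarily the method claimed in the patent.

    ```python
    import numpy as np

    def rigid_transform(points_a, points_b):
        """Least-squares rigid motion (R, t) with points_b ~ R @ points_a + t.

        points_a, points_b: (N, 3) arrays of corresponding 3D feature positions,
        e.g. features from the first and second camera images augmented with
        range data.
        """
        ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
        H = (points_a - ca).T @ (points_b - cb)     # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # correct an improper rotation
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = cb - R @ ca
        return R, t

    # If the features are static world points expressed in the two camera frames,
    # the camera's displacement in the first frame is -R.T @ t; dividing by the
    # inter-frame time would give a VO-based velocity estimate.
    ```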

  2. Angle of sky light polarization derived from digital images of the sky under various conditions.

    PubMed

    Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Yang, Yi; Ning, Yu

    2017-01-20

    Skylight polarization is used for navigation by some birds and insects. Skylight polarization also has potential for human navigation applications. Its advantages include relative immunity from interference and the absence of error accumulation over time. However, there are presently few examples of practical applications for polarization navigation technology. The main reason is its weak robustness during cloudy weather conditions. In this paper, the real-time measurement of the sky light polarization pattern across the sky has been achieved with a wide field of view camera. The images were processed under a new reference coordinate system to clearly display the symmetrical distribution of angle of polarization with respect to the solar meridian. A new algorithm for the extraction of the image axis of symmetry is proposed, in which the real-time azimuth angle between the camera and the solar meridian is accurately calculated. Our experimental results under different weather conditions show that polarization navigation has high accuracy, is strongly robust, and performs well during fog and haze, clouds, and strong sunlight.
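
    The paper's reference-frame transformation and symmetry-axis extraction are not reproduced here, but the per-pixel quantity it maps, the angle of polarization, is commonly computed from Stokes parameters derived from intensity images taken through polarizers at 0°, 45°, 90° and 135°. A generic sketch (this acquisition scheme is an assumption; the paper's camera setup may differ) is:

    ```python
    import numpy as np

    def angle_of_polarization(i0, i45, i90, i135):
        """Per-pixel angle of polarization (radians) from four polarizer images.

        i0..i135: 2D intensity arrays taken through linear polarizers at
        0, 45, 90 and 135 degrees.
        """
        s1 = i0 - i90          # Stokes Q
        s2 = i45 - i135        # Stokes U
        return 0.5 * np.arctan2(s2, s1)   # AoP in (-pi/2, pi/2]

    def degree_of_linear_polarization(i0, i45, i90, i135):
        """Companion quantity often mapped alongside the AoP."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    ```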

  3. Measurement of material mechanical properties in microforming

    NASA Astrophysics Data System (ADS)

    Yun, Wang; Xu, Zhenying; Hui, Huang; Zhou, Jianzhong

    2006-02-01

    As the rapid market growth of micro-electro-mechanical systems engineering brings wide development and application ranging from mobile phones to medical apparatus, the need for metal micro-parts is increasing gradually. Microforming technology challenges conventional plastic processing technology. Findings have shown that if the grain size of the specimen remains constant, the flow stress changes with increasing miniaturization, as do the necking elongation and the uniform elongation. It is impossible to obtain the specimen's material properties in a conventional tensile test machine, especially under high precision demands. Therefore, a new measurement method for obtaining the specimen's mechanical properties with high precision is introduced. With this method, coupled with the high speed of a Charge Coupled Device (CCD) camera and the high precision of a Coordinate Measuring Machine (CMM), the elongation and tensile strain in the gauge length are obtained. The elongation, yield stress and other mechanical properties can be calculated from the relationship between the images and the CCD camera movement. This measuring method can be extended to other experiments, such as the alignment of the tool and specimen and the micro-drawing process.

  4. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    NASA Astrophysics Data System (ADS)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were used to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob changes. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of sign language based Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
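
    The PCA step described above, reducing trajectory features to principal components before classification, can be sketched generically as follows; the feature layout and the number of retained components are assumptions, not values from the paper.

    ```python
    import numpy as np

    def pca_reduce(features, n_components=10):
        """Project trajectory feature vectors onto their top principal components.

        features: (n_samples, n_dims) array, e.g. flattened hand-trajectory
        coordinates plus hand-area changing factors (layout assumed here).
        Returns the reduced features, the projection basis and the data mean.
        """
        mean = features.mean(axis=0)
        centered = features - mean
        # SVD of the centered data gives the principal axes in the rows of Vt.
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        basis = Vt[:n_components]
        reduced = centered @ basis.T
        return reduced, basis, mean

    # New gestures are projected with the same basis before classification:
    #   reduced_new = (new_features - mean) @ basis.T
    ```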

  5. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating it to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Earth's Radiation Belts: The View from Juno's Cameras

    NASA Astrophysics Data System (ADS)

    Becker, H. N.; Joergensen, J. L.; Hansen, C. J.; Caplinger, M. A.; Ravine, M. A.; Gladstone, R.; Versteeg, M. H.; Mauk, B.; Paranicas, C.; Haggerty, D. K.; Thorne, R. M.; Connerney, J. E.; Kang, S. S.

    2013-12-01

    Juno's cameras, particle instruments, and ultraviolet imaging spectrograph have been heavily shielded for operation within Jupiter's high radiation environment. However, varying quantities of >1-MeV electrons and >10-MeV protons will be energetic enough to penetrate instrument shielding and be detected as transient background signatures by the instruments. The differing shielding profiles of Juno's instruments lead to differing spectral sensitivities to penetrating electrons and protons within these regimes. This presentation will discuss radiation data collected by Juno in the Earth's magnetosphere during Juno's October 9, 2013 Earth flyby (559 km altitude at closest approach). The focus will be data from Juno's Stellar Reference Unit, Advanced Stellar Compass star cameras, and JunoCam imager acquired during coordinated proton measurements within the inner zone and during the spacecraft's inbound and outbound passages through the outer zone (L ~3-5). The background radiation signatures from these cameras will be correlated with dark count background data collected at these geometries by Juno's Ultraviolet Spectrograph (UVS) and Jupiter Energetic Particle Detector Instrument (JEDI). Further comparison will be made to Van Allen Probe data to calibrate Juno's camera results and contribute an additional view of the Earth's radiation environment during this unique event.

  7. A Near-Infrared SETI Experiment: Alignment and Astrometric Precision

    NASA Astrophysics Data System (ADS)

    Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan

    2016-06-01

    Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument, aiming to search for fast nanosecond laser pulses, has been commissioned on the Nickel 1m-telescope at Lick Observatory. The NIROSETI instrument makes use of an optical guide camera, a SONY ICX694 CCD from PointGrey, to align our selected sources onto two 200µm near-infrared Avalanche Photo Diodes (APDs) with a field-of-view of 2.5"x2.5" each. These APD detectors operate at very fast bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, static optical distortion solution, and relative orientation with respect to the APD detectors. We determined the guide camera plate scale as 55.9 ± 2.7 milli-arcseconds/pixel and a magnitude limit of 18.15 mag (+1.07/-0.58) in V-band. We will present the full distortion solution of the guide camera, its orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.

  8. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller.

    PubMed

    Lopez-Franco, Carlos; Gomez-Avila, Javier; Alanis, Alma Y; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-08-12

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms and well-defined interfaces, to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others provoke the idea that real-time implementations can be considered a difficult task. For the purpose of showing the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results.
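
    The controller described above is a neural-network-adapted PID; the plain discrete PID core that such a scheme builds on can be sketched as follows. The gains, time step, and velocity-error interface are illustrative, and the neural adaptation of the gains is omitted.

    ```python
    import numpy as np

    class PID:
        """Discrete PID controller acting on a 3D velocity error vector.

        In an IBVS scheme like the one described, the reference is a desired
        velocity computed from image features and the measurement is the
        vehicle's estimated velocity; the output is a command to the hexarotor.
        The gains here are placeholders, and the paper's neural adaptation of
        the gains is not modelled.
        """
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = np.zeros(3)
            self.prev_error = np.zeros(3)

        def update(self, reference, measurement):
            error = np.asarray(reference) - np.asarray(measurement)
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.02)
    command = controller.update(reference=[0.5, 0.0, 0.1], measurement=[0.4, 0.1, 0.0])
    ```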

  9. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller

    PubMed Central

    Lopez-Franco, Carlos; Alanis, Alma Y.; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-01-01

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms and well-defined interfaces, to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others provoke the idea that real-time implementations can be considered a difficult task. For the purpose of showing the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results. PMID:28805689

  10. Producing a Linear Laser System for 3D Modeling of Small Objects

    NASA Astrophysics Data System (ADS)

    Amini, A. Sh.; Mozaffar, M. H.

    2012-07-01

    Today, three-dimensional modeling of objects is considered in many applications such as documentation of ancient heritage, quality control, reverse engineering and animation. In this regard, there are a variety of methods for producing three-dimensional models. In this paper, a 3D modeling system is developed based on a photogrammetric method using image processing and laser line extraction from images. In this method the laser beam profile is projected onto the body of the object and, with video image acquisition and extraction of the laser line from the frames, three-dimensional coordinates of the object can be obtained. In this regard, first the design and implementation of the hardware, including the cameras and laser systems, was conducted. Afterwards, the system was calibrated. Finally, the software of the system was implemented for three-dimensional data extraction. The system was investigated for modeling a number of objects. The results showed that the system can provide benefits such as low cost, appropriate speed and acceptable accuracy in 3D modeling of objects.

  11. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

    The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental set-up, then explore how the system error changes with different cameras, environmental set-ups and inversions. With these experiments, I learn about the importance of the dynamic range of the camera, and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets, and levels of system error, to find the number of cameras needed for a full-scale implementation.

  12. Visual field information in Nap-of-the-Earth flight by teleoperated Helmet-Mounted displays

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kohn, S.; Merhav, S. J.

    1991-01-01

    The human ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these types of displays originates from a Forward Looking Infrared Radiation Camera, gimbal-mounted at the front of the aircraft and slaved to the pilot's line-of-sight, to obtain wide-angle visual coverage. Although these displays have proved effective in Apache and Cobra helicopter night operations, they demand very high pilot proficiency and workload. Experimental work presented in the paper has shown that part of the difficulties encountered in vehicular control by means of these displays can be attributed to the narrow viewing aperture and head/camera slaving system phase lags. Both these shortcomings will impair visuo-vestibular coordination when voluntary head rotation is present. This might result in errors in estimating the Control-Oriented Visual Field Information vital in vehicular control, such as the vehicle yaw rate or the anticipated flight path, or might even lead to visuo-vestibular conflicts (motion sickness). Since, under these conditions, the pilot will tend to minimize head rotation, the full wide-angle coverage of the Helmet-Mounted Display, provided by the line-of-sight slaving system, is not always fully utilized.

  13. Simultaneous observations of traveling convection vortices: Ionosphere-thermosphere coupling

    NASA Astrophysics Data System (ADS)

    Kim, Hyomin; Lessard, Marc R.; Jones, Sarah L.; Lynch, Kristina A.; Fernandes, Philip A.; Aruliah, Anasuya L.; Engebretson, Mark J.; Moen, Jøran I.; Oksavik, Kjellmar; Yahnin, Alexander G.; Yeoman, Timothy K.

    2017-05-01

    We present simultaneous observations of magnetosphere-ionosphere-thermosphere coupling over Svalbard during a traveling convection vortex (TCV) event. Various spaceborne and ground-based instruments made coordinated measurements, including magnetometers, particle detectors, an all-sky camera, European Incoherent Scatter (EISCAT) Svalbard Radar, Super Dual Auroral Radar Network (SuperDARN), and SCANning Doppler Imager (SCANDI). The instruments recorded TCVs associated with a sudden change in solar wind dynamic pressure. The data display typical features of TCVs including vortical ionospheric convection patterns seen by the ground magnetometers and SuperDARN radars and auroral precipitation near the cusp observed by the all-sky camera. Simultaneously, electron and ion temperature enhancements with corresponding density increase from soft precipitation are also observed by the EISCAT Svalbard Radar. The ground magnetometers also detected electromagnetic ion cyclotron waves at the approximate time of the TCV arrival. This implies that they were generated by a temperature anisotropy resulting from a compression on the dayside magnetosphere. SCANDI data show a divergence in thermospheric winds during the TCVs, presumably due to thermospheric heating associated with the current closure linked to a field-aligned current system generated by the TCVs. We conclude that solar wind pressure impulse-related transient phenomena can affect even the upper atmospheric dynamics via current systems established by a magnetosphere-ionosphere-thermosphere coupling process.

  14. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated into camera fabrication, making it possible to make HDTV cameras by methods similar to those of the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  15. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best-views generated by the method with respect to the camera views manually generated by a human operator.

  16. Geometrical Calibration of the Photo-Spectral System and Digital Maps Retrieval

    NASA Astrophysics Data System (ADS)

    Bruchkouskaya, S.; Skachkova, A.; Katkovski, L.; Martinov, A.

    2013-12-01

    Imaging systems for remote sensing of the Earth are required to demonstrate high metric accuracy of the picture which can be provided through preliminary geometrical calibration of optical systems. Being defined as a result of the geometrical calibration, parameters of internal and external orientation of the cameras are needed while solving such problems of image processing, as orthotransformation, geometrical correction, geographical coordinate fixing, scale adjustment and image registration from various channels and cameras, creation of image mosaics of filmed territories, and determination of geometrical characteristics of objects in the images. The geometrical calibration also helps to eliminate image deformations arising due to manufacturing defects and errors in installation of camera elements and photo receiving matrices as well as those resulted from lens distortions. A Photo-Spectral System (PhSS), which is intended for registering reflected radiation spectra of underlying surfaces in a wavelength range from 350 nm to 1050 nm and recording images of high spatial resolution, has been developed at the A.N. Sevchenko Research Institute of Applied Physical Problems of the Belarusian State University. The PhSS has undergone flight tests over the territory of Belarus onboard the Antonov AN-2 aircraft with the aim to obtain visible range images of the underlying surface. Then we performed the geometrical calibration of the PhSS and carried out the correction of images obtained during the flight tests. Furthermore, we have plotted digital maps of the terrain using the stereo pairs of images acquired from the PhSS and evaluated the accuracy of the created maps. Having obtained the calibration parameters, we apply them for correction of the images from another identical PhSS device, which is located at the Russian Orbital Segment of the International Space Station (ROS ISS), aiming to retrieve digital maps of the terrain with higher accuracy.

  17. Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.

    PubMed

    Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L

    2011-01-01

    Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and the image-processing algorithms in order to automatically calculate flow velocity on-line. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.
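
    The in-house velocity algorithm is not given in the record; a generic sketch of the underlying step, tracking surface features between frames and converting their displacement to a velocity, is shown below using pyramidal Lucas-Kanade optical flow in OpenCV. The pixel-to-metre scale and frame interval are assumed values that would come from the camera calibration.

    ```python
    import cv2
    import numpy as np

    PIXELS_PER_METRE = 850.0   # assumed scale from the camera-to-channel geometry
    FRAME_INTERVAL_S = 0.04    # assumed time between frames (25 fps)

    def surface_velocity(frame_prev, frame_curr):
        """Estimate a representative surface flow speed (m/s) between two grey frames."""
        pts_prev = cv2.goodFeaturesToTrack(frame_prev, maxCorners=200,
                                           qualityLevel=0.01, minDistance=7)
        if pts_prev is None:
            return None
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(frame_prev, frame_curr,
                                                       pts_prev, None)
        good_prev = pts_prev[status.flatten() == 1].reshape(-1, 2)
        good_curr = pts_curr[status.flatten() == 1].reshape(-1, 2)
        if len(good_prev) == 0:
            return None
        displacements = np.linalg.norm(good_curr - good_prev, axis=1)  # pixels
        median_disp_m = np.median(displacements) / PIXELS_PER_METRE
        return median_disp_m / FRAME_INTERVAL_S
    ```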

  18. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First of all, according to the principles of geometrical optics, we can deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principle of binocular vision we can deduce the relationship between binocular vision and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular vision and obtain the positional relation of the prism, camera, and object that gives the best stereo display effect. Finally, using the active shutter stereo glasses of NVIDIA Company, we can realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various viewing configurations of the eyes. The stereo imaging system designed by the method proposed in this paper can faithfully restore the 3-D shape of the photographed object.

  19. Biomechanics Analysis of Combat Sport (Silat) By Using Motion Capture System

    NASA Astrophysics Data System (ADS)

    Zulhilmi Kaharuddin, Muhammad; Badriah Khairu Razak, Siti; Ikram Kushairi, Muhammad; Syawal Abd. Rahman, Mohamed; An, Wee Chang; Ngali, Z.; Siswanto, W. A.; Salleh, S. M.; Yusup, E. M.

    2017-01-01

    ‘Silat’ is a Malay traditional martial art that is practiced at both amateur and professional levels. The intensity of the motion spurs scientific research in biomechanics. The main purpose of this abstract is to present the biomechanics method used in the study of ‘silat’. Using a 3D Depth Camera motion capture system, two subjects each performed ‘Jurus Satu’ in three repetitions. One subject is set as the benchmark for the research. The videos are captured and the data are processed using the 3D Depth Camera server system in the form of 16 3D body joint coordinates, which are then transformed into displacement, velocity and acceleration components by using Microsoft Excel for data calculation and Matlab software for simulation of the body. The translated data obtained serve as an input to differentiate both subjects' execution of the ‘Jurus Satu’. Nine primary movements with the addition of five secondary movements are observed visually frame by frame from the simulation obtained to get the exact frame in which the movement takes place. Further analysis involves the differentiation of both subjects' execution by referring to the mean and standard deviation of joints for each parameter stated. The findings provide useful data for joint kinematic parameters as well as to improve the execution of ‘Jurus Satu’ and to exhibit the process of learning a relatively unknown movement by the use of a motion capture system.
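
    The conversion from tracked joint coordinates to displacement, velocity and acceleration described above amounts to numerical differentiation of the joint positions over time. A minimal sketch, assuming a fixed capture rate, is:

    ```python
    import numpy as np

    CAPTURE_RATE_HZ = 30.0          # assumed frame rate of the depth camera
    DT = 1.0 / CAPTURE_RATE_HZ

    def joint_kinematics(positions):
        """Derive kinematic parameters from tracked joint positions.

        positions: (n_frames, n_joints, 3) array of 3D joint coordinates.
        Returns displacement from the first frame, velocity, acceleration
        and scalar speed, all per joint and per frame.
        """
        displacement = positions - positions[0]                 # relative to start
        velocity = np.gradient(positions, DT, axis=0)           # per-component velocity
        acceleration = np.gradient(velocity, DT, axis=0)        # per-component acceleration
        speed = np.linalg.norm(velocity, axis=2)                # scalar joint speed
        return displacement, velocity, acceleration, speed
    ```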

  20. An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.

    PubMed

    Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano

    2018-01-31

    The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point clouds handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera.
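
    Without reproducing the paper's derivation, the final numerical step of its forward projection, selecting the physically valid real root of a 4th order polynomial, can be sketched generically; the coefficients themselves would come from the scene point, mirror radius and mirror centre.

    ```python
    import numpy as np

    def quartic_real_roots(coeffs):
        """Real roots of a quartic a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 = 0.

        coeffs: iterable (a4, a3, a2, a1, a0). In the paper's model the valid
        root locates the reflection point on the spherical mirror; the
        coefficient values are not reproduced here.
        """
        roots = np.roots(coeffs)
        return [r.real for r in roots if abs(r.imag) < 1e-9]

    # Example with arbitrary coefficients, purely to show the call:
    print(quartic_real_roots([1.0, -3.0, 2.0, 1.0, -1.0]))
    ```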

  1. An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System

    PubMed Central

    Barone, Sandro; Carulli, Marina; Razionale, Armando Viviano

    2018-01-01

    The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point clouds handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera. PMID:29385051

  2. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbital Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera optical axis, such pointing information is made available.

  3. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distances under different lighting conditions. To realize this purpose, the traffic sign recognition is developed within an originally proposed dual-focal active camera system. In this system, a telephoto camera is employed as an assistant to a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the view field of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the accuracy of the traffic sign in the image from the wide angle camera is low. In the proposed system, the traffic sign detection and classification are processed separately for the different images from the wide angle camera and telephoto camera. Besides, in order to detect traffic signs against complex backgrounds in different lighting conditions, we propose a type of color transformation which is invariant to lighting changes. This color transformation is conducted to highlight the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution in different lighting conditions.

  4. The HRSC on Mars Express: Mert Davies' Involvement in a Novel Planetary Cartography Experiment

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Waehlisch, M.; Giese, B.; Scholten, F.; Hoffmann, H.; Jaumann, R.; Neukum, G.

    2002-12-01

    Mert Davies was a team member of the HRSC (High Resolution Stereo Camera) imaging experiment (PI: Gerhard Neukum) on ESA's Mars Express mission. This pushbroom camera is equipped with 9 forward- and backward-looking CCD lines, 5184 samples each, mounted in parallel, perpendicular to the spacecraft velocity vector. Flight image data with resolutions of up to 10m/pix (from an altitude of 250 km) will be acquired line by line as the spacecraft moves. This acquisition strategy will result in 9 separate almost completely overlapping image strips, each of them having more than 27,000 image lines, typically. [HRSC is also equipped with a superresolution channel for imaging of selected targets at up to 2.3 m/pixel]. The combined operation of the nadir and off-nadir CCD lines (+18.9°, 0°, -18.9°) gives HRSC a triple-stereo capability for precision mapping of surface topography and for modelling of spacecraft orbit- and camera pointing errors. The goals of the camera are to obtain accurate control point networks, Digital Elevation Models (DEMs) in Mars-fixed coordinates, and color orthoimages at global (100% of the surface will be covered with resolutions better than 30m/pixel) and local scales. With his long experience in all aspects of planetary geodesy and cartography, Mert Davies was involved in the preparations of this novel Mars imaging experiment which included: (a) development of a ground data system for the analysis of triple-stereo images, (b) camera testing during airborne imaging campaigns, (c) re-analysis of the Mars control point network, and generation of global topographic orthoimage maps on the basis of MOC images and MOLA data, (d) definition of the quadrangle scheme for a new topographic image map series 1:200K, (e) simulation of synthetic HRSC imaging sequences and their photogrammetric analysis. Mars Express is scheduled for launch in May of 2003. We miss Mert very much!

  5. VizieR Online Data Catalog: BzJK observations around radio galaxies (Galametz+, 2009)

    NASA Astrophysics Data System (ADS)

    Galametz, A.; De Breuck, C.; Vernet, J.; Stern, D.; Rettura, A.; Marmo, C.; Omont, A.; Allen, M.; Seymour, N.

    2010-02-01

    We imaged the two targets using the Bessel B-band filter of the Large Format Camera (LFC) on the Palomar 5m Hale Telescope. We imaged the radio galaxy fields using the z-band filter of Palomar/LFC. In February 2005, we observed 7C 1751+6809 for 60-min under photometric conditions. In August 2005, we observed 7C 1756+6520 for 135-min but in non-photometric conditions. The tables provide the B, z, J and Ks magnitudes and coordinates of the pBzK* galaxies (red passively evolving candidates selected by BzK=(z-K)-(B-z)<-0.2 and (z-K)>2.2) for both fields. The B and z bands were obtained using the Large Format Camera (LFC) on the Palomar 5m Hale Telescope, and the J and Ks bands using Wide-field Infrared Camera (WIRCAM) of the Canada-France-Hawaii Telescope (CFHT). (2 data files).
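
    The pBzK* colour criterion quoted above can be applied directly to the catalogued magnitudes; a minimal sketch follows (the function name and example magnitudes are placeholders, and the magnitudes are assumed to be on the photometric system used by the catalogue itself).

```python
def is_passive_bzk(B, z, K):
    """Apply the pBzK* selection quoted in the catalogue description:
    BzK = (z - K) - (B - z) < -0.2  and  (z - K) > 2.2."""
    bzk = (z - K) - (B - z)
    return bzk < -0.2 and (z - K) > 2.2

# Example with made-up magnitudes:
# is_passive_bzk(B=25.8, z=23.1, K=20.5)
```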

  6. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to which the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
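
    The core of the geo-rectification described here is a projection of ground (DEM) points through the camera pose and intrinsics; a minimal pinhole-model sketch follows, where K, R and t are assumed inputs from the camera model and the INS rather than values taken from this report.

```python
import numpy as np

def project_dem_points(points_world, K, R, t):
    """Project Nx3 ground/DEM points into pixel coordinates with a pinhole
    model: a world point X maps to x ~ K (R X + t).

    K : 3x3 intrinsics, R : 3x3 rotation, t : length-3 translation."""
    pts_cam = points_world @ R.T + t        # world frame -> camera frame
    in_front = pts_cam[:, 2] > 0            # keep points ahead of the camera
    pix_h = pts_cam @ K.T                   # homogeneous pixel coordinates
    pix = pix_h[:, :2] / pix_h[:, 2:3]      # perspective divide
    return pix, in_front
```

    Looping such a projection over the DEM cells inside the region of interest is what allows the image to be "painted" onto the ground.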

  7. Delivering the Lee Silverman Voice Treatment (LSVT) by Web Camera: A Feasibility Study

    ERIC Educational Resources Information Center

    Howell, Susan; Tripoliti, Elina; Pring, Tim

    2009-01-01

    Background: Speech disorders are a feature of Parkinson's disease, typically worsening as the disease progresses. The Lee Silverman Voice Treatment (LSVT) was developed to address these difficulties. It targets vocal loudness as a means of increasing vocal effort and improving coordination across the subsystems of speech. Aims: Currently LSVT is…

  8. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    PubMed Central

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In the study of the SLAM problem using an RGB-D camera, depth information and visual information, as the two types of primary measurement data, are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256
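
    The projection model tying image-plane coordinates to depth values can be illustrated with the standard pinhole back-projection used for RGB-D cameras; this is a generic sketch with assumed intrinsics (fx, fy, cx, cy), not the paper's extended model.

```python
import numpy as np

def backproject_rgbd(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth Z into a 3D point in the
    camera coordinate system: X = (u - cx) Z / fx, Y = (v - cy) Z / fy."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])

# In an extended bundle adjustment, both the reprojection residuals of such
# points and their depth residuals would enter the cost function.
```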

  9. Using a digital video camera to examine coupled oscillations

    NASA Astrophysics Data System (ADS)

    Greczylo, T.; Debowska, E.

    2002-07-01

    In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
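
    Locating the normal modes from the two measured coordinates amounts to finding the dominant peaks in their Fourier spectra; a minimal NumPy sketch is shown below (the sampling rate and signal names are placeholders for the camera and sensor data).

```python
import numpy as np

def dominant_frequencies(signal, fs, n_peaks=2):
    """Return the n_peaks strongest frequencies (Hz) of a real, evenly
    sampled signal, using the magnitude of its real FFT. Adequate for
    well-separated narrow peaks; otherwise use a proper peak finder."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                 # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    order = np.argsort(spectrum)[::-1]
    return freqs[order[:n_peaks]]

# e.g. dominant_frequencies(vertical_displacement, fs=25.0) would return the
# two coupled-mode frequencies of the Wilberforce pendulum.
```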

  10. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

    The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.

  11. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market, with UAVs specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir-looking camera is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air, using an MD4-1000 UAS from microdrones as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of test flights.

  12. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.

    PubMed

    Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2011-01-01

    A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye affects 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye has been proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single camera system has been developed for pose estimation combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and the pose estimation of the eye have been validated in a set of experiments. The overall system errors, including pose estimation and calibration, are 3.12 mm and 4.68 degrees.

  13. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.

  14. Depth detection in interactive projection system based on one-shot black-and-white stripe pattern.

    PubMed

    Zhou, Qian; Qiao, Xiaorui; Ni, Kai; Li, Xinghui; Wang, Xiaohao

    2017-03-06

    A novel method was proposed in this research that estimates not only the screen surface, as conventional methods do, but also depth information from two-dimensional coordinates in an interactive projection system. In this method, a one-shot black-and-white stripe pattern from a projector is projected onto a screen plane, and the deformed pattern is captured by a charge-coupled device camera. An algorithm based on simultaneous object/shadow detection is proposed to establish the correspondence. The depth information of the object is then calculated using the triangulation principle. This technology provides a more direct feeling of virtual interaction in three dimensions without auxiliary equipment or a special screen as interaction proxies. Simulations and experiments were carried out, and the results verified the effectiveness of this method for depth detection.
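
    The depth computation by the triangulation principle can be sketched with the usual rectified projector-camera geometry; the baseline, focal length and measured stripe shift below are generic assumptions, not the calibration values of this work.

```python
def depth_from_stripe_shift(shift_px, baseline_m, focal_px):
    """Triangulate depth for a rectified projector-camera pair: a stripe
    observed shift_px pixels away from its reference position corresponds
    to Z = baseline_m * focal_px / shift_px."""
    if shift_px <= 0:
        raise ValueError("stripe shift must be positive for a valid depth")
    return baseline_m * focal_px / shift_px

# Example with assumed values: a 120-pixel shift, 0.15 m baseline and
# 1400 px focal length give depth_from_stripe_shift(120, 0.15, 1400) = 1.75 m.
```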

  15. Photogrammetry Methodology Development for Gossamer Spacecraft Structures

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Jones, Thomas W.; Walford, Alan; Black, Jonathan T.; Robson, Stuart; Shortis, Mark R.

    2002-01-01

    Photogrammetry, the science of calculating 3D object coordinates from images, is a flexible and robust approach for measuring the static and dynamic characteristics of future ultralightweight and inflatable space structures (a.k.a. Gossamer structures), such as large membrane reflectors, solar sails, and thin-film solar arrays. Shape and dynamic measurements are required to validate new structural modeling techniques and corresponding analytical models for these unconventional systems. This paper summarizes experiences at NASA Langley Research Center over the past three years to develop or adapt photogrammetry methods for the specific problem of measuring Gossamer space structures. Turnkey industrial photogrammetry systems were not considered a cost-effective choice for this basic research effort because of their high purchase and maintenance costs. Instead, this research uses mainly off-the-shelf digital-camera and software technologies that are affordable to most organizations and provide acceptable accuracy.

  16. Tight coordination of aerial flight maneuvers and sonar call production in insectivorous bats.

    PubMed

    Falk, Benjamin; Kasnadi, Joseph; Moss, Cynthia F

    2015-11-01

    Echolocating bats face the challenge of coordinating flight kinematics with the production of echolocation signals used to guide navigation. Previous studies of bat flight have focused on kinematics of fruit and nectar-feeding bats, often in wind tunnels with limited maneuvering, and without analysis of echolocation behavior. In this study, we engaged insectivorous big brown bats in a task requiring simultaneous turning and climbing flight, and used synchronized high-speed motion-tracking cameras and audio recordings to quantify the animals' coordination of wing kinematics and echolocation. Bats varied flight speed, turn rate, climb rate and wingbeat rate as they navigated around obstacles, and they adapted their sonar signals in patterning, duration and frequency in relation to the timing of flight maneuvers. We found that bats timed the emission of sonar calls with the upstroke phase of the wingbeat cycle in straight flight, and that this relationship changed when bats turned to navigate obstacles. We also characterized the unsteadiness of climbing and turning flight, as well as the relationship between speed and kinematic parameters. Adaptations in the bats' echolocation call frequency suggest changes in beam width and sonar field of view in relation to obstacles and flight behavior. By characterizing flight and sonar behaviors in an insectivorous bat species, we find evidence of exquisitely tight coordination of sensory and motor systems for obstacle navigation and insect capture. © 2015. Published by The Company of Biologists Ltd.

  17. The research of adaptive-exposure on spot-detecting camera in ATP system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu

    2013-08-01

    A high-precision acquisition, tracking, and pointing (ATP) system is one of the key techniques of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that the ATP system can obtain the position of the communication terminal. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system requires high-precision target detection; its positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise. However, the beacon intensity changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector is insufficient when the camera underexposes the beacon at low light intensity, and saturated when the camera overexposes the beacon at high light intensity. The accuracy of the centroid algorithm degrades if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced markedly. To maintain accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. The algorithm of an adaptive-exposure technique for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera for a space-based laser communication system is described, which uses the adaptive-exposure algorithm to adapt the exposure time. Test results from the imaging experiment system verify the design. The experimental results prove that this design can restrain the reduction of positioning accuracy caused by changes in light intensity, so the camera can maintain stable and high positioning accuracy during communication.
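
    The centroid algorithm referred to above is the intensity-weighted first moment of the spot image; a minimal sketch follows, with a crude over/under-exposure check of the kind an adaptive-exposure loop could act on (the saturation level and thresholds are illustrative assumptions).

```python
import numpy as np

def spot_centroid(roi, sat_level=4095, low_fraction=0.05):
    """Intensity-weighted centroid (cx, cy) of a beacon spot in a small ROI,
    plus flags telling an adaptive-exposure controller whether the next
    exposure time should be shortened or lengthened."""
    roi = roi.astype(np.float64)
    total = roi.sum() + 1e-12                 # guard against an empty ROI
    ys, xs = np.indices(roi.shape)
    cx = (xs * roi).sum() / total
    cy = (ys * roi).sum() / total
    overexposed = bool((roi >= sat_level).any())
    underexposed = bool(roi.max() < low_fraction * sat_level)
    return cx, cy, overexposed, underexposed
```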

  18. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    PubMed

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.

  19. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  20. Dual-head gamma camera system for intraoperative localization of radioactive seeds

    NASA Astrophysics Data System (ADS)

    Arsenali, B.; de Jong, H. W. A. M.; Viergever, M. A.; Dickerscheid, D. B. M.; Beijst, C.; Gilhuijs, K. G. A.

    2015-10-01

    Breast-conserving surgery is a standard option for the treatment of patients with early-stage breast cancer. This form of surgery may result in incomplete excision of the tumor. Iodine-125 labeled titanium seeds are currently used in clinical practice to reduce the number of incomplete excisions. It seems likely that the number of incomplete excisions can be reduced even further if intraoperative information about the location of the radioactive seed is combined with preoperative information about the extent of the tumor. Such a combination is possible if the location of the radioactive seed is established in a world coordinate system that can be linked to the (preoperative) image coordinate system. With this in mind, we propose a radioactive seed localization system composed of two static ceiling-suspended gamma camera heads and two parallel-hole collimators. Physical experiments and computer simulations mimicking realistic clinical situations were performed to estimate the localization accuracy (defined as trueness and precision) of the proposed system with respect to collimator-source distance (ranging between 50 cm and 100 cm) and imaging time (ranging between 1 s and 10 s). The goal of the study was to determine whether or not a trueness of 5 mm can be achieved with a collimator-source distance of 50 cm and an imaging time of 5 s (these specifications were defined by a group of dedicated breast cancer surgeons). The results from the experiments indicate that the location of the radioactive seed can be established with an accuracy of 1.6 mm ± 0.6 mm for a collimator-source distance of 50 cm and an imaging time of 5 s (these experiments were performed with a 4.5 cm thick block phantom). Furthermore, the results from the simulations indicate that a trueness of 3.2 mm or less can be achieved for a collimator-source distance of 50 cm and an imaging time of 5 s (this trueness was achieved for all 14 breast phantoms used in this study). Based on these results we conclude that the proposed system can be a valuable tool for (real-time) intraoperative breast cancer localization.

  1. Simulating the Performance of Ground-Based Optical Asteroid Surveys

    NASA Astrophysics Data System (ADS)

    Christensen, Eric J.; Shelly, Frank C.; Gibbs, Alex R.; Grauer, Albert D.; Hill, Richard E.; Johnson, Jess A.; Kowalski, Richard A.; Larson, Stephen M.

    2014-11-01

    We are developing a set of asteroid survey simulation tools in order to estimate the capability of existing and planned ground-based optical surveys, and to test a variety of possible survey cadences and strategies. The survey simulator is composed of several layers, including a model population of solar system objects and an orbital integrator, a site-specific atmospheric model (including inputs for seeing, haze and seasonal cloud cover), a model telescope (with a complete optical path to estimate throughput), a model camera (including FOV, pixel scale, and focal plane fill factor) and model source extraction and moving object detection layers with tunable detection requirements. We have also developed a flexible survey cadence planning tool to automatically generate nightly survey plans. Inputs to the cadence planner include camera properties (FOV, readout time), telescope limits (horizon, declination, hour angle, lunar and zenithal avoidance), preferred and restricted survey regions in RA/Dec, ecliptic, and Galactic coordinate systems, and recent coverage by other asteroid surveys. Simulated surveys are created for a subset of current and previous NEO surveys (LINEAR, Pan-STARRS and the three Catalina Sky Survey telescopes), and compared against the actual performance of these surveys in order to validate the model’s performance. The simulator tracks objects within the FOV of any pointing that were not discovered (e.g. too few observations, too trailed, focal plane array gaps, too fast or slow), thus dividing the population into “discoverable” and “discovered” subsets, to inform possible survey design changes. Ongoing and future work includes generating a realistic “known” subset of the model NEO population, running multiple independent simulated surveys in coordinated and uncoordinated modes, and testing various cadences to find optimal strategies for detecting NEO sub-populations. These tools can also assist in quantifying the efficiency of novel yet unverified survey cadences (e.g. the baseline LSST cadence) that sparsely spread the observations required for detection over several days or weeks.

  2. End-user acceptance of a cloud-based teledentistry system and Android phone app for remote screening for oral diseases.

    PubMed

    Estai, Mohamed; Kanagasingam, Yogesan; Xiao, Di; Vignarajan, Janardhan; Bunt, Stuart; Kruger, Estie; Tennant, Marc

    2017-01-01

    Objective This study aimed to evaluate users' acceptance of a teledentistry model utilizing a smartphone camera for dental caries screening and to identify a number of areas for improvement of the system. Methods A store-and-forward telemedicine platform "Remote-I" was developed to assist in the screening of oral diseases using an image acquisition Android app operated by 17 teledental assistants. A total of 485 images (five images per case) were transmitted directly from the Android app to the server. A panel of five dental practitioners (graders) assessed the images and reported their diagnoses. A user acceptance survey was sent to the graders and smartphone users following completion of the screening program. Results Of the 22 surveys sent out, 20 (91%) were completed. Generally, users showed optimism towards the use of the teledentistry system and gave strongly positive assessments of items on content and service quality. The majority of graders took less than 15 min to read the images, while phone users took 5-10 min to complete the dental photography using the Android app. This study identified a number of factors that are essential for improving the current system, such as optimization of smartphone camera features, the format of the server, the orientation of images, and the use of oral retractors during photography. Conclusions Users appear to be generally satisfied with the proposed teledentistry model. However, they have specific concerns to address, many of which could be resolved through more effective training, coordination between sites, and upgrading of the current system.

  3. Software solution for autonomous observations with H2RG detectors and SIDECAR ASICs for the RATIR camera

    NASA Astrophysics Data System (ADS)

    Klein, Christopher R.; Kubánek, Petr; Butler, Nathaniel R.; Fox, Ori D.; Kutyrev, Alexander S.; Rapchun, David A.; Bloom, Joshua S.; Farah, Alejandro; Gehrels, Neil; Georgiev, Leonid; González, J. Jesús; Lee, William H.; Lotkin, Gennadiy N.; Moseley, Samuel H.; Prochaska, J. Xavier; Ramirez-Ruiz, Enrico; Richer, Michael G.; Robinson, Frederick D.; Román-Zúñiga, Carlos; Samuel, Mathew V.; Sparr, Leroy M.; Tucker, Corey; Watson, Alan M.

    2012-07-01

    The Reionization And Transients InfraRed (RATIR) camera has been built for rapid Gamma-Ray Burst (GRB) followup and will provide quasi-simultaneous imaging in ugriZY JH. The optical component uses two 2048 × 2048 pixel Finger Lakes Imaging ProLine detectors, one optimized for the SDSS u, g, and r bands and one optimized for the SDSS i band. The infrared portion incorporates two 2048 × 2048 pixel Teledyne HgCdTe HAWAII-2RG detectors, one with a 1.7-micron cutoff and one with a 2.5-micron cutoff. The infrared detectors are controlled by Teledyne's SIDECAR (System for Image Digitization Enhancement Control And Retrieval) ASICs (Application Specific Integrated Circuits). While other ground-based systems have used the SIDECAR before, this system also utilizes Teledyne's JADE2 (JWST ASIC Drive Electronics) interface card and IDE (Integrated Development Environment). Here we present a summary of the software developed to interface the RATIR detectors with Remote Telescope System, 2nd Version (RTS2) software. RTS2 is an integrated open source package for remote observatory control under the Linux operating system and will autonomously coordinate observatory dome, telescope pointing, detector, filter wheel, focus stage, and dewar vacuum compressor operations. Where necessary we have developed custom interfaces between RTS2 and RATIR hardware, most notably for cryogenic focus stage motor drivers and temperature controllers. All detector and hardware interface software developed for RATIR is freely available and open source as part of the RTS2 distribution.

  4. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed from sets of ground control points in 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element in the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
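
    The depth-dependent homography lookup table can be illustrated with OpenCV: one homography is fitted per set of control points at a known working distance, and at run time the entry closest to the measured ToF depth is applied. Function and variable names below are illustrative; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np
import cv2

def build_hlut(control_point_sets):
    """control_point_sets: iterable of (depth_m, tof_pts Nx2, rgb_pts Nx2).
    Returns a list of (depth_m, 3x3 homography) sorted by working distance."""
    hlut = []
    for depth, tof_pts, rgb_pts in control_point_sets:
        H, _ = cv2.findHomography(np.float32(tof_pts), np.float32(rgb_pts),
                                  cv2.RANSAC)
        hlut.append((depth, H))
    return sorted(hlut, key=lambda entry: entry[0])

def map_tof_pixel(hlut, pixel, depth):
    """Map a ToF pixel to RGB image coordinates using the LUT entry whose
    working distance is closest to the measured depth."""
    _, H = min(hlut, key=lambda entry: abs(entry[0] - depth))
    q = H @ np.array([pixel[0], pixel[1], 1.0])
    return q[:2] / q[2]
```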

  5. GeoTrack: bio-inspired global video tracking by networks of unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    Barooah, Prabir; Collins, Gaemus E.; Hespanha, João P.

    2009-05-01

    Research from the Institute for Collaborative Biotechnologies (ICB) at the University of California at Santa Barbara (UCSB) has identified swarming algorithms used by flocks of birds and schools of fish that enable these animals to move in tight formation and cooperatively track prey with minimal estimation errors, while relying solely on local communication between the animals. This paper describes ongoing work by UCSB, the University of Florida (UF), and the Toyon Research Corporation on the utilization of these algorithms to dramatically improve the capabilities of small unmanned aircraft systems (UAS) to cooperatively locate and track ground targets. Our goal is to construct an electronic system, called GeoTrack, through which a network of hand-launched UAS use dedicated on-board processors to perform multi-sensor data fusion. The nominal sensors employed by the system will be EO/IR video cameras on the UAS. When GMTI or other wide-area sensors are available, as in a layered sensing architecture, data from the standoff sensors will also be fused into the GeoTrack system. The output of the system will be position and orientation information on stationary or mobile targets in a global geo-stationary coordinate system. The design of the GeoTrack system requires significant advances beyond the current state-of-the-art in distributed control for a swarm of UAS to accomplish autonomous coordinated tracking; target geo-location using distributed sensor fusion by a network of UAS, communicating over an unreliable channel; and unsupervised real-time image-plane video tracking in low-powered computing platforms.

  6. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  7. Radio Frequency Identification (RFID) and communication technologies for solid waste bin and truck monitoring system.

    PubMed

    Hannan, M A; Arebey, Maher; Begum, R A; Basri, Hassan

    2011-12-01

    This paper deals with the integration of Radio Frequency Identification (RFID) and communication technologies for a solid waste bin and truck monitoring system. RFID, GPS, GPRS, and GIS technologies, along with cameras, have been integrated to develop the bin and truck intelligent monitoring system. A new kind of integrated theoretical framework, hardware architecture, and interface algorithm has been introduced to link the technologies for the successful implementation of the proposed system. In this system, the bin and truck databases have been developed in such a way that information on bin and truck ID, date and time of waste collection, bin status, amount of waste, and bin and truck GPS coordinates is compiled and stored for monitoring and management activities. The results showed that real-time image processing, histogram analysis, waste estimation, and other bin information are displayed in the GUI of the monitoring system. The real-time tests and experimental results showed that the performance of the developed system was stable and satisfied the requirements of the monitoring system with high practicability and validity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  9. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    PubMed

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D . Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.

  11. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
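
    For reference, a minimal OpenCV calibration loop of the kind such a system automates is sketched below; the chessboard size, square size and image source are assumptions, not details taken from the paper.

```python
import numpy as np
import cv2

def calibrate_from_images(image_paths, board=(9, 6), square=0.025):
    """Estimate camera intrinsics and distortion from chessboard images."""
    # 3D object points of the chessboard corners in the board's own frame.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    # rms is the reprojection error, K the intrinsic matrix, dist the
    # distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return rms, K, dist
```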

  12. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, but these values improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.

  13. Beyond leaf color: Comparing camera-based phenological metrics with leaf biochemical, biophysical, and spectral properties throughout the growing season of a temperate deciduous forest

    NASA Astrophysics Data System (ADS)

    Yang, Xi; Tang, Jianwu; Mustard, John F.

    2014-03-01

    Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties are elusive. We measured key leaf properties, including chlorophyll concentration and leaf reflectance, on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found that there was a mismatch between the camera-based phenological metric for canopy greenness (the green chromatic coordinate, gcc) and the total chlorophyll and carotenoid concentration and leaf mass per area during late spring/early summer. The seasonal peak of gcc occurs approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as ground validation for satellites. Using this high-temporal-resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths and potential uncertainties of using canopy color as a proxy for ecosystem functioning.
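
    The green chromatic coordinate (gcc) used as the camera greenness metric is conventionally computed as the green digital number divided by the sum of the three channels over a canopy region of interest; a minimal sketch:

```python
import numpy as np

def green_chromatic_coordinate(rgb_roi):
    """gcc = G / (R + G + B), averaged over a canopy region of interest.
    rgb_roi is an HxWx3 array of camera digital numbers."""
    roi = rgb_roi.astype(np.float64)
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    gcc = g / (r + g + b + 1e-9)              # small epsilon avoids 0/0
    return float(np.mean(gcc))
```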

  14. STS-61A earth observations

    NASA Image and Video Library

    1985-11-02

    61A-43-029 (2 Nov 1985) --- This view, photographed from the Earth orbiting Challenger, features a vertical view of the Okavango Swamp in Botswana. Center coordinates are 19.0 degrees south latitude and 22.5 degrees east longitude. The Challenger was flying at an altitude of 177 nautical miles when the photo was taken with a 70mm handheld Hasselblad camera.

  15. Microgravity

    NASA Image and Video Library

    2001-04-25

    The annual conference for the Educator Resource Center Network (ERCN) Coordinators was held at Glenn Research Center at Lewis Field in Cleveland, Ohio. The conference included participants from NASA's Educator Resource Centers located throughout the country. The Microgravity Science Division at Glenn sponsored a Microgravity Day for all the conference participants. This image is from a digital still camera; higher resolution is not available.

  16. Evaluation of camera-based systems to reduce transit bus side collisions : phase II.

    DOT National Transportation Integrated Search

    2012-12-01

    The sideview camera system has been shown to eliminate blind zones by providing a view to the driver in real time. In order to provide the best integration of these systems, an integrated camera-mirror system (hybrid system) was developed and tes...

  17. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  18. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  19. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the differences between the two techniques while varying key parameters such as the pixel-to-microlens ratio (PMR), the light-field-camera-to-Tomo-camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with the single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  20. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  1. Time Lapse Camera Images for Observation and Visualization of Snow Lines on two Benchmark Glaciers in High Asia - Zhadang and Halji

    NASA Astrophysics Data System (ADS)

    Schröter, B.; Buchroithner, M. F.; Pieczonka, T.

    2015-12-01

    Glaciers are characteristic elements of high mountain environments and represent key indicators of ongoing climate change. The covering snowpack considerably affects the glacier-ice surface temperature and thus the meltdown of the glaciers, which in recent decades has been accelerating worldwide. The detailed investigation of the transient snow line is therefore of high importance. Zhadang Glacier is located in the Nyainqentanglha Mountain Range on the central part of the Tibetan Plateau (30°28.24' N, 90°38.69' E). The glacier is debris-free and flows from 6,090 to 5,515 m a.s.l. Recent measurements have shown that the whole glacier is below the ELA and experiences significant volume loss. In May 2010 two terrestrial cameras were installed there and operated continuously until September 2012, generating 6,225 images of the glacier at a frequency of three or six images per day, respectively. In order to use this dataset for snow line mapping, all images had to be georeferenced and orthorectified. The biggest challenge was the shifting camera positions caused by deformation of the ground and the resulting offsets in image coordinates. This was resolved by combining the manual orthorectification of one image per week with a subsequent spline interpolation to determine the changed image coordinates. The actual orthorectification was finally realized by fully automated batch processing of all images. The most favorable image of each day was chosen for the manual snow line mapping. The final aim, achieved by intersecting the mapped snow lines with resampled SRTM-3 data, was the calculation of the mean snow line elevation for every day of the data collection period; considering that there were several weeks either with full snow cover or without any snow, this aim could be met. The findings have been used to evaluate a glacier mass balance model developed at RWTH Aachen, Germany, showing a high level of agreement. Another result is the proof of intense ablation due to snow drift and sublimation during the winter months. In 2014 a similar camera system was installed near Halji Glacier in northwestern Nepal on the southern edge of the Tibetan Plateau (30°15.80' N, 81°28.16' E; 5,730 - 5,270 m a.s.l.; ELA = 5,660 m a.s.l.). The images are currently being processed.

  2. How well do you know The Earth, Asteroids, Comets?

    NASA Astrophysics Data System (ADS)

    Alexander, C. J.; Lopez, J.; Barkus, R. C.; Angrum, A.

    2011-12-01

    The U.S. Rosetta Project is the NASA contribution to the International Rosetta Mission, an ESA cornerstone mission. The mission will arrive at its target, comet 67P/Churyumov-Gerasimenko, in 2014, and escort the comet around the Sun for the ensuing 17 months. Along the way, the mission has encountered two asteroids, (21) Lutetia and (2867) Steins, and enjoyed gravity assists at Mars and several at the Earth. The challenge for outreach coordinators is to convey the excitement of the mission, the manner in which its experiments gather and interpret data, and the context for those interpretations, all in an engaging fashion and without heavy use of camera images, since the camera is not among the NASA-contributed instruments. In other words, because the US Rosetta Project expects to present data and results from an ultraviolet spectrometer, a plasma instrument, and a microwave spectrometer, outreach faces the special challenge of engaging the public with data that are, at least visually, less accessible than camera images. The project has turned to online games, interactive simulations, animated cartoons, and virtual labs to provide a visually stimulating way to explain: how scientists determine the age of asteroids; a timeline of Earth's geologic history with which to understand the relative position of the ages of any given asteroid to some event on Earth; a model of the solar system extending from our Sun to the next star, to convey the spatial distances covered by comets in their journey around the Sun; and a model of early solar system evolution with which to understand the possible evolutionary scenarios that comets can help us sort out. In this paper we will present these simulations, even if some of them remain in the beta-development phase at the time of the meeting itself. Work at the Jet Propulsion Laboratory, California Institute of Technology, was supported by NASA. Rosetta is a joint collaboration between NASA and the European Space Agency.

  3. Advanced imaging system

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document describes the Advanced Imaging System CCD based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as a part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed and complete schematics and source code listings are provided.

  4. Leaf development and demography explain photosynthetic seasonality in Amazon evergreen forests

    USGS Publications Warehouse

    Wu, Jin; Albert, Lauren; Lopes, Aline; Restrepo-Coupe, Natalia; Hayek, Matthew; Wiedemann, Kenia T.; Guan, Kaiyu; Stark, Scott C.; Christoffersen, Bradley; Prohaska, Neill; Tavares, Julia V.; Marostica, Suelen; Kobayashi, Hideki; Ferreira, Maurocio L.; Campos, Kleber Silva; da Silva, Rodrigo; Brando, Paulo M.; Dye, Dennis G.; Huxman, Travis E.; Huete, Alfredo; Nelson, Bruce; Saleska, Scott

    2016-01-01

    In evergreen tropical forests, the extent, magnitude, and controls on photosynthetic seasonality are poorly resolved and inadequately represented in Earth system models. Combining camera observations with ecosystem carbon dioxide fluxes at forests across rainfall gradients in Amazônia, we show that aggregate canopy phenology, not seasonality of climate drivers, is the primary cause of photosynthetic seasonality in these forests. Specifically, synchronization of new leaf growth with dry season litterfall shifts canopy composition toward younger, more light-use efficient leaves, explaining large seasonal increases (~27%) in ecosystem photosynthesis. Coordinated leaf development and demography thus reconcile seemingly disparate observations at different scales and indicate that accounting for leaf-level phenology is critical for accurately simulating ecosystem-scale responses to climate change.

  5. Leaf ontogeny and demography explain photosynthetic seasonality in Amazon evergreen forests

    NASA Astrophysics Data System (ADS)

    Wu, J.; Albert, L.; Lopes, A. P.; Restrepo-Coupe, N.; Hayek, M.; Wiedemann, K. T.; Guan, K.; Stark, S. C.; Prohaska, N.; Tavares, J. V.; Marostica, S. F.; Kobayashi, H.; Ferreira, M. L.; Campos, K.; Silva, R. D.; Brando, P. M.; Dye, D. G.; Huxman, T. E.; Huete, A. R.; Nelson, B. W.; Saleska, S. R.

    2015-12-01

    Photosynthetic seasonality couples the evolutionary ecology of plant leaves to large-scale rhythms of carbon and water exchanges that are important feedbacks to climate. However, the extent, magnitude, and controls on photosynthetic seasonality of carbon-rich tropical forests are poorly resolved, controversial in the remote sensing literature, and inadequately represented in most earth system models. Here we show that ecosystem-scale phenology (measured by photosynthetic capacity), rather than environmental seasonality, is the primary driver of photosynthetic seasonality at four Amazon evergreen forests spanning gradients in rainfall seasonality, forest composition, and flux seasonality. We further demonstrate that leaf ontogeny and demography explain most of this ecosystem phenology at two central Amazon evergreen forests, using a simple leaf-cohort canopy model that integrates eddy covariance-derived CO2 fluxes, novel near-surface camera-detected leaf phenology, and ground observations of litterfall and leaf physiology. The coordination of new leaf growth and old leaf divestment (litterfall) during the dry season shifts canopy composition towards younger leaves with higher photosynthetic efficiency, driving large seasonal increases (~27%) in ecosystem photosynthetic capacity. Leaf ontogeny and demography thus reconcile disparate observations of forest seasonality from leaves to eddy flux towers to satellites. Strategic incorporation of such whole-plant coordination processes as phenology and ontogeny will improve ecological, evolutionary and earth system theories describing tropical forest structure and function, allowing more accurate representation of forest dynamics and feedbacks to climate in earth system models.

  6. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, the interior simulator, in which a virtual (CG) object can be overlaid onto a real-world scene. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, so that they can easily design the living-room interior without placing real furniture and articles, viewing it from many different locations and orientations in real time. In our system, two base images of the real-world space are captured from two different views to define a projective coordinate frame for the object's 3D space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real-world space is captured by a hand-held camera while non-metric feature points are tracked for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living-room scene at nearly video rate (20 frames per second).
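
    The registration step described above (tracking feature points between views and carrying a projective placement into each new frame) can be sketched with a feature-matching homography estimate. The version below assumes an approximately planar placement region, and all function and variable names are illustrative rather than taken from the paper.

```python
# Sketch: carry a planar (projective) registration from a reference frame to
# the current frame by matching features and estimating a homography.
import cv2
import numpy as np

def overlay_virtual_object(ref_bgr, cur_bgr, virtual_bgr, corners_in_ref):
    """corners_in_ref: (4,2) float32, corners where the virtual object was
    registered in the reference image (clockwise from top-left)."""
    ref_gray = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # reference -> current
    if H is None:
        return cur_bgr.copy()

    # Place the virtual object at its registered corners, then carry that
    # placement into the current frame with the estimated homography.
    h_v, w_v = virtual_bgr.shape[:2]
    obj_corners = np.float32([[0, 0], [w_v, 0], [w_v, h_v], [0, h_v]])
    place = cv2.getPerspectiveTransform(obj_corners, corners_in_ref.astype(np.float32))
    warped = cv2.warpPerspective(virtual_bgr, H @ place,
                                 (cur_bgr.shape[1], cur_bgr.shape[0]))
    mask = warped.sum(axis=2) > 0
    out = cur_bgr.copy()
    out[mask] = warped[mask]
    return out
```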

  7. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality is a next-generation technology for intelligently visualising the 3D real world. The technology is expanding at a fast pace, upgrading the smart phone to an intelligent device. The research problem identified and addressed in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image captured by the mobile phone. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensities. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby forming a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and renders the dimensions on top of the image. This paper describes all steps of the proposed method, uses an experimental setup to mimic the mobile phone and server system, and presents some initial but encouraging results.
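
    The two geometric steps described (projecting LiDAR points into the mobile image through the interior and exterior orientation, then measuring between points picked on the image) reduce, in the simplest pinhole form, to the sketch below; the matrices, point formats and nearest-projection lookup are assumptions, not the authors' implementation.

```python
# Sketch: project LiDAR points into a mobile-camera image with a pinhole model,
# then report the 3D distance between the points behind two selected pixels.
# K (interior orientation) and R, t (exterior orientation) are assumed known.
import numpy as np

def project_points(points_xyz, K, R, t):
    """points_xyz: (N,3) LiDAR points in the world frame -> (M,2) pixel coords
    plus the indices of the projected (in-front-of-camera) points."""
    cam = R @ points_xyz.T + t.reshape(3, 1)        # world -> camera frame
    in_front = cam[2] > 0                           # keep points ahead of the camera
    uvw = K @ cam[:, in_front]
    uv = (uvw[:2] / uvw[2]).T                       # perspective division
    return uv, np.where(in_front)[0]

def dimension_between_pixels(pix_a, pix_b, uv, idx, points_xyz):
    """Find the LiDAR points whose projections are nearest to two picked pixels
    and return the Euclidean distance between them (in LiDAR units)."""
    ia = idx[np.argmin(np.linalg.norm(uv - np.asarray(pix_a), axis=1))]
    ib = idx[np.argmin(np.linalg.norm(uv - np.asarray(pix_b), axis=1))]
    return np.linalg.norm(points_xyz[ia] - points_xyz[ib])
```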

  8. Video systems for real-time oil-spill detection

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.

    1973-01-01

    Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint currently limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offer potential for color 'flagging' of oil on water.

  9. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for processing and remembering multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate system, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, locations of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of them from a spatial perspective (e.g., where the sensory information is coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine/robot degrees of freedom, the desired movements and actions can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with an arm/hand-like structure, or a robot with some or all of the above capabilities. We describe the approach and system and present preliminary results on a real robotic platform.
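
    A minimal sketch of converting a position between levels of such a hierarchy (for example, a camera-detected target re-expressed in a body-centered frame through a chain of homogeneous transforms) is shown below; the frame names, offsets and angles are hypothetical, not the paper's representations.

```python
# Sketch: convert a 3D target seen in a camera (head-mounted) frame into a
# body-centered frame via a chain of homogeneous transforms.
import numpy as np

def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical geometry: camera sits 0.10 m in front of the head origin;
# the head is panned 30 degrees and sits 0.40 m above the body origin.
T_head_cam = make_transform(np.eye(3), [0.10, 0.0, 0.0])
T_body_head = make_transform(rot_z(np.radians(30)), [0.0, 0.0, 0.40])

target_cam = np.array([0.5, 0.2, 1.5, 1.0])           # homogeneous point, camera frame
target_body = T_body_head @ T_head_cam @ target_cam   # same point, body frame
print(target_body[:3])
```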

  10. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The normal AEC and AGC algorithm is not suitable for the aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output so that the image is better suited for viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe and complex environments.
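
    The abstract does not give the control law, but a loop of the kind it describes, choosing the electronic shutter from scene brightness while capping exposure to limit motion blur at the given ground speed and covering the remainder with gain, might look as follows; all targets and limits are assumed values.

```python
# Sketch: automatic exposure/gain selection for a fast-moving aerial camera.
# Exposure is capped so that image motion during integration stays below a
# blur budget; any remaining brightness deficit is covered by analog gain.
def auto_exposure_gain(mean_brightness, ground_speed_mps, gsd_m,
                       target_brightness=128.0, current_exposure_s=1e-3,
                       max_gain=8.0, blur_budget_pixels=0.5):
    # Exposure that would hit the target brightness at unity gain.
    desired_exposure = current_exposure_s * target_brightness / max(mean_brightness, 1.0)
    # Cap exposure so that speed * exposure / GSD <= blur budget (in pixels).
    max_exposure = blur_budget_pixels * gsd_m / max(ground_speed_mps, 1e-6)
    exposure = min(desired_exposure, max_exposure)
    # Whatever brightness the exposure cap costs is recovered with gain.
    gain = min(max_gain, desired_exposure / exposure)
    return exposure, gain

# Example: dim scene, fast aircraft -> short exposure, higher gain.
print(auto_exposure_gain(mean_brightness=40, ground_speed_mps=80, gsd_m=0.1))
```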

  11. Overview of a Hybrid Underwater Camera System

    DTIC Science & Technology

    2014-07-01

    meters), in increments of 200 ps. The camera is also equipped with a 6:1 motorized zoom lens, and a precision miniature attitude and heading reference system (AHRS) ... [Figure (b): LUCIE sub-systems, including the control and power distribution system, AHRS, pulsed laser, gated camera, and sonar transducer.]

  12. [Microeconomics of introduction of a PET system based on the revised Japanese National Insurance reimbursement system].

    PubMed

    Abe, Katsumi; Kosuda, Shigeru; Kusano, Shoichi; Nagata, Masayoshi

    2003-11-01

    It is crucial to evaluate the annual balance beforehand when an institution installs a PET system, because the revised Japanese national insurance reimbursement system set the cost of an FDG PET study at 75,000 yen. Break-even points were calculated for an 8-hour and a 24-hour operation of a PET system, based on the total costs reported. The break-even points were 13.4, 17.7, and 22.1 studies per day for the 1 cyclotron-1 PET camera, 1 cyclotron-2 PET cameras, and 1 cyclotron-3 PET cameras systems, respectively, in an ordinary PET system operation of 8 hours. The break-even points were 19.9, 25.5, and 31.2 studies per day for the same configurations in a full PET system operation of 24 hours. The results indicate that no profit would accrue in an ordinary PET system operation of 8 hours. The annual profit and the break-even point for the total cost, including the initial investment, would be 530 million yen and 2.8 years, respectively, in a 24-hour operation with the 1 cyclotron-3 PET cameras system.
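
    The reported break-even points follow from the usual relation of fixed daily cost to the margin per study (reimbursement minus variable cost); the sketch below uses placeholder cost figures, not the values from the paper.

```python
# Sketch: break-even studies per day for a PET installation.
# All cost figures below are hypothetical placeholders.
def break_even_studies_per_day(fixed_cost_per_day_yen, variable_cost_per_study_yen,
                               reimbursement_per_study_yen=75_000):
    margin = reimbursement_per_study_yen - variable_cost_per_study_yen
    return fixed_cost_per_day_yen / margin

# e.g., an assumed 600,000 yen/day fixed cost and 30,000 yen variable cost/study
print(break_even_studies_per_day(600_000, 30_000))   # ~13.3 studies/day
```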

  13. Positioning in Time and Space - Cost-Effective Exterior Orientation for Airborne Archaeological Photographs

    NASA Astrophysics Data System (ADS)

    Verhoeven, G.; Wieser, M.; Briese, C.; Doneus, M.

    2013-07-01

    Since manned, airborne aerial reconnaissance for archaeological purposes is often characterised by more-or-less random photographing of archaeological features on the Earth, the exact position and orientation of the camera during image acquisition become very important for an effective inventorying and interpretation workflow for these aerial photographs. Although positioning is generally achieved by simultaneously logging the flight path or directly recording the camera's position with a GNSS receiver, this approach does not allow the necessary roll, pitch and yaw angles of the camera to be recorded. The latter are essential elements of the complete exterior orientation of the camera, which, together with the inner orientation of the camera, accurately defines the portion of the Earth recorded in the photograph. This paper proposes a cost-effective, accurate and precise GNSS/IMU solution (image position: 2.5 m and orientation: 2°, both at 1σ) to record all essential exterior orientation parameters for the direct georeferencing of the images. After the introduction of the utilised hardware, this paper presents the developed software that allows these parameters to be recorded and estimated. Furthermore, this direct georeferencing information can be embedded into the image's metadata. Subsequently, the first results of the estimation of the mounting calibration (i.e. the misalignment between the camera and the GNSS/IMU coordinate frame) are provided. Furthermore, a comparison with a dedicated commercial photographic GNSS/IMU solution demonstrates the superiority of the introduced solution. Finally, an outlook on future tests and improvements finalises this article.
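
    To make the role of the roll, pitch and yaw angles concrete, the sketch below composes a camera rotation matrix from them and intersects one image ray with a flat ground plane; the angle convention and flat-terrain assumption are simplifications, not the authors' processing chain.

```python
# Sketch: direct georeferencing of one pixel with a pinhole camera whose
# exterior orientation (position + roll/pitch/yaw) is known.  A flat ground
# plane at height z_ground is assumed; the angle convention is illustrative.
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx        # world <- camera (one common convention)

def georeference_pixel(u, v, K, camera_pos, roll, pitch, yaw, z_ground=0.0):
    camera_pos = np.asarray(camera_pos, dtype=float)
    R = rotation_from_rpy(roll, pitch, yaw)
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
    ray_world = R @ ray_cam
    s = (z_ground - camera_pos[2]) / ray_world[2]        # scale factor to hit the ground
    return camera_pos + s * ray_world
```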

  14. A new FOD recognition algorithm based on multi-source information fusion and experiment analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Xiao, Gang

    2011-08-01

    Foreign Object Debris (FOD) is any substance, debris or article alien to an aircraft or system that could potentially cause serious damage when it appears on an airport runway. Because of the airport's complex environment, quick and precise detection of FOD targets on the runway is an important protection for aircraft safety. A multi-sensor system including millimeter-wave radar and infrared (IR) image sensors is introduced, and a new FOD detection and recognition algorithm based on inherent features of FOD is proposed in this paper. First, the FOD's location and coordinates are obtained accurately by the millimeter-wave radar, and the IR camera then captures target images and background images at those coordinates. Second, the runway's edges, which appear as straight lines in the IR image, are extracted using the Hough transform, so that the potential target region, i.e. the runway region, can be segmented from the whole image. Third, background subtraction is used to localize the FOD target within the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used for target classification. The experimental results show that this algorithm effectively reduces computational complexity, satisfies the real-time requirement and achieves high detection and recognition probability.
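
    The two image-processing stages named above, Hough-transform extraction of the runway edges and background subtraction within the runway region, can be sketched with OpenCV as follows; the thresholds are illustrative and not taken from the paper.

```python
# Sketch: extract straight runway edges with a Hough transform and localize
# FOD candidates by background subtraction.  Thresholds are illustrative.
import cv2
import numpy as np

def runway_edges(ir_image):
    edges = cv2.Canny(ir_image, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    return lines  # candidate straight runway borders

def detect_fod(target_image, background_image, min_area=20):
    diff = cv2.absdiff(target_image, background_image)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```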

  15. Design of a radiation facility for very small specimens used in radiobiology studies

    NASA Astrophysics Data System (ADS)

    Rodriguez, Manuel; Jeraj, Robert

    2008-06-01

    A design of a radiation facility for very small specimens used in radiobiology is presented. This micro-irradiator has been primarily designed to irradiate partial bodies of zebrafish embryos 3-4 mm in length. A miniature x-ray source producing a 50 kV photon beam is used as the radiation source. The source is inserted in a cylindrical brass collimator that has a 1.0 mm diameter pinhole along the central axis to produce a pencil photon beam. The collimator with the source is attached underneath a computer-controlled movable table which holds the specimens. Using a 45° tilted mirror, a digital camera connected to the computer takes pictures of the specimen and the pinhole collimator. From the image provided by the camera, the relative distance from the specimen to the pinhole axis is calculated, and coordinates are sent to the movable table to properly position the samples in the beam path. Owing to its monitoring system, the characteristics of the radiation beam, the accuracy and precision of specimen positioning, and the automatic image-based specimen recognition, this radiation facility is a suitable tool for irradiating partial bodies of zebrafish embryos, cell cultures or any other small specimens used in radiobiology research.

  16. Post-Flight Estimation of Motion of Space Structures: Part 1

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Breckenridge, William

    2008-01-01

    A computer program estimates the relative positions and orientations of two space structures from data on the angular positions and distances of fiducial objects on one structure as measured by a target tracking electronic camera and laser range finders on another structure. The program is written specifically for determining the relative alignments of two antennas, connected by a long truss, deployed in outer space from a space shuttle. The program is based partly on transformations among the various coordinate systems involved in the measurements and on a nonlinear mathematical model of vibrations of the truss. The program implements a Kalman filter that blends the measurement data with data from the model. Using time series of measurement data from the tracking camera and range finders, the program generates time series of data on the relative position and orientation of the antennas. A similar program described in a prior NASA Tech Briefs article was used onboard for monitoring the structures during flight. The present program is more precise and designed for use on Earth in post-flight processing of the measurement data to enable correction, for antenna motions, of scientific data acquired by use of the antennas.
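
    A one-dimensional constant-velocity analogue of the measurement/model blend performed by such a Kalman filter is sketched below; it is a toy stand-in, not the program's truss-vibration model.

```python
# Sketch: a 1-D constant-velocity Kalman filter blending noisy position
# measurements with a motion model -- a toy analogue of fusing tracker data
# with a structural-dynamics model, not the actual flight software.
import numpy as np

def kalman_1d(measurements, dt=0.1, meas_var=1e-4, accel_var=1e-2):
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # state transition
    H = np.array([[1.0, 0.0]])                            # only position is measured
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])        # process noise
    R = np.array([[meas_var]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                                          # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x                        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
        x = x + K @ y                                      # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```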

  17. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested using the traditional infrared camera test system and the visible CCD test system separately, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the change in the focal position of the collimator when the environmental temperature changes, and the image quality of the large-field-of-view collimator and the test accuracy are thereby improved. Its performance matches that of foreign counterparts at a much lower cost, and it has good market prospects.
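
    Multiple-frame averaging suppresses zero-mean random noise roughly in proportion to the square root of the number of frames; a minimal version is shown below.

```python
# Sketch: average N frames to suppress random noise (std drops ~ 1/sqrt(N)).
import numpy as np

def average_frames(frames):
    """frames: iterable of 2-D arrays of identical shape (raw frames)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```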

  18. Assessment of the DoD Embedded Media Program

    DTIC Science & Technology

    2004-09-01

    Weapons Systems Video, Gun Camera Video, and Lipstick Cameras: a SECDEF and CJCS message to commanders stated, "Put in place mechanisms and processes ... of public communication activities." The 10 February 2003 Public Affairs Guidance (PAG) stated, "Use of lipstick and helmet-mounted cameras on combat sorties is approved

  19. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box, and uses our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  20. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    NASA Astrophysics Data System (ADS)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

    An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype has been assembled using a mechanical eye device, beam splitter, illumination system, lenses, mirrors, mirrored prism, movable mirror, wavefront sensor and CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel via the beam splitter to the optical system; some rays fall on the CCD camera while others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations are constructed using the optical design software Zemax. The computer-aided HS images for each case are acquired and processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to ensure that the optical system is built precisely enough to match the simulated system. Afterwards, a simulated version of the retinal image is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Certain personalized corrections are allowed by eye doctors based on different Zernike polynomial values, and the optical images are rendered with the new parameters. Optical images of how that eye would see with or without correction of certain aberrations are generated, in order to show which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.
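
    To illustrate how Zernike coefficients enter such a simulation, the sketch below composes a pupil wavefront from a few low-order terms and zeroes out selected ones to mimic a "corrected" rendering; the normalization convention and coefficient values are assumptions, not the authors' parameters.

```python
# Sketch: compose a pupil wavefront from a few low-order Zernike terms and
# remove selected ones to mimic "correcting" an aberration.  Polynomials are
# written out explicitly for a unit pupil; normalization conventions vary.
import numpy as np

def zernike_terms(rho, theta):
    return {
        "defocus":   np.sqrt(3) * (2 * rho**2 - 1),
        "astig_0":   np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "astig_45":  np.sqrt(6) * rho**2 * np.sin(2 * theta),
        "coma_x":    np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),
        "spherical": np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
    }

def wavefront(coeffs, n=256, corrected=()):
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    terms = zernike_terms(rho, theta)
    W = np.zeros_like(rho)
    for name, c in coeffs.items():
        if name not in corrected:               # "corrected" terms are removed
            W += c * terms[name]
    W[rho > 1] = np.nan                         # outside the pupil
    return W

# e.g. assumed coefficients (microns), then simulate correcting defocus only
W_before = wavefront({"defocus": 0.8, "astig_0": 0.3})
W_after = wavefront({"defocus": 0.8, "astig_0": 0.3}, corrected=("defocus",))
```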

  1. Using a motion capture system for spatial localization of EEG electrodes

    PubMed Central

    Reis, Pedro M. R.; Lochmann, Matthias

    2015-01-01

    Electroencephalography (EEG) is often used in source analysis studies, in which the locations of cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes at the scalp surface must be determined, otherwise errors in the source estimation will occur. Today, several methods for acquiring these positions exist, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, thus allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time effort for the study participants and investigators. PMID:25941468

  2. Simultaneous observations of traveling convection vortices: Ionosphere-thermosphere coupling: M-I-T COUPLING OF TCV

    DOE PAGES

    Kim, Hyomin; Lessard, Marc R.; Jones, Sarah L.; ...

    2017-03-11

    We present simultaneous observations of magnetosphere-ionosphere-thermosphere coupling over Svalbard during a traveling convection vortex (TCV) event. Various spaceborne and ground-based instruments made coordinated measurements, including magnetometers, particle detectors, an all-sky camera, European Incoherent Scatter (EISCAT) Svalbard Radar, Super Dual Auroral Radar Network (SuperDARN), and SCANning Doppler Imager (SCANDI). The instruments recorded TCVs associated with a sudden change in solar wind dynamic pressure. The data display typical features of TCVs including vortical ionospheric convection patterns seen by the ground magnetometers and SuperDARN radars and auroral precipitation near the cusp observed by the all-sky camera. Simultaneously, electron and ion temperature enhancements with corresponding density increases from soft precipitation are also observed by the EISCAT Svalbard Radar. The ground magnetometers also detected electromagnetic ion cyclotron waves at the approximate time of the TCV arrival. This implies that they were generated by a temperature anisotropy resulting from a compression on the dayside magnetosphere. SCANDI data show a divergence in thermospheric winds during the TCVs, presumably due to thermospheric heating associated with the current closure linked to a field-aligned current system generated by the TCVs. We conclude that solar wind pressure impulse-related transient phenomena can affect even the upper atmospheric dynamics via current systems established by a magnetosphere-ionosphere-thermosphere coupling process.

  3. Study of Lever-Arm Effect Using Embedded Photogrammetry and On-Board GPS Receiver on Uav for Metrological Mapping Purpose and Proposal of a Free Ground Measurements Calibration Procedure

    NASA Astrophysics Data System (ADS)

    Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.

    2016-03-01

    Nowadays, Unmanned Aerial Vehicle (UAV) on-board photogrammetry is experiencing significant growth, owing to the democratization of drone use in the civilian sector and to changes in the regulations governing the inclusion of UAVs in the airspace, which have become favourable to the development of professional activities. Fields of application of photogrammetry are diverse, for instance: architecture, geology, archaeology, mapping, industrial metrology, etc. Our research concerns the latter area. Vinci-Construction-Terrassement is a private company specialized in public earthworks that uses UAVs for metrology applications. This article deals with the maximum accuracy one can achieve with a coupled camera and GPS receiver system for direct georeferencing of Digital Surface Models (DSMs) without relying on Ground Control Point (GCP) measurements. The article focuses especially on the lever-arm calibration. The proposed calibration method consists of two steps: the first step involves the proper calibration of each sensor, i.e. determining the position of the optical center of the camera and of the GPS antenna phase center in a local coordinate system relative to the sensor. The second step concerns a 3D modelling of the UAV with its embedded sensors through a photogrammetric acquisition. Processing this acquisition allows the value of the lever-arm offset to be determined without using GCPs.
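
    The essence of applying such a lever-arm offset is to carry the GPS antenna phase-center position over to the camera's optical center through the body attitude; a minimal version, with assumed names and sign convention, is sketched below.

```python
# Sketch: apply a lever-arm offset between the GPS antenna phase center and
# the camera optical center.  lever_arm_body is assumed to be the vector from
# the camera optical center to the antenna phase center, in the body frame.
import numpy as np

def camera_center_from_gps(gps_position_world, R_world_from_body, lever_arm_body):
    """gps_position_world: (3,) antenna phase center in the mapping frame.
    R_world_from_body: (3,3) body attitude.  Returns the camera optical center."""
    return np.asarray(gps_position_world) - R_world_from_body @ np.asarray(lever_arm_body)
```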

  4. Optical profilometer using laser based conical triangulation for inspection of inner geometry of corroded pipes in cylindrical coordinates

    NASA Astrophysics Data System (ADS)

    Buschinelli, Pedro D. V.; Melo, João. Ricardo C.; Albertazzi, Armando; Santos, João. M. C.; Camerini, Claudio S.

    2013-04-01

    An axis-symmetrical optical laser triangulation system was developed by the authors to measure the inner geometry of long pipes used in the oil industry. It has a special optical configuration able to acquire shape information of the inner geometry of a pipe section from a single image frame. A collimated laser beam is pointed at the tip of a 45° conical mirror. The laser light is reflected in such a way that a radial light sheet is formed, which intercepts the inner surface and forms a bright laser line on a section of the inspected pipe. A camera acquires the image of the laser line through a wide-angle lens. An odometer-based triggering system is used to trigger the camera to acquire a set of equally spaced images at high speed while the device is moved along the pipe's axis. Image processing is done in real time (between image acquisitions) thanks to the use of parallel computing technology. The measured geometry is analyzed to identify corrosion damage. The measured geometry and results are graphically presented using virtual reality techniques and devices such as 3D glasses and head-mounted displays. The paper describes the measurement principles, calibration strategies and laboratory evaluation of the developed device, as well as a practical example of a corroded pipe used in an industrial gas production plant.
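
    Under the axis-symmetric geometry described, each detected laser-line pixel maps to a radius and angle about the device axis, with the axial position supplied by the odometer; a sketch of that conversion, assuming a previously calibrated image-radius-to-physical-radius polynomial, is given below. The calibration model and names are assumptions, not the authors' implementation.

```python
# Sketch: convert a detected laser-ring pixel into cylindrical coordinates.
# The image-radius -> physical-radius mapping is assumed to come from a prior
# calibration (here a simple polynomial); the model is illustrative only.
import numpy as np

def pixel_to_cylindrical(u, v, center_uv, radius_poly, z_axial):
    """u, v: pixel of the laser line; center_uv: image point of the sensor axis;
    radius_poly: np.poly1d mapping image radius [px] -> physical radius [mm];
    z_axial: axial position from the odometer trigger [mm]."""
    du, dv = u - center_uv[0], v - center_uv[1]
    r_img = np.hypot(du, dv)
    theta = np.arctan2(dv, du)
    return radius_poly(r_img), theta, z_axial

def cylindrical_to_cartesian(r, theta, z):
    return r * np.cos(theta), r * np.sin(theta), z
```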

  5. Simultaneous observations of traveling convection vortices: Ionosphere-thermosphere coupling: M-I-T COUPLING OF TCV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hyomin; Lessard, Marc R.; Jones, Sarah L.

    We present simultaneous observations of magnetosphere-ionosphere-thermosphere coupling over Svalbard during a traveling convection vortex (TCV) event. Various spaceborne and ground-based instruments made coordinated measurements, including magnetometers, particle detectors, an all-sky camera, European Incoherent Scatter (EISCAT) Svalbard Radar, Super Dual Auroral Radar Network (SuperDARN), and SCANning Doppler Imager (SCANDI). The instruments recorded TCVs associated with a sudden change in solar wind dynamic pressure. The data display typical features of TCVs including vortical ionospheric convection patterns seen by the ground magnetometers and SuperDARN radars and auroral precipitation near the cusp observed by the all-sky camera. Simultaneously, electron and ion temperature enhancements with corresponding density increases from soft precipitation are also observed by the EISCAT Svalbard Radar. The ground magnetometers also detected electromagnetic ion cyclotron waves at the approximate time of the TCV arrival. This implies that they were generated by a temperature anisotropy resulting from a compression on the dayside magnetosphere. SCANDI data show a divergence in thermospheric winds during the TCVs, presumably due to thermospheric heating associated with the current closure linked to a field-aligned current system generated by the TCVs. We conclude that solar wind pressure impulse-related transient phenomena can affect even the upper atmospheric dynamics via current systems established by a magnetosphere-ionosphere-thermosphere coupling process.

  6. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to the color-shading evaluation of a display and show that it achieves a color accuracy of ΔE < 0.01.
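
    For reference on the quoted ΔE figure, the colour error of a direct-XYZ measurement can be expressed in CIELAB ΔE*ab (CIE76) as sketched below; the D65 white point is an assumed choice, not necessarily the one used by the authors.

```python
# Sketch: CIE76 colour difference between a measured and a reference XYZ,
# via CIELAB.  The white point (D65, 2-degree observer) is an assumed choice.
import numpy as np

D65 = np.array([95.047, 100.0, 108.883])

def xyz_to_lab(xyz, white=D65):
    t = np.asarray(xyz, dtype=float) / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e76(xyz_measured, xyz_reference):
    return float(np.linalg.norm(xyz_to_lab(xyz_measured) - xyz_to_lab(xyz_reference)))
```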

  7. Energy-efficient lighting system for television

    DOEpatents

    Cawthorne, Duane C.

    1987-07-21

    A light control system for a television camera comprises an artificial light control system which is cooperative with an iris control system. This artificial light control system adjusts the power to lamps illuminating the camera viewing area to provide only sufficient artificial illumination necessary to provide a sufficient video signal when the camera iris is substantially open.

  8. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  9. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  10. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors make it possible to perform photometry in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics, namely read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU), for the commercial digital camera Canon 5D Mark III are presented. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D Mark III and the CCD ALTA E47 shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
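
    The electronic gain in e-/ADU is commonly estimated with the photon-transfer method from a pair of flat fields; a minimal, shot-noise-limited version (assuming bias-subtracted frames) is sketched below, although the paper does not state which method was used.

```python
# Sketch: photon-transfer estimate of electronic gain (e-/ADU) from two
# bias-subtracted flat-field frames of equal exposure.  In the shot-noise
# limited regime, gain ~= mean signal / signal variance.
import numpy as np

def gain_from_flat_pair(flat1, flat2):
    f1 = np.asarray(flat1, dtype=np.float64)
    f2 = np.asarray(flat2, dtype=np.float64)
    mean_signal = 0.5 * (f1.mean() + f2.mean())
    # Differencing removes fixed-pattern noise; Var(f1 - f2) = 2 * sigma^2.
    signal_var = np.var(f1 - f2) / 2.0
    return mean_signal / signal_var          # e- per ADU
```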

  11. Changes in kinematics and arm-leg coordination during a 100-m breaststroke swim.

    PubMed

    Oxford, Samuel W; James, Rob S; Price, Michael J; Payton, Carl J; Duncan, Michael J

    2017-08-01

    The purpose of this study was to compare arm-leg coordination and kinematics during 100 m breaststroke in 26 (8 female; 18 male) specialist breaststroke swimmers. Laps were recorded using three 50-Hz underwater cameras. Heart rate and blood lactate were measured pre- and post-swim. Arm-leg coordination was defined using coordination phases describing continuity between recovery and propulsive phases of upper and lower limbs: coordination phase 1 (time between end of leg kick and start of the arm pull phases); and coordination phase 2 (time between end of arm pull and start of leg kick phases). Duration of stroke phases, coordination phases, swim velocity, stroke length (SL), stroke rate (SR) and stroke index (SI) were analysed during the last three strokes of each lap that were unaffected by turning or finishing. Significant changes in velocity, SI and SL (P < 0.05) were found between laps. Both sexes showed significant increase (P < 0.05) in heart rate and blood lactate pre- to post-swim. Males had significantly (P < 0.01) faster swim velocities resulting from longer SLs (P = 0.016) with no difference in SR (P = 0.064). Sex differences in kinematic parameters can be explained by anthropometric differences providing males with increased propelling efficiency.

  12. Low-cost digital dynamic visualization system

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    1995-05-01

    High-speed photographic systems such as the image-rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All of these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Digital cameras are currently replacing conventional cameras to a certain extent for static experiments. Recently, there has been a lot of interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications to solid as well as fluid impact problems are presented.

  13. Speckle Interferometry at the Blanco and SOAR Telescopes in 2008 and 2009

    NASA Technical Reports Server (NTRS)

    Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.

    2010-01-01

    The results of speckle interferometric measurements of binary and multiple stars conducted in 2008 and 2009 at the Blanco and Southern Astrophysical Research (SOAR) 4 m telescopes in Chile are presented. A total of 1898 measurements of 1189 resolved pairs or sub-systems and 394 observations of 285 unresolved targets are listed. We resolved for the first time 48 new pairs, 21 of which are new sub-systems in close visual multiple stars. Typical internal measurement precision is 0.3 mas in both coordinates; typical companion detection capability is a magnitude difference of approximately 4.2 at 0.15 arcsec separation. These data were obtained with a new electron-multiplication CCD camera; data processing is described in detail, including estimation of magnitude difference, observational errors, detection limits, and analysis of artifacts. We comment on some newly discovered pairs and objects of special interest.

  14. A Space Station robot walker and its shared control software

    NASA Technical Reports Server (NTRS)

    Xu, Yangsheng; Brown, Ben; Aoki, Shigeru; Yoshida, Tetsuji

    1994-01-01

    In this paper, we first briefly review updates to the self-mobile space manipulator (SMSM) configuration and testbed. The new robot is capable of projecting cameras anywhere on the interior or exterior of Space Station Freedom (SSF), and will be an ideal tool for inspecting connectors, structures, and other facilities on SSF. Experiments have been performed under two gravity compensation systems and on a full-scale model of a segment of SSF. This paper presents a real-time shared control architecture that enables the robot to coordinate autonomous locomotion and teleoperation input for reliable walking on SSF. Autonomous locomotion can be executed based on a CAD model and off-line trajectory planning, or can be guided by a vision system with neural network identification. Teleoperation control can be specified through a real-time graphical interface and a free-flying hand controller. SMSM will be a valuable assistant for astronauts in inspection and other EVA missions.

  15. SPECKLE INTERFEROMETRY AT THE BLANCO AND SOAR TELESCOPES IN 2008 AND 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.

    2010-02-15

    The results of speckle interferometric measurements of binary and multiple stars conducted in 2008 and 2009 at the Blanco and SOAR 4 m telescopes in Chile are presented. A total of 1898 measurements of 1189 resolved pairs or sub-systems and 394 observations of 285 unresolved targets are listed. We resolved for the first time 48 new pairs, 21 of which are new sub-systems in close visual multiple stars. Typical internal measurement precision is 0.3 mas in both coordinates; typical companion detection capability is Δm ≈ 4.2 at 0.15 arcsec separation. These data were obtained with a new electron-multiplication CCD camera; data processing is described in detail, including estimation of magnitude difference, observational errors, detection limits, and analysis of artifacts. We comment on some newly discovered pairs and objects of special interest.

  16. Photogrammetry Methodology Development for Gossamer Spacecraft Structures

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Jones, Thomas W.; Black, Jonathan T.; Walford, Alan; Robson, Stuart; Shortis, Mark R.

    2002-01-01

    Photogrammetry--the science of calculating 3D object coordinates from images--is a flexible and robust approach for measuring the static and dynamic characteristics of future ultra-lightweight and inflatable space structures (a.k.a., Gossamer structures), such as large membrane reflectors, solar sails, and thin-film solar arrays. Shape and dynamic measurements are required to validate new structural modeling techniques and corresponding analytical models for these unconventional systems. This paper summarizes experiences at NASA Langley Research Center over the past three years to develop or adapt photogrammetry methods for the specific problem of measuring Gossamer space structures. Turnkey industrial photogrammetry systems were not considered a cost-effective choice for this basic research effort because of their high purchase and maintenance costs. Instead, this research uses mainly off-the-shelf digital-camera and software technologies that are affordable to most organizations and provide acceptable accuracy.
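
    The basic operation behind calculating 3D object coordinates from images, linear triangulation of a target point from two calibrated views, can be sketched as follows; the projection matrices are assumed known from a prior camera calibration, and the routine is a generic illustration rather than the software used in this work.

```python
# Sketch: linear (DLT) triangulation of one target point from two calibrated
# views.  P1, P2 are 3x4 projection matrices; (u, v) are measured pixels.
import numpy as np

def triangulate_point(P1, uv1, P2, uv2):
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                       # inhomogeneous 3D coordinates
```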

  17. Spinoff 2011

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics include: Bioreactors Drive Advances in Tissue Engineering; Tooling Techniques Enhance Medical Imaging; Ventilator Technologies Sustain Critically Injured Patients; Protein Innovations Advance Drug Treatments, Skin Care; Mass Analyzers Facilitate Research on Addiction; Frameworks Coordinate Scientific Data Management; Cameras Improve Navigation for Pilots, Drivers; Integrated Design Tools Reduce Risk, Cost; Advisory Systems Save Time, Fuel for Airlines; Modeling Programs Increase Aircraft Design Safety; Fly-by-Wire Systems Enable Safer, More Efficient Flight; Modified Fittings Enhance Industrial Safety; Simulation Tools Model Icing for Aircraft Design; Information Systems Coordinate Emergency Management; Imaging Systems Provide Maps for U.S. Soldiers; High-Pressure Systems Suppress Fires in Seconds; Alloy-Enhanced Fans Maintain Fresh Air in Tunnels; Control Algorithms Charge Batteries Faster; Software Programs Derive Measurements from Photographs; Retrofits Convert Gas Vehicles into Hybrids; NASA Missions Inspire Online Video Games; Monitors Track Vital Signs for Fitness and Safety; Thermal Components Boost Performance of HVAC Systems; World Wind Tools Reveal Environmental Change; Analyzers Measure Greenhouse Gasses, Airborne Pollutants; Remediation Technologies Eliminate Contaminants; Receivers Gather Data for Climate, Weather Prediction; Coating Processes Boost Performance of Solar Cells; Analyzers Provide Water Security in Space and on Earth; Catalyst Substrates Remove Contaminants, Produce Fuel; Rocket Engine Innovations Advance Clean Energy; Technologies Render Views of Earth for Virtual Navigation; Content Platforms Meet Data Storage, Retrieval Needs; Tools Ensure Reliability of Critical Software; Electronic Handbooks Simplify Process Management; Software Innovations Speed Scientific Computing; Controller Chips Preserve Microprocessor Function; Nanotube Production Devices Expand Research Capabilities; Custom Machines Advance Composite Manufacturing; Polyimide Foams Offer Superior Insulation; Beam Steering Devices Reduce Payload Weight; Models Support Energy-Saving Microwave Technologies; Materials Advance Chemical Propulsion Technology; and High-Temperature Coatings Offer Energy Savings.

  18. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  19. Our solution for fusion of simultaneously acquired whole-body scintigrams and optical images as a useful tool in clinical practice in patients with differentiated thyroid carcinomas after radioiodine therapy.

    PubMed

    Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija

    2017-01-01

    After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole-body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions where iodine-avid tissue is located; sometimes it is practically impossible to perform precise topographic localization of such regions. To address this problem, we have developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneously with routine scintigraphic whole-body acquisition, including an algorithm for fusion of the images obtained from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS mode, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a web camera, model C905 (Logitech, USA), with Carl Zeiss® optics, native resolution 1600×1200 pixels, 34° field of view and 30 g weight, with the autofocus option turned off and auto white balance turned on. The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a plexiglas adapter. Our own Vision-Fusion software for image acquisition and coregistration was developed in the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) with two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web camera; the Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts the web-camera image acquisition before starting image acquisition on the GC and stops it when the GC completes the acquisition. The web camera runs in continuous acquisition mode with a frame rate f depending on the speed of patient bed movement v (f = v/Δcm, where Δcm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, Δcm is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient's bed is moving are processed. Movement of the patient's bed is checked using the cross-correlation of two successive images. After each image capture, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image; stacking of narrow central ROIs introduces negligible distortion in the overall whole-body image. The first step in the fusion of the scintigram and the optical image was the determination of the spatial transformation between them. We performed an experiment with two markers (point sources of 99mTc pertechnetate, 1 MBq) visible in both images (WBS and optical) to find the transformation of coordinates between the images. The distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, with an average age of 50.10±12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq. Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of the radioiodine therapy. Based on our first results during clinical testing, we conclude that our system can improve the diagnostic capability of whole-body scintigraphy to detect thyroid remnant tissue in patients with DTC after radioiodine therapy.
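
    A minimal sketch of the final fusion step described above, rescaling the gamma image by the marker-derived scale factor and adding it to one colour channel of the optical image, is given below; the single-scale model (translation and rotation alignment omitted) and the variable names are simplifying assumptions, not the Vision-Fusion code.

```python
# Sketch: scale the gamma-camera image to the optical image using the known
# distance between two point markers visible in both, then add it to one
# colour channel.  Translation/rotation alignment is deliberately omitted.
import cv2
import numpy as np

def fuse_gamma_optical(optical_bgr, gamma_img, marker_dist_gamma_px,
                       marker_dist_optical_px, channel=1, amplification=0.7):
    """optical_bgr: HxWx3 uint8 image; gamma_img: 2-D array of counts."""
    scale = marker_dist_optical_px / marker_dist_gamma_px
    gamma_resized = cv2.resize(gamma_img, None, fx=scale, fy=scale,
                               interpolation=cv2.INTER_LINEAR)
    # Pad/crop the rescaled gamma image onto a canvas of the optical frame size.
    h, w = optical_bgr.shape[:2]
    canvas = np.zeros((h, w), dtype=np.float64)
    gh, gw = gamma_resized.shape[:2]
    canvas[:min(h, gh), :min(w, gw)] = gamma_resized[:min(h, gh), :min(w, gw)]
    canvas = 255.0 * canvas / max(canvas.max(), 1.0)       # normalize to 8-bit range
    fused = optical_bgr.astype(np.float64)
    fused[:, :, channel] = np.clip(
        fused[:, :, channel] + amplification * canvas, 0, 255)
    return fused.astype(np.uint8)
```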

  20. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  1. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  2. Deep Space Positioning System

    NASA Technical Reports Server (NTRS)

    Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)

    2016-01-01

    A single, compact, low-power deep space positioning system (DPS) configured to determine the location of a spacecraft anywhere in the solar system and provide state information relative to the Earth, the Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine the state of a spacecraft in the solar system. The second camera is located behind, or adjacent to, a secondary reflector of the first camera in the body of a telescope.

  3. Precise Determination of the Orientation of the Solar Image

    NASA Astrophysics Data System (ADS)

    Győri, L.

    2010-12-01

    Accurate heliographic coordinates of objects on the Sun have to be known in several fields of solar physics. One of the factors that affect the accuracy of the measurements of the heliographic coordinates is the accuracy of the orientation of a solar image. In this paper the well-known drift method for determining the orientation of the solar image is applied to data taken with a solar telescope equipped with a CCD camera. The factors that influence the accuracy of the method are systematically discussed, and the necessary corrections are determined. These factors are as follows: the trajectory of the center of the solar disk on the CCD with the telescope drive turned off, the astronomical refraction, the change of the declination of the Sun, and the optical distortion of the telescope. The method can be used on any solar telescope that is equipped with a CCD camera and is capable of taking solar full-disk images. As an example to illustrate the method and its application, the orientation of solar images taken with the Gyula heliograph is determined. As a byproduct, a new method to determine the optical distortion of a solar telescope is proposed.
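
    The geometric core of the drift method is to fit a straight line to the drifting positions of the solar-disk centre recorded with the telescope drive off and take its direction as the reference (east-west) direction on the detector; a minimal fit, before the refraction and distortion corrections discussed in the paper, is sketched below.

```python
# Sketch: estimate image orientation by the drift method -- fit a line to the
# drifting solar-disk-centre positions (telescope drive off) and return its
# direction angle on the CCD.  Corrections discussed in the paper are omitted.
import numpy as np

def drift_orientation_deg(xs, ys):
    """xs, ys: pixel coordinates of the disk centre sampled over time."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    # Principal direction of the drift track via SVD on the centred samples.
    X = np.column_stack([xs - xs.mean(), ys - ys.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    dx, dy = Vt[0]
    return np.degrees(np.arctan2(dy, dx))     # angle of the drift (east-west) line
```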

  4. Apollo 12 photography 70 mm, 16 mm, and 35 mm frame index

    NASA Technical Reports Server (NTRS)

    1970-01-01

    For each 70-mm frame, the index presents information on: (1) the focal length of the camera, (2) the photo scale at the principal point of the frame, (3) the selenographic coordinates at the principal point of the frame, (4) the percentage of forward overlap of the frame, (5) the sun angle (medium, low, high), (6) the quality of the photography, (7) the approximate tilt (minimum and maximum) of the camera, and (8) the direction of tilt. A brief description of each frame is also included. The index to the 16-mm sequence photography includes information concerning the approximate surface coverage of the photographic sequence and a brief description of the principal features shown. A column of remarks is included to indicate: (1) if the sequence is plotted on the photographic index map and (2) the quality of the photography. The pictures taken using the lunar surface closeup stereoscopic camera (35 mm) are also described in this same index format.

  5. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  6. Clinical applications of commercially available video recording and monitoring systems: inexpensive, high-quality video recording and monitoring systems for endoscopy and microsurgery.

    PubMed

    Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko

    2006-01-01

    Dedicated charge-coupled device (CCD) camera systems for endoscopes, as well as electronic fiberscopes, are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and electronic fiberscopy system are at least US Dollars 10,000 and US Dollars 30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US Dollars 1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.

  7. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned at 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.
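
    One common way to put numbers on such characteristics (a hedged sketch under assumed definitions, not the authors' quasi-static protocol, and resolution is omitted) is to repeatedly measure an artifact of known length throughout the volume and report the bias of the measurements as accuracy and their scatter as repeatability:

        import numpy as np

        def characterize(measured_mm, true_mm):
            """Bias (accuracy) and 1-sigma scatter (repeatability) of repeated
            length measurements of a known artifact, in millimetres."""
            m = np.asarray(measured_mm, float)
            return np.mean(m - true_mm), np.std(m, ddof=1)

        # Synthetic trials of a 500 mm bar measured 200 times in the capture volume.
        trials = 500.0 + np.random.normal(0.05, 0.12, 200)
        bias, scatter = characterize(trials, 500.0)
        print(f"accuracy {bias:+.3f} mm, repeatability {scatter:.3f} mm")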

  8. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  9. Wired and Wireless Camera Triggering with Arduino

    NASA Astrophysics Data System (ADS)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  10. Dynamic photoelasticity by TDI imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Sajan, M. R.

    2001-06-01

    High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, which requires time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments. Recently, there has been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording, as well as system limitations, are explained in the paper.

  11. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple camera system (up to 256 cameras) for NASA Kennedy installations where camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that the benefits (such as improved video performance, immunity from electromagnetic interference and radio frequency interference, elimination of repeater stations, and more system configuration flexibility) can be realized by applying the proven fiber optic transmission concept. The control system will marry the lens, pan and tilt, and camera control functions into a modular, Local Area Network (LAN)-based control network. Such a system does not exist commercially at present since the Television Broadcast Industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.

  12. Levels of Autonomy and Autonomous System Performance Assessment for Intelligent Unmanned Systems

    DTIC Science & Technology

    2014-04-01

    LIDAR and camera sensors that is driven entirely by teleoperation would be AL 0. If that same robot used its LIDAR and camera data to generate a... obstacle detection, mapping, path planning 3 CMMAD semi-autonomous counter-mine system (Few 2010) Talon UGV, camera, LIDAR, metal detector... NCAP framework are performed on individual UMS components and do not require mission-level evaluations. For example, bench testing of camera, LIDAR

  13. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Secondly, several pivotal designs of the imaging system are introduced, such as the design of the focal plane module, video signal processing, the controller design of the imaging system, and synchronous photography between the forward, nadir and backward cameras and the line-matrix CCD of the nadir camera. Finally, the test results of the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography between the forward, nadir and backward cameras, and with the line-matrix CCD of the nadir camera, is better than 4 ns; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR of each CCD image tested in the laboratory is better than 95 under typical working conditions (solar incidence angle of 30 degrees, earth surface reflectivity of 0.3); and the temperature of the focal plane module is kept below 30 degrees C over a working period of 15 minutes. These results satisfy the requirements on synchronous photography, focal plane module temperature control and SNR, guaranteeing the precision needed for satellite photogrammetry.

  14. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
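
    The layer-wise idea can be sketched as follows (a minimal, hedged illustration rather than the authors' implementation; wavelength, pixel pitch and layer count are placeholder values): points are binned into depth layers, each layer is rasterised, propagated to the hologram plane with an FFT-based angular-spectrum step, and the complex fields are summed.

        import numpy as np

        def angular_spectrum(field, wavelength, dx, z):
            """Propagate a complex field by distance z (m) with the angular spectrum method."""
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
            H = np.exp(1j * 2.0 * np.pi / wavelength * z * np.sqrt(arg))
            return np.fft.ifft2(np.fft.fft2(field) * H)

        def cgh_from_points(points, n=512, dx=8e-6, wavelength=532e-9, n_layers=16):
            """points: (N, 3) array of x, y (metres, centred on the axis) and depth z (metres)."""
            x, y, z = points.T
            edges = np.linspace(z.min(), z.max(), n_layers + 1)
            layer_id = np.clip(np.digitize(z, edges) - 1, 0, n_layers - 1)
            holo = np.zeros((n, n), complex)
            for k in range(n_layers):
                sel = layer_id == k
                if not sel.any():
                    continue
                grid = np.zeros((n, n), complex)
                cols = np.clip((x[sel] / dx + n / 2).astype(int), 0, n - 1)
                rows = np.clip((y[sel] / dx + n / 2).astype(int), 0, n - 1)
                grid[rows, cols] = 1.0                       # rasterise this depth layer
                z_mid = 0.5 * (edges[k] + edges[k + 1])
                holo += angular_spectrum(grid, wavelength, dx, z_mid)  # one FFT pair per layer
            return np.angle(holo)                            # phase-only hologram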

  15. Industrial inspection of specular surfaces using a new calibration procedure

    NASA Astrophysics Data System (ADS)

    Aswendt, Petra; Hofling, Roland; Gartner, Soren

    2005-06-01

    The methodology of phase-encoded reflection measurements has become a valuable tool for the industrial inspection of components with glossy surfaces. The measuring principle provides outstanding sensitivity for tiny variations of surface curvature, so that sub-micron waviness and flaws are reliably detected. Quantitative curvature measurements can be obtained from a simple approach if the object is almost flat. 3D objects with a high aspect ratio require more effort to determine both the coordinates and the normal direction of a surface point unambiguously. Stereoscopic solutions have been reported using more than one camera for a certain surface area. This paper describes the combined double camera steady surface approach (DCSS), which is well suited for implementation in industrial testing stations.

  16. Optical Meteor Systems Used by the NASA Meteoroid Environment Office

    NASA Technical Reports Server (NTRS)

    Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.

    2015-01-01

    The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system to study cm-size and mm-size meteors, respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and for informing the public on bright meteor events. The NASA Wide Field Meteor Network was established in December of 2012 with two cameras and expanded to eight cameras in December of 2014. The network detected 5470 meteors over its two years of operation with two cameras, and 3423 meteors in the first five months of operation (Dec 12, 2014 - May 12, 2015) with eight cameras. We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20 degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis. Every morning the servers automatically generate an e-mail and web page containing an analysis of the previous night's events. The current status of the networks will be described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, will be discussed.

  17. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  18. Integrated inertial stellar attitude sensor

    NASA Technical Reports Server (NTRS)

    Brady, Tye M. (Inventor); Kourepenis, Anthony S. (Inventor); Wyman, Jr., William F. (Inventor)

    2007-01-01

    An integrated inertial stellar attitude sensor for an aerospace vehicle includes a star camera system, a gyroscope system, a controller system for synchronously integrating an output of said star camera system and an output of said gyroscope system into a stream of data, and a flight computer responsive to said stream of data for determining from the star camera system output and the gyroscope system output the attitude of the aerospace vehicle.

  19. A new method for registration of heterogeneous sensors in a dimensional measurement system

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Zhong; Fu, Luhua; Qu, Xinghua; Zhang, Heng; Liu, Changjie

    2017-10-01

    Registration of multiple sensors is a basic step in multi-sensor dimensional or coordinate measuring systems before any measurement. In most cases, a common standard is measured by all sensors, and this may work well for general registration of multiple homogeneous sensors. However, when inhomogeneous sensors detect a common standard, it is usually very difficult to obtain the same information, because of the different working principles of the sensors. In this paper, a new method called multiple steps registration is proposed to register two sensors: a video camera sensor (VCS) and a tactile probe sensor (TPS). In this method, the two sensors measure two separate standards: a chrome circle on a reticle and a reference sphere with a constant distance between them, fixed on a steel plate. The VCS captures only the circle and the TPS touches only the sphere. Both simulations and real experiments demonstrate that the proposed method is robust and accurate in the registration of multiple inhomogeneous sensors in a dimensional measurement system.

  20. Aerial vehicles collision avoidance using monocular vision

    NASA Astrophysics Data System (ADS)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle, a system of equations relating object coordinates in space to the observed image is solved. The solution of this system gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.

  1. Implementation of a close range photogrammetric system for 3D reconstruction of a scoliotic torso

    NASA Astrophysics Data System (ADS)

    Detchev, Ivan Denislavov

    Scoliosis is a deformity of the human spine most commonly encountered in children. Once detected, periodic x-ray examinations are traditionally used to measure its progression. However, due to the increased risk of cancer, a non-invasive and radiation-free scoliosis detection and progression monitoring methodology is needed. Quantifying the scoliotic deformity through the torso surface is a valid alternative, because of its high correlation with the internal spine curvature. This work proposes a low-cost multi-camera photogrammetric system for semi-automated 3D reconstruction of a torso surface with sub-millimetre level accuracy. The thesis describes the system design and calibration for optimal accuracy. It also covers the methodology behind the reconstruction and registration procedures. The experimental results include the complete reconstruction of a scoliotic torso mannequin. The final accuracy is evaluated through the goodness of fit between the reconstructed surface and a more accurate set of points measured by a coordinate measuring machine.

  2. The first satellite laser echoes recorded on the streak camera

    NASA Technical Reports Server (NTRS)

    Hamal, Karel; Prochazka, Ivan; Kirchner, Georg; Koidl, F.

    1993-01-01

    The application of a streak camera with a circular sweep to satellite laser ranging is described. The Modular Streak Camera system employing the circular sweep option was integrated into the conventional Satellite Laser System. Experimental satellite tracking and ranging was performed. The first streak camera records of satellite laser echoes are presented.

  3. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.

  4. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.

  5. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
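
    A hedged sketch of the two image-processing steps described above (frame differencing to isolate the laser spot, then disparity-based ranging); it assumes the laser beam is parallel to the optical axis at a known offset, which is a simplification of the patented geometry, and all names are illustrative.

        import numpy as np

        def laser_spot(frame_off, frame_on, threshold=30.0):
            """Centroid (col, row) of the laser spot, found by differencing the frame taken
            before illumination from the frame taken with the laser on."""
            diff = frame_on.astype(float) - frame_off.astype(float)
            rows, cols = np.nonzero(diff > threshold)
            if rows.size == 0:
                return None
            w = diff[rows, cols]
            return np.average(cols, weights=w), np.average(rows, weights=w)

        def range_from_disparity(spot_px, reference_px, focal_px, baseline_m):
            """Triangulated range for a laser beam parallel to the optical axis and
            offset from it by baseline_m; disparity is measured in pixels."""
            disparity = abs(spot_px - reference_px)
            return float("inf") if disparity == 0 else focal_px * baseline_m / disparity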

  6. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
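
    For illustration only (a toy single-input sketch, not the control system described above), a fuzzy pan-rate rule base with triangular membership functions and centroid defuzzification might look like this; the gains and set shapes are arbitrary placeholders.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c] and peak at b."""
            return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

        def fuzzy_pan_rate(err_px, half_width_px=320.0, max_rate_dps=10.0):
            """Pan rate (deg/s) from the target's horizontal pixel error via three rules:
            error negative -> pan left, error zero -> hold, error positive -> pan right."""
            e = float(np.clip(err_px / half_width_px, -1.0, 1.0))
            mu = np.array([tri(e, -1.5, -1.0, 0.0),      # Negative
                           tri(e, -0.5,  0.0, 0.5),      # Zero
                           tri(e,  0.0,  1.0, 1.5)])     # Positive
            consequents = np.array([-max_rate_dps, 0.0, max_rate_dps])
            return float(np.dot(mu, consequents) / (mu.sum() + 1e-9))

        print(fuzzy_pan_rate(150))   # target right of centre -> positive pan rate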

  7. Laser- and Multi-Spectral Monitoring of Natural Objects from UAVs

    NASA Astrophysics Data System (ADS)

    Reiterer, Alexander; Frey, Simon; Koch, Barbara; Stemmler, Simon; Weinacker, Holger; Hoffmann, Annemarie; Weiler, Markus; Hergarten, Stefan

    2016-04-01

    The paper describes the research, development and evaluation of a lightweight sensor system for UAVs. The system is composed of three main components: (1) a laser scanning module, (2) a multi-spectral camera system, and (3) a processing/storage unit. All three components are newly developed. Besides measurement precision and frequency, the low weight has been one of the challenging tasks. The current system has a total weight of about 2.5 kg and is designed as a self-contained unit (incl. storage and battery units). The main features of the system are: laser-based multi-echo 3D measurement at a wavelength of 905 nm (fully eye-safe), measurement range up to 200 m, measurement frequency of 40 kHz, scanning frequency of 16 Hz, and relative distance accuracy of 10 mm. The system is equipped with both GNSS and IMU. Alternatively, a multi-visual-odometry system has been integrated to estimate the trajectory of the UAV from image features (based on this system, a calculation of 3D coordinates without GNSS is possible). The integrated multi-spectral camera system is based on conventional CMOS image chips equipped with special sets of band-pass interference filters with a full width at half maximum (FWHM) of 50 nm. Good results for calculating the normalized difference vegetation index (NDVI) and the wide dynamic range vegetation index (WDRVI) have been achieved using the band-pass interference filter set with a FWHM of 50 nm and exposure times between 5,000 μs and 7,000 μs. The system is currently used for monitoring of natural objects and surfaces, such as forests, as well as for geo-risk analysis (landslides). By measuring 3D geometric and multi-spectral information, reliable monitoring and interpretation of the data set is possible. The paper gives an overview of the development steps, the system, the evaluation and first results.
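
    The two vegetation indices mentioned above follow directly from the band images; a brief sketch (the band arrays and the WDRVI weighting factor are placeholders):

        import numpy as np

        def ndvi(nir, red):
            """Normalized difference vegetation index: (NIR - R) / (NIR + R)."""
            nir, red = nir.astype(float), red.astype(float)
            return (nir - red) / (nir + red + 1e-9)

        def wdrvi(nir, red, alpha=0.2):
            """Wide dynamic range vegetation index: (a*NIR - R) / (a*NIR + R), a ~ 0.1-0.2."""
            nir, red = nir.astype(float), red.astype(float)
            return (alpha * nir - red) / (alpha * nir + red + 1e-9)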

  8. Variation in detection among passive infrared triggered-cameras used in wildlife research

    USGS Publications Warehouse

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.

  9. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

  10. Uav Cameras: Overview and Geometric Calibration Benchmark

    NASA Astrophysics Data System (ADS)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes) modified cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep a constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially in such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  11. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e. separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
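
    A hedged sketch of how two IMU orientations plus a radar range could seed the stereo extrinsics (not the authors' calibration; it assumes camera-to-world quaternions and a known baseline direction in the first camera's frame, both of which are placeholder assumptions):

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def stereo_extrinsics(q_cam1_to_world, q_cam2_to_world, radar_range_m,
                              baseline_dir_cam1=(1.0, 0.0, 0.0)):
            """Relative rotation and translation of camera 2 expressed in camera 1's frame.
            Quaternions are (x, y, z, w) and rotate camera-frame vectors into the world frame."""
            R1 = R.from_quat(q_cam1_to_world).as_matrix()
            R2 = R.from_quat(q_cam2_to_world).as_matrix()
            R_rel = R1.T @ R2                                   # camera-2 axes seen from camera 1
            t_rel = radar_range_m * np.asarray(baseline_dir_cam1, float)
            return R_rel, t_rel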

  12. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down and more perpendicular to the flow. The third camera is in the next reach upstream of the sediment trap, at closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully; this camera is activated by a rain gauge when there is one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which can correct distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because the images are already orthorectified. During the monitoring program (since 2011) we recorded three debris flow events at the sediment trap area, each with very different surge dynamics. The camera in the gully came into operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
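
    The core LSPIV computation, reduced to a single interrogation window, can be sketched as below (a hedged illustration of the principle, not Fudaa-LSPIV itself; window bounds, frame interval and ground sample distance are placeholders):

        import numpy as np
        from scipy.signal import correlate2d

        def lspiv_velocity(frame_a, frame_b, win, search, dt, gsd):
            """Single-window LSPIV sketch: cross-correlate an interrogation window from
            frame_a (rows r0:r1, cols c0:c1 = win) against a larger search region of
            frame_b (rows R0:R1, cols C0:C1 = search) and convert the correlation-peak
            displacement to a surface velocity. gsd = ground sample distance (m/pixel)."""
            r0, r1, c0, c1 = win
            R0, R1, C0, C1 = search
            tpl = frame_a[r0:r1, c0:c1].astype(float)
            reg = frame_b[R0:R1, C0:C1].astype(float)
            corr = correlate2d(reg - reg.mean(), tpl - tpl.mean(), mode="valid")
            dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
            dy = (R0 + dr) - r0                  # window displacement between frames, pixels
            dx = (C0 + dc) - c0
            return dx * gsd / dt, dy * gsd / dt  # vx, vy in m/s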

  13. Depth Perception In Remote Stereoscopic Viewing Systems

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating cameras around the center of a circle passing through the point of convergence of the viewing axes and the first nodal points of the two camera lenses.

  14. Multi-color pyrometry imaging system and method of operating the same

    DOEpatents

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.

  15. Vision-based system for the control and measurement of wastewater flow rate in sewer systems.

    PubMed

    Nguyen, L S; Schaeli, B; Sage, D; Kayal, S; Jeanbourquin, D; Barry, D A; Rossi, L

    2009-01-01

    Combined sewer overflows and stormwater discharges represent an important source of contamination to the environment. However, the harsh environment inside sewers and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. In the following, we present and evaluate an in situ system for the monitoring of water flow in sewers based on video images. This paper focuses on the measurement of the water level based on image-processing techniques. The developed image-based water level algorithms identify the wall/water interface from sewer images and measure its position with respect to real-world coordinates. A web-based user interface and a 3-tier system architecture enable the remote configuration of the cameras and the image-processing algorithms. Images acquired and processed by our system were found to reliably measure water levels, thereby providing crucial information leading to a better understanding of particular hydraulic behaviors. In terms of robustness and accuracy, the water level algorithm provided equal or better results compared to traditional water level probes in three different in situ configurations.
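
    A minimal sketch of the two steps described above, interface detection and pixel-to-world conversion (illustrative only; the paper's algorithms and calibration are more involved, and the names and the linear mapping are assumptions):

        import numpy as np

        def interface_row(gray_strip):
            """Find the wall/water interface in a vertical image strip as the row with the
            strongest mean vertical intensity gradient (a crude stand-in for the real detector)."""
            profile = gray_strip.astype(float).mean(axis=1)   # one value per image row
            grad = np.abs(np.diff(profile))
            return int(np.argmax(grad))

        def water_level(row_px, row_ref_px, metres_per_px, level_ref_m):
            """Map the interface row to a level using a linear pixel-to-world calibration
            (a reference row and its surveyed level are assumed known)."""
            return level_ref_m + (row_ref_px - row_px) * metres_per_px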

  16. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extreme high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  17. Photogrammetry research for FAST eleven-meter reflector panel surface shape measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Rongwei; Zhu, Lichun; Li, Weimin; Hu, Jingwen; Zhai, Xuebing

    2010-10-01

    In the design and manufacture of measuring equipment for the Five-hundred-meter Aperture Spherical Radio Telescope (FAST) active reflector, the surface shape of each reflector panel must be measured, a static measurement of the whole neutral spherical network of nodes performed, and a real-time dynamic measurement of the cable network's deformation undertaken. In the implementation of FAST, reflector panel surface shape inspection was completed before installation of the eleven-meter reflector panels. A binocular vision system was constructed based on stereo vision methods from machine vision, and the surface shape of the eleven-meter reflector panels was measured photogrammetrically. The cameras were calibrated using feature points: under a linear camera model, a lighting spot array served as the calibration pattern, and the intrinsic and extrinsic parameters were acquired. Images from the two cameras were collected for digital image processing and analysis, feature points were extracted with a feature detection algorithm, and the points were matched using the epipolar constraint. The three-dimensional reconstructed coordinates of the feature points were analyzed, and the panel surface shape was established by curve and surface fitting. The surface shape error of the reflector panel was then calculated, realizing automatic measurement of the panel surface shape. The results show that the surface inspection accuracy for a unit reflector panel was 2.30 mm, within the required standard deviation of 5.00 mm. Compared with the required reflector panel machining precision, photogrammetry offers adequate precision and operational feasibility for eleven-meter reflector panel surface shape measurement for FAST.
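
    The reconstruction step, from matched image points to 3D coordinates, is standard linear triangulation; a hedged sketch (the projection matrices come from a calibration like the one described above, and the helper names are illustrative):

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """DLT triangulation of one matched feature point.
            P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates."""
            A = np.vstack([x1[0] * P1[2] - P1[0],
                           x1[1] * P1[2] - P1[1],
                           x2[0] * P2[2] - P2[0],
                           x2[1] * P2[2] - P2[1]])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]            # inhomogeneous 3D point

        def surface_rms(reconstructed, fitted):
            """RMS deviation (same units as the inputs) between reconstructed points
            and the corresponding points on the fitted panel surface."""
            d = np.linalg.norm(np.asarray(reconstructed) - np.asarray(fitted), axis=1)
            return float(np.sqrt(np.mean(d ** 2)))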

  18. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

    focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera... sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, a pixel pitch of 38 µm, a focal length of 1.8 m, a FOV of 5.4 x 5.4 mrad... each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  19. A Method of Three-Dimensional Recording of Mandibular Movement Based on Two-Dimensional Image Feature Extraction

    PubMed Central

    Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. Plaster models of a subject's upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target marked with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Comparisons between the measured coordinate values and the actual values of the 30 intersections on the detection target were then analyzed using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate differences and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed no statistically significant differences between the measured and actual coordinate values of the 30 cross points (P > 0.05). Conclusions Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording accuracy of approximately ± 0.1 mm, and is therefore suitable for clinical application. Certainly, further research is necessary to confirm the clinical applications of the method. PMID:26375800
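
    The per-axis statistical check described above is a standard paired t-test; a brief sketch with synthetic numbers (not the study's data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        nominal = rng.uniform(0.0, 20.0, 30)             # nominal X coordinates of the 30 intersections (mm)
        measured = nominal + rng.normal(0.0, 0.05, 30)   # system's measured X coordinates (mm)
        t, p = stats.ttest_rel(measured, nominal)
        print(f"t = {t:.3f}, p = {p:.3f}  (p > 0.05 -> no significant systematic difference)")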

  20. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
