Sample records for camera path reconstruction

  1. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality for various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
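The temporal total-variation smoothing step can be sketched in NumPy. This is a hedged illustration, not the authors' implementation: `smooth_path_tv`, its parameters, and the simple subgradient-descent solver are invented here, and the paper's l1-optimized path uses a more elaborate formulation.

```python
import numpy as np

def smooth_path_tv(path, lam=2.0, step=0.05, iters=2000):
    """Smooth a 1D camera-path signal by subgradient descent on
    0.5 * ||p - path||^2 + lam * sum_t |p[t+1] - p[t]|  (a TV penalty)."""
    p = path.astype(float).copy()
    for _ in range(iters):
        grad = p - path                  # data-fidelity gradient
        d = np.sign(np.diff(p))          # subgradient of the TV term
        grad[:-1] -= lam * d
        grad[1:] += lam * d
        p -= step * grad
    return p

# Demo: a jittery pan (random walk on top of a steady drift).
rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0.0, 1.0, 200)) + 0.2 * np.arange(200)
smooth = smooth_path_tv(raw)
```

The TV penalty favors piecewise-constant motion, which is why such paths keep the intended pan while suppressing shake.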

  2. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  3. Scalar wave-optical reconstruction of plenoptic camera images.

    PubMed

    Junker, André; Stenau, Tim; Brenner, Karl-Heinz

    2014-09-01

    We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to continuing miniaturization of hardware components it can be assumed that in combination with low-aperture optical systems this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects.
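Scalar wave-optical propagation of the kind discussed above is commonly modeled with the angular-spectrum method. A minimal sketch, assuming a square sampling grid (the function name and the demo's wavelength and sampling values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (scalar wave optics)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Evanescent components get kz = 0 here; a fuller model would damp them.
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                      # angular-spectrum transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Demo: an on-axis plane wave keeps unit amplitude under propagation.
plane = np.ones((64, 64), dtype=complex)
out = angular_spectrum_propagate(plane, 633e-9, 10e-6, 1e-3)
```

Such a routine runs comfortably on a regular computer, consistent with the authors' claim about their wave-optical reconstruction algorithm.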

  4. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    This viewgraph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  5. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  6. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics-based characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that need to include careful parameterization of main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in earlier work. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype which is developed based on a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally we evaluate the spectral reconstruction performance of a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  7. Improved electromagnetic tracking for catheter path reconstruction with application in high-dose-rate brachytherapy.

    PubMed

    Lugez, Elodie; Sadjadi, Hossein; Joshi, Chandra P; Akl, Selim G; Fichtinger, Gabor

    2017-04-01

    Electromagnetic (EM) catheter tracking has recently been introduced in order to enable prompt and uncomplicated reconstruction of catheter paths in various clinical interventions. However, EM tracking is prone to measurement errors which can compromise the outcome of the procedure. Minimizing catheter tracking errors is therefore paramount to improve the path reconstruction accuracy. An extended Kalman filter (EKF) was employed to combine the nonlinear kinematic model of an EM sensor inside the catheter, with both its position and orientation measurements. The formulation of the kinematic model was based on the nonholonomic motion constraints of the EM sensor inside the catheter. Experimental verification was carried out in a clinical HDR suite. Ten catheters were inserted with mean curvatures varying from 0 to [Formula: see text] in a phantom. A miniaturized Ascension (Burlington, Vermont, USA) trakSTAR EM sensor (model 55) was threaded within each catheter at various speeds ranging from 7.4 to [Formula: see text]. The nonholonomic EKF was applied on the tracking data in order to statistically improve the EM tracking accuracy. A sample reconstruction error was defined at each point as the Euclidean distance between the estimated EM measurement and its corresponding ground truth. A path reconstruction accuracy was defined as the root mean square of the sample reconstruction errors, while the path reconstruction precision was defined as the standard deviation of these sample reconstruction errors. The impacts of sensor velocity and path curvature on the nonholonomic EKF method were determined. Finally, the nonholonomic EKF catheter path reconstructions were compared with the reconstructions provided by the manufacturer's filters under default settings, namely the AC wide notch and the DC adaptive filter. With a path reconstruction accuracy of 1.9 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (2.4 mm) by 21% and the raw EM
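The predict/update structure of such a filter can be sketched with a generic constant-velocity Kalman filter. The paper's nonholonomic EKF additionally fuses orientation measurements and motion constraints; `kalman_smooth_track` and its noise settings below are illustrative assumptions only.

```python
import numpy as np

def kalman_smooth_track(measurements, dt=0.1, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over noisy 1D position readings."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update with the new measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

# Demo: a sensor moving at constant speed, observed with noise.
t = np.arange(200) * 0.1
truth = 0.5 * t
rng = np.random.default_rng(1)
noisy = truth + rng.normal(0.0, 0.5, t.size)
filtered = kalman_smooth_track(noisy)
```

The filtered track has a lower RMS error than the raw readings, mirroring how the nonholonomic EKF statistically improves EM tracking accuracy.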

  8. Superficial vessel reconstruction with a multiview camera system

    PubMed Central

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    Abstract. We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide the deformation methods to compensate for the brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three mono-complementary metal-oxide semiconductor cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering where the 3-D points are compared to surfaces obtained using the thin-plate spline with decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation in virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean square error and mean error) are ∼1  mm. PMID:26759814

  9. Optimization and verification of image reconstruction for a Compton camera towards application as an on-line monitor for particle therapy

    NASA Astrophysics Data System (ADS)

    Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.

    2017-07-01

    Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.

  10. Image-based path planning for automated virtual colonoscopy navigation

    NASA Astrophysics Data System (ADS)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening, by reconstructing three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, some time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. Camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and improve user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.

  11. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence-matching process is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
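The linear triangulation step mentioned above has a standard direct-linear form. A minimal sketch, assuming known 3×4 camera projection matrices (the function names and the synthetic demo geometry are assumptions for illustration):

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel
    observations x1, x2 and 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null-space vector = homogeneous point
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Demo: two synthetic cameras observing a known point.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
P2 = K @ np.hstack([R, np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.2, -0.1, 5.0])
Xr = triangulate_linear(P1, P2, project(P1, X), project(P2, X))
```

With noise-free observations the SVD null-space recovers the point exactly; in practice, the matched centerline points carry correlation noise, which this linear solve averages.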

  12. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  13. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets for the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point, which is seen by multiple cameras, is a common problem. Camera placement, camera calibration, corresponding pixel determination between the images of different cameras, and also the triangulation algorithm affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on the flare trajectory estimation performance by simulations. Firstly, the 3D trajectory of a flare and also of the aircraft, which dispenses the flare, are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras are tracking the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and also the 3D position of the flare, image plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we have used two sources of error. One is used to model the uncertainties in the determination of the camera view vectors, i.e., the orientations of the cameras are measured with noise. The second noise source is used to model the imperfections of the corresponding pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated using the corresponding pixel indices, view vector and also the FOV of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
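Triangulation from two camera positions and view vectors is often done with the midpoint method. The abstract does not specify which triangulation algorithm is used, so the sketch below (`triangulate_midpoint` is an invented name) is one common choice, not the authors' code:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the 3D point nearest to two viewing rays, each given by a
    camera position c and a view direction d (midpoint triangulation)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    det = a11 * a22 - a12 * a12          # ~0 when the rays are near-parallel
    s = (a22 * (d1 @ b) - a12 * (d2 @ b)) / det
    t = (a12 * (d1 @ b) - a11 * (d2 @ b)) / det
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Demo: exact view rays from two camera positions toward a known point.
X = np.array([1.0, 2.0, 10.0])
c1 = np.zeros(3)
c2 = np.array([2.0, 0.0, 0.0])
est = triangulate_midpoint(c1, X - c1, c2, X - c2)
```

The near-singular `det` for nearly parallel rays is exactly why camera placement matters: a small baseline-to-range ratio amplifies the orientation and pixel noise the paper models.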

  14. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, allowing multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results will be demonstrated, and an improved version of this modified plenoptic camera will be discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide systems in adaptive optics to make intelligent analysis and corrections.

  15. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size has a significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information of individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%. The one-camera system has an error greater than 25%. The visualization of a 3D bubble rising demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in the single camera measurement.
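The silhouette test at the core of space carving can be sketched as follows, assuming known 3×4 projection matrices and boolean silhouette masks. This is a simplified one-camera illustration with invented names, not the authors' four-camera pipeline:

```python
import numpy as np

def carve(voxel_centers, cameras, silhouettes):
    """Keep only voxels whose projection lands inside every silhouette."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    Xh = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for P, sil in zip(cameras, silhouettes):
        x = Xh @ P.T                         # project all voxels at once
        u = np.round(x[:, 0] / x[:, 2]).astype(int)
        v = np.round(x[:, 1] / x[:, 2]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep &= inside                       # off-image voxels are carved away
        keep[inside] &= sil[v[inside], u[inside]]
    return keep

# Demo: one synthetic camera looking down +z at a square silhouette.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
sil = np.zeros((64, 64), dtype=bool)
sil[20:44, 20:44] = True
voxels = np.array([[0.0, 0.0, 10.0], [3.0, 0.0, 10.0]])
kept = carve(voxels, [P], [sil])
```

Each added camera carves further, which is why four views recover bubble volume far more accurately than one or two.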

  16. Open-path, closed-path and reconstructed aerosol extinction at a rural site.

    PubMed

    Gordon, Timothy D; Prenni, Anthony J; Renfro, James R; McClure, Ethan; Hicks, Bill; Onasch, Timothy B; Freedman, Andrew; McMeeking, Gavin R; Chen, Ping

    2018-04-09

    The Handix Scientific Open-Path Cavity Ringdown Spectrometer (OPCRDS) was deployed during summer 2016 in Great Smoky Mountains National Park (GRSM). Extinction coefficients from the relatively new OPCRDS and from a more well-established extinction instrument agreed to within 7%. Aerosol hygroscopic growth (f(RH)) was calculated from the ratio of ambient extinction measured by the OPCRDS to dry extinction measured by a closed-path extinction monitor (Aerodyne's Cavity Attenuated Phase Shift Particulate Matter Extinction Monitor, CAPS PMex). Derived hygroscopicity (RH < 95%) from this campaign agreed with data from 1995 at the same site and time of year, which is noteworthy given the decreasing trend for organics and sulfate in the eastern U.S. However, maximum f(RH) values in 1995 were less than half as large as those recorded in 2016-possibly due to nephelometer truncation losses in 1995. Two hygroscopicity parameterizations were investigated using high time resolution OPCRDS+CAPS PMex data, and the K ext model was more accurate than the γ model. Data from the two ambient optical instruments, the OPCRDS and the open-path nephelometer, generally agreed; however, significant discrepancies between ambient scattering and extinction were observed, apparently driven by a combination of hygroscopic growth effects, which tend to increase nephelometer truncation losses and decrease sensitivity to the wavelength difference between the two instruments as a function of particle size. There was not a statistically significant difference in the mean reconstructed extinction values obtained from the original and the revised IMPROVE (Interagency Monitoring of Protected Visual Environments) equations. On average IMPROVE reconstructed extinction was ~25% lower than extinction measured by the OPCRDS, which suggests that the IMPROVE equations and 24-hr aerosol data are moderately successful in estimating current haze levels at GRSM. However, this conclusion is limited by the coarse

  17. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decrease, image resolution and frame rate increase, along with plug-and-play usability. Consequently, they have been recently considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of having a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the

  18. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed using a human-operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera and a computer. By using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons and so forth.

  19. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method, are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
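The conventional 11-parameter DLT that the review critiques can be sketched as follows. This is an in-air illustration with no refraction correction; the function names and the synthetic demo geometry are assumptions for illustration:

```python
import numpy as np

def dlt_calibrate(world, image):
    """Solve the 11-parameter DLT for a 3x4 projection matrix from
    >= 6 control points with known 3D and 2D coordinates."""
    A = []
    for (X, Y, Z), (u, v) in zip(world, image):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)          # null-space vector, up to scale

def dlt_project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Demo: recover a synthetic camera from 8 exact control points.
rng = np.random.default_rng(2)
K = np.array([[700.0, 0.0, 300.0], [0.0, 700.0, 200.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [0.2], [2.0]])])
world = rng.uniform(-1, 1, (8, 3)) + np.array([0.0, 0.0, 5.0])
image = np.array([dlt_project(P_true, Xw) for Xw in world])
P_est = dlt_calibrate(world, image)
probe = np.array([0.3, -0.2, 5.5])
```

Because this model is purely linear-projective, it cannot represent the nonlinear, refraction-induced deformation underwater, which is the intrinsic shortcoming the review discusses.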

  21. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low-Cost Cameras

    NASA Astrophysics Data System (ADS)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction with a metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest for archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent fruition by users after delivery through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, digital low-cost (off-the-shelf) cameras. Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD and GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and a comparison of the models derived from the different cameras was performed. The different potentialities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea-bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  2. Schematic diagram of light path in Wide Field Planetary Camera 2

    NASA Image and Video Library

    1993-03-15

    S93-33258 (15 Mar 1993) --- An optical schematic diagram of one of the four channels of the Wide Field/Planetary Camera-2 (WF/PC-2) shows the path taken by beams from the Hubble Space Telescope (HST) before an image is formed at the camera's charge-coupled devices. A team of NASA astronauts will pay a visit to the HST later this year, carrying with them the new WF/PC-2 to replace the one currently on the HST. The Jet Propulsion Laboratory in Pasadena, California has been working on the replacement system for several months. See NASA photo S93-33257 for a close-up view of tiny articulating mirrors designed to realign incoming light in order to make certain the beams fall precisely in the middle of the secondary mirrors.

  3. Minimising back reflections from the common path objective in a fundus camera

    NASA Astrophysics Data System (ADS)

    Swat, A.

    2016-11-01

    Eliminating back reflections is critical in the design of a fundus camera with an internal illumination system. As very little light is reflected from the retina, even excellent antireflective coatings do not sufficiently suppress ghost reflections, so the number of surfaces shared by the illumination and imaging paths must be minimised. Typically a single aspheric objective is used. In this paper an alternative approach, an objective with all-spherical surfaces, is presented. As more surfaces are required, a more sophisticated method is needed to control back reflections. A typical back-reflection analysis treats subsequent objective surfaces as mirrors and traces reflections from the objective surfaces back through the imaging path. This approach can be applied in both sequential and non-sequential ray tracing. It is good enough for a system check but not well suited to the early optimisation stage of the optical design. Standard ghost-control merit-function operands are also available in sequential ray tracing, for example in Zemax, but these do not allow a back ray-trace along an alternative optical path (illumination vs. imaging). What is proposed in this paper is a complete method for incorporating ghost-reflected energy into the ray-tracing merit function in sequential mode, which is more efficient in the optimisation process. Although developed for the specific case of a fundus camera, the method may be useful in a wider range of applications where ghost control is critical.

  4. Richardson-Lucy deblurring for the star scene under a thinning motion path

    NASA Astrophysics Data System (ADS)

    Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining

    2015-05-01

    This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the significance of accurately estimating the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene with a thinning motion-blur model that describes the camera's path. Building the blur kernel from the thinned motion path is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind estimation of a kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star-point trajectory and hence the blur kernel of the motion-blurred star image. We then detail how this motion-blur model is incorporated into the Richardson-Lucy (RL) deblurring algorithm and demonstrate its overall effectiveness. Compared with a conventionally estimated blur kernel, experimental results show that using the thinning algorithm to obtain the motion-blur kernel has lower complexity, higher efficiency and better accuracy, which contributes to better restoration of motion-blurred star images.
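    The RL deconvolution step this abstract builds on can be sketched as follows; this is the generic Richardson-Lucy iteration with a known, externally supplied blur kernel, not the authors' thinning-based kernel estimation:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, iterations=30):
        """Richardson-Lucy deconvolution with a known blur kernel (PSF)."""
        psf = psf / psf.sum()                    # normalize the kernel
        psf_flipped = psf[::-1, ::-1]            # adjoint of the convolution
        estimate = np.full_like(blurred, 0.5)    # flat initial guess
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(reblurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        return estimate
    ```

    In the paper's setting the `psf` would come from the thinned star trajectory; here it can be any normalized kernel.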

  5. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction that uses a binocular stereo vision system composed of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the matches of interest points between the inserted image and the previous one. To refine the new projection matrix and the new 3D points, a local bundle adjustment is performed. Once all projection matrices have been estimated and the matches between consecutive images detected, a sparse Euclidean 3D reconstruction is obtained. To increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this kind of camera movement, was applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
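    The triangulation step mentioned in this abstract (recovering 3D points from matched interest points in two views) is commonly done with the linear DLT method; a minimal sketch, where `triangulate` and the test geometry are illustrative rather than the authors' implementation:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point from two views.
        P1, P2: 3x4 projection matrices; x1, x2: matched 2D image points."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)   # null-space of A is the homogeneous point
        X = Vt[-1]
        return X[:3] / X[3]           # dehomogenize
    ```

    The same routine is reused each time a new image is inserted, with the newly estimated projection matrix paired against the previous one.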

  6. A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature

    NASA Astrophysics Data System (ADS)

    Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min

    2017-05-01

    This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The matching of main-lens and microlens f-numbers is used as an additional constraint for the calibration. The geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method is then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. The experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of flame temperature.
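    The final least-squares step (solving the ray-tracing system built from Planck's law) reduces to an overdetermined linear solve. A generic QR-based least-squares sketch, with the radiative-transfer matrix `A` left abstract (the paper's actual system matrix is not reproduced here):

    ```python
    import numpy as np

    def lstsq_qr(A, b):
        """Solve min ||Ax - b|| via QR factorization (A with full column rank)."""
        Q, R = np.linalg.qr(A)               # A = QR, R upper triangular
        return np.linalg.solve(R, Q.T @ b)   # back-substitute R x = Q^T b
    ```

    In the temperature-reconstruction context, `b` would hold measured radiance values and `x` the unknown per-voxel emission terms, which are then inverted to temperature through Planck's law.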

  7. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined from the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve the spatial orientation. The proposed techniques could considerably improve the usability and spatial orientation for the user during interventional device navigation.
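    The Kalman-filtered virtual camera path described above can be sketched with a simple random-walk state model; this minimal per-axis filter is an assumption for illustration (`q` and `r` are assumed noise parameters; the paper's actual state model is not specified here):

    ```python
    import numpy as np

    def kalman_smooth_path(measurements, q=0.01, r=0.5):
        """Smooth a sequence of 3D camera positions with a random-walk
        Kalman filter. q: process noise variance, r: measurement noise."""
        x = measurements[0].astype(float)    # state estimate (3D position)
        p = 1.0                              # estimate variance (scalar, per axis)
        out = [x.copy()]
        for z in measurements[1:]:
            p = p + q                        # predict (random-walk motion model)
            k = p / (p + r)                  # Kalman gain
            x = x + k * (z - x)              # update toward the measurement
            p = (1.0 - k) * p
            out.append(x.copy())
        return np.array(out)
    ```

    Feeding noisy device-tip positions through such a filter yields a visibly smoother virtual endoscopic camera trajectory than using the raw tip positions directly.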

  8. Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods

    NASA Astrophysics Data System (ADS)

    Whyte, Refael; Streeter, Lee; Cree, Michael J.; Dorrington, Adrian A.

    2015-11-01

    Time of flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous-wave light source and measure the returning modulation envelope's phase and amplitude. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal pattern from which a closed-form solution for the direct and global returns can be computed in nine frames, under the constraint that the global return has a spatially lower frequency than the illuminated pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested, whereas for the full-field measurement it is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time of flight range camera technique.
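    The paper's nine-frame sinusoidal estimator is closed-form; a related and simpler illustration is the classical max/min direct/global separation for shifted high-frequency patterns (Nayar et al.), shown here as a sketch and not the authors' exact method:

    ```python
    import numpy as np

    def separate_direct_global(frames):
        """Max/min direct-global separation from frames lit by shifted
        high-frequency illumination patterns (frames: n x H x W).
        Assumes the global term is spatially low-frequency, so each pixel
        receives roughly half the global light under any pattern shift."""
        lmax = frames.max(axis=0)     # ~ direct + global / 2
        lmin = frames.min(axis=0)     # ~ global / 2
        direct = lmax - lmin
        glob = 2.0 * lmin
        return direct, glob
    ```

    In the ToF setting, the same separation idea is applied to the complex modulation-envelope measurements rather than plain intensities.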

  9. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to hold back the water, build the tunnel intake inside it, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect to the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetry is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for computing the motion parameters are proposed: performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, while the relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.
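    The 3D conformal (similarity) transformation between corresponding point sets can be estimated in closed form with the Umeyama/Procrustes solution; a sketch, not necessarily the authors' exact formulation:

    ```python
    import numpy as np

    def conformal_transform(src, dst):
        """Estimate scale s, rotation R, translation t with dst ≈ s*R@src + t
        (closed-form Umeyama solution for corresponding 3D point sets)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        sc, dc = src - mu_s, dst - mu_d          # centered point sets
        H = sc.T @ dc                            # cross-covariance (unnormalized)
        U, S, Vt = np.linalg.svd(H)
        D = np.eye(3)
        D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
        R = Vt.T @ D @ U.T
        s = np.trace(np.diag(S) @ D) / (sc ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```

    Applied frame by frame to the tracked target coordinates, this yields the rigid motion (plus scale check) of the ETSP model at each epoch.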

  10. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    PubMed Central

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
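    One plausible reading of the "three principal component points" step is to summarize each image's feature-point set by its centroid plus one point along each principal axis; this interpretation is an assumption, sketched below:

    ```python
    import numpy as np

    def principal_component_points(pts):
        """Compress a 2D feature-point set (n x 2) into three summary points:
        the centroid plus one point along each principal axis, scaled by the
        standard deviation along that axis (an assumed reading of the step)."""
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        p_major = mu + np.sqrt(vals[1]) * vecs[:, 1]
        p_minor = mu + np.sqrt(vals[0]) * vecs[:, 0]
        return np.vstack([mu, p_major, p_minor])
    ```

    Comparing these three summary points between images is far cheaper than matching full feature sets, which is what makes the key-image selection fast.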

  11. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  12. Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.

    PubMed

    Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias

    2016-12-01

    Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as the most suitable approach for fast acquisition and efficient image reconstruction of extended sources.
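    The ML-EM update used for time-integrated count values has a standard multiplicative form; a generic sketch with a dense system matrix (the list-mode variant replaces binned counts with per-event terms):

    ```python
    import numpy as np

    def mlem(A, counts, iters=500):
        """ML-EM for emission tomography. A: (n_detectors x n_voxels) system
        matrix of detection probabilities; counts: measured projection data."""
        sens = A.sum(axis=0)                       # per-voxel sensitivity
        x = np.ones(A.shape[1])                    # uniform initial activity
        for _ in range(iters):
            proj = A @ x                           # forward projection
            ratio = counts / np.maximum(proj, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
        return x
    ```

    The update preserves nonnegativity and monotonically increases the Poisson likelihood, which is why it is the workhorse for both SAS and CD reconstructions.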

  13. Low Statistics Reconstruction of the Compton Camera Point Spread Function in 3D Prompt-γ Imaging of Ion Beam Therapy

    NASA Astrophysics Data System (ADS)

    Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy

    2013-10-01

    The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma camera, but requires more sophisticated image reconstruction techniques. In this work, we simulate low-statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach, and an iterative list-mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, which may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However, this direction is of less interest in hadrontherapy, where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.

  14. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    PubMed

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.

  15. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle-avoidance stages are essential for CE-3's safe soft landing, so a precise spacecraft trajectory in these stages is of great significance for verifying the orbital control strategy, optimizing the orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last only about 25 s, it is difficult to resolve the spacecraft's subtle movements with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3, owing to technical advantages such as independence from a lunar-gravity-field spacecraft kinetic model, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed by Single Image Space Resection (SISR) from landing camera images from frame 3092 to frame 3180, spanning about 9 s. The results show that CE-3's subtle movements during the hovering stage are revealed by the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.

  16. Reconstruction of truncated TCT and SPECT data from a right-angle dual-camera system for myocardial SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.

    1996-12-31

    We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms, with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation-compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP-reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation-compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and more accurate than those obtained using the FBP. However, the TCT images obtained with support show fewer image artifacts than those without. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest. The BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation-compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.

  17. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background: The ability to discriminate between two similar or progressively dissimilar colours is important for many animals, as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images, and digital imaging devices with many more than three channels (hyperspectral cameras), have recently been developed to produce image spectrophotometers that recover reflectance spectra at individual pixel locations. We compare a linearised RGB camera and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings: (1) The colour discrimination power of the RGB device depends on the colour similarity between the samples, whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently of its chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion: (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision, like the perception of certain types of camouflage and colour constancy, where multiple narrow-band sensors increase resolution. PMID:25965264

  18. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized, stereo-calibrated time series of images that can be used for 3D cloud-mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from northeastern, northwestern, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain a 3D reconstruction by triangulation. The 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from the surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.

  19. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    PubMed

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we investigate the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment, and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle-fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and far superior to the classical DLT results (9.74 mm). Among the swimmers, the hand trajectories of the expert swimmer in each style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
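    The classical DLT calibration referenced here (static points) estimates the 3x4 projection matrix from 3D-2D correspondences; a minimal sketch without the Hartley normalization that a production calibration would add:

    ```python
    import numpy as np

    def dlt_resection(X, x):
        """Classical DLT: recover the 3x4 projection matrix P from n >= 6
        3D control points X (n x 3) and their 2D images x (n x 2)."""
        rows = []
        for (Xw, Yw, Zw), (u, v) in zip(X, x):
            Xh = [Xw, Yw, Zw, 1.0]
            # Each correspondence contributes two linear equations in P's entries.
            rows.append([*Xh, 0, 0, 0, 0, *[-u * c for c in Xh]])
            rows.append([0, 0, 0, 0, *Xh, *[-v * c for c in Xh]])
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        return Vt[-1].reshape(3, 4)   # null-space vector, reshaped to P
    ```

    With calibrated control points on a frame, the same routine would be run once per submerged camera; bundle adjustment or Zhang's method then refines the result.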

  20. Plenoptic particle image velocimetry with multiple plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2018-07-01

    Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single-camera configuration, which allows the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution through the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, comprising the volumetric calibration and four reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the ‘cigar’-like elongation associated with the single-camera system. In addition, it is found that adding a third camera provides minimal improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single- and two-camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally on a ring vortex and comparisons are drawn between the four presented reconstruction algorithms: MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability; filtered refocusing is able to produce the desired structure, albeit with more noise and variability, while integral refocusing struggled to produce a coherent vortex ring.

  1. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements for obtaining results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired with each camera from distances of approximately 5 to 10 m, depending on its field of view. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, the latter taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate the applicability of the approach to geotechnical problems.

  2. Holographic motion picture camera with Doppler shift compensation

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera for producing three-dimensional images by means of an elliptical optical system is reported. A motion compensator provided in one of the beam paths (the object or reference beam path) enables the camera to photograph faster-moving objects.

  3. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  4. Light-field camera-based 3D volumetric particle image velocimetry with dense ray tracing reconstruction technique

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio

    2017-07-01

    This paper presents a dense ray tracing reconstruction technique for single light-field camera-based particle image velocimetry. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using the multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio and velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both DRT-MART and MART approaches were applied to measure the velocity field of a low speed jet flow, which revealed that, for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.

  5. A novel super-resolution camera model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction for single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device, such as a piezoelectric ceramic actuator, is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, reflecting both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences contain different redundant information and certain prior information, making it possible to restore a super-resolution image faithfully and effectively. A sampling analysis is used to derive the SR reconstruction principle and the theoretically achievable degree of resolution improvement. A learning-based SR algorithm is used to reconstruct single images, and a variational Bayesian algorithm, which models the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework, is simulated to reconstruct the low-resolution images with random displacements. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can double the image resolution, yielding higher-resolution images with currently available hardware.
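
    The principle behind combining randomly displaced LR frames can be sketched with classical shift-and-add fusion. This is an idealized illustration (known integer sub-pixel phases on the HR grid, no blur or noise, hypothetical `shift_and_add` helper), not the variational Bayesian method of the paper:

```python
import numpy as np

def shift_and_add(lr_frames, shifts, factor):
    """Place each low-res frame onto a high-res grid at its (dy, dx)
    sub-pixel offset (in HR pixels) and average overlapping samples."""
    h, w = lr_frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hr)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        hr[dy::factor, dx::factor] += frame
        count[dy::factor, dx::factor] += 1
    count[count == 0] = 1
    return hr / count

# 2x SR from four frames covering the four sub-pixel phases
hr_truth = np.arange(64, dtype=float).reshape(8, 8)
frames = [hr_truth[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
shifts = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
rec = shift_and_add(frames, shifts, 2)
print(np.allclose(rec, hr_truth))  # True — exact under these ideal assumptions
```

    With random (rather than designed) displacements, the same redundancy is exploited, but the shifts must first be estimated by sub-pixel registration, as the abstract notes.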

  6. Path integration guided with a quality map for shape reconstruction in the fringe reflection technique

    NASA Astrophysics Data System (ADS)

    Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu

    2018-04-01

    A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of gradient data locally, and it functions as a guideline for the integration path. The presented method can be employed for wavefront estimation from slope data over generally shaped surfaces, with slope noise equivalent to that of practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by the irregular shape of the surface under test. The performance of QMPI is discussed through simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration, in contrast to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with demanding precision requirements.
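
    Quality-guided path integration can be sketched as a priority-queue flood fill: pixels are integrated in order of decreasing quality, each from an already-solved neighbour. The `quality_guided_integrate` helper below is an illustrative assumption (uniform quality map, exact slope data), not the authors' QMPI implementation:

```python
import heapq
import numpy as np

def quality_guided_integrate(sx, sy, quality, seed=(0, 0)):
    """Integrate a gradient field (sx = dh/dx, sy = dh/dy) into heights,
    visiting pixels in order of decreasing quality from a seed point."""
    h, w = sx.shape
    height = np.zeros((h, w))
    done = np.zeros((h, w), dtype=bool)
    done[seed] = True
    heap = []
    def push_neighbours(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not done[ny, nx]:
                heapq.heappush(heap, (-quality[ny, nx], ny, nx, y, x))
    push_neighbours(*seed)
    while heap:
        _, y, x, py, px = heapq.heappop(heap)
        if done[y, x]:
            continue
        # one-sided trapezoidal step from the already-solved neighbour
        if y == py:
            step = 0.5 * (sx[y, x] + sx[py, px]) * (x - px)
        else:
            step = 0.5 * (sy[y, x] + sy[py, px]) * (y - py)
        height[y, x] = height[py, px] + step
        done[y, x] = True
        push_neighbours(y, x)
    return height

# a plane h = 2x + 3y has exact slopes sx = 2, sy = 3
ys, xs = np.indices((10, 12))
truth = 2.0 * xs + 3.0 * ys
q = np.ones((10, 12))
rec = quality_guided_integrate(np.full((10, 12), 2.0),
                               np.full((10, 12), 3.0), q)
print(np.allclose(rec, truth))  # True
```

    With a non-uniform quality map, noisy or unreliable slope samples are visited last, so their errors do not propagate along the integration path — the core idea the abstract attributes to QMPI.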

  7. 3D Reconstruction of Static Human Body with a Digital Camera

    NASA Astrophysics Data System (ADS)

    Remondino, Fabio

    2003-01-01

    The 3D reconstruction and modeling of real humans is one of the most challenging problems in the field and a topic of great interest. Human models are used for movies, video games, or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human body is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally the 3D coordinates of the matched points are computed with a mean accuracy of ca. 2 mm by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
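
    The forward ray intersection in the final step can be posed as a small linear least-squares problem: find the point minimising the summed squared perpendicular distances to the viewing rays. The `intersect_rays` helper below is an illustrative assumption, not the paper's code:

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of 3D rays: solve
    (sum_k P_k) x = sum_k P_k o_k, where P_k projects orthogonally
    to ray direction d_k."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# three camera centres, all with rays through the point (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 5.0, 0.0])]
dirs = [target - o for o in origins]
print(intersect_rays(origins, dirs))  # ≈ [1. 2. 3.]
```

    With noisy image measurements the rays no longer meet exactly, and the same formulation returns the maximum-likelihood point under isotropic ray noise.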

  8. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera…
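
    The MART update used for the tomographic reconstruction can be sketched on a tiny linear system. The `mart` function and the 2-voxel example are illustrative assumptions (a generic MART, not the plenoptic-specific implementation of the paper):

```python
import numpy as np

def mart(W, g, n_iter=50, mu=1.0):
    """Multiplicative ART: find non-negative f with W @ f ≈ g.
    W[j, i] is the weight of voxel i along line of sight j; each
    measurement multiplicatively corrects the voxels on its ray."""
    f = np.ones(W.shape[1])
    for _ in range(n_iter):
        for j in range(W.shape[0]):
            proj = W[j] @ f
            if proj > 0 and g[j] > 0:
                f *= (g[j] / proj) ** (mu * W[j])
            elif g[j] == 0:
                f[W[j] > 0] = 0.0   # a zero measurement zeroes its ray
    return f

# toy 2-voxel, 3-ray system with consistent data
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f_true = np.array([2.0, 3.0])
g = W @ f_true
print(mart(W, g))  # ≈ [2. 3.]
```

    The multiplicative form keeps the intensity field non-negative by construction, which is one reason MART is favoured for tomographic PIV over additive ART variants.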

  9. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.

  10. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
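
    The spanning-tree refinement step can be illustrated with a plain maximum spanning tree over pairwise connectivity weights (e.g. shared tie-point counts). This sketch uses ordinary Kruskal with union-find; the paper's hierarchical degree-bounded variant and the 3-view guarantee are omitted, and the `max_spanning_tree` helper and edge weights are hypothetical:

```python
def max_spanning_tree(n_nodes, edges):
    """Kruskal's algorithm with edges taken in descending weight order.
    `edges` is a list of (weight, u, v); returns the chosen edges."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# 4 images; weights = number of shared tie points between image pairs
edges = [(120, 0, 1), (90, 1, 2), (100, 0, 2), (30, 2, 3), (20, 1, 3)]
print(max_spanning_tree(4, edges))  # [(120, 0, 1), (100, 0, 2), (30, 2, 3)]
```

    Restricting tie-point matching to the edges kept by such a tree (plus the extra edges the paper's method retains) is what yields the reported reduction in matched image pairs.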

  11. A Comparison of Four-Image Reconstruction Algorithms for 3-D PET Imaging of MDAPET Camera Using Phantom Data

    NASA Astrophysics Data System (ADS)

    Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok

    2004-10-01

    We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were the ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for the MDAPET camera by comparing the noise level of images, contrast recovery, and hot lesion detection, and by visually comparing images. We found that overall the 3D-OSEM algorithm, especially when images were post-filtered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, detection of hot spots, and reproduction of brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance (±5°), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
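
    The ordered-subsets EM update at the heart of 3D-OSEM can be sketched on a tiny system: the MLEM multiplicative update is applied to row subsets of the system matrix in turn. The `osem` function and the toy system matrix are generic illustrations, not the MDAPET reconstruction code:

```python
import numpy as np

def osem(W, g, n_subsets=2, n_iter=200):
    """Ordered-subsets EM for g ≈ W @ f, f >= 0. Each sub-iteration
    applies the EM update using only one subset of the measurement rows,
    normalised by that subset's sensitivity (column sums)."""
    f = np.ones(W.shape[1])
    subsets = [np.arange(s, W.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            Ws = W[rows]
            ratio = g[rows] / np.maximum(Ws @ f, 1e-12)
            f *= (Ws.T @ ratio) / np.maximum(Ws.sum(axis=0), 1e-12)
    return f

# toy 2-pixel, 4-ray system with consistent (noiseless) data
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 1.0]])
f_true = np.array([4.0, 2.0])
g = W @ f_true
print(osem(W, g))
```

    Cycling through subsets gives OSEM its well-known speedup over plain MLEM (roughly proportional to the number of subsets per pass), which is why it is practical for fully 3-D PET data.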

  12. Camera calibration for multidirectional flame chemiluminescence tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

    Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. For high accuracy reconstructions, it requires that extrinsic parameters (the positions and orientations) and intrinsic parameters (especially the image distances) of cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method was presented for FCT, and a 3-D calibration pattern was designed to solve the parameters. The precision of the method was evaluated by reprojections of feature points to cameras with the calibration results. The maximum root mean square error of the feature points' position is 1.42 pixels and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the camera's calibration results.

  13. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two synchronized high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.

  14. Cloud photogrammetry with dense stereo for fisheye cameras

    NASA Astrophysics Data System (ADS)

    Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens

    2016-11-01

    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows recovery of a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image; however, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.

  15. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.

  16. Reconstruction of recycling flux from synthetic camera images, evaluated for the Wendelstein 7-X startup limiter

    NASA Astrophysics Data System (ADS)

    Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team

    2017-12-01

    The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, and the reconstruction of recycling from synthetic observations is demonstrated.

  17. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  18. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  19. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
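
    The fitting of measured panoramic azimuths to orthoimage features can be illustrated, in 2D, as finding the position whose predicted azimuths to known landmarks best match the measured angles. The brute-force `locate_camera` below is a hypothetical sketch, not the authors' least-squares adjustment:

```python
import numpy as np

def locate_camera(landmarks, azimuths, search=((-5, 5), (-5, 5)), step=0.1):
    """Grid search for the 2D camera position minimising the squared
    circular residuals between predicted and measured azimuths."""
    best, best_err = None, np.inf
    for x in np.arange(*search[0], step):
        for y in np.arange(*search[1], step):
            pred = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x)
            resid = np.angle(np.exp(1j * (pred - azimuths)))  # wrap to [-pi, pi]
            err = float(np.sum(resid ** 2))
            if err < best_err:
                best, best_err = (x, y), err
    return np.array(best)

# four known landmark positions and the azimuths measured from (1.2, -0.8)
landmarks = np.array([[5.0, 0.0], [0.0, 7.0], [-4.0, -3.0], [6.0, 6.0]])
true_pos = np.array([1.2, -0.8])
az = np.arctan2(landmarks[:, 1] - true_pos[1], landmarks[:, 0] - true_pos[0])
print(locate_camera(landmarks, az))  # ≈ [ 1.2 -0.8]
```

    In practice the residuals are minimised by proper least-squares techniques rather than a grid, and the camera orientation (North azimuth) is solved simultaneously, as described above.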

  20. Reconstruction of an effective magnon mean free path distribution from spin Seebeck measurements in thin films

    NASA Astrophysics Data System (ADS)

    Chavez-Angel, E.; Zarate, R. A.; Fuentes, S.; Guo, E. J.; Kläui, M.; Jakob, G.

    2017-01-01

    A thorough understanding of the mean-free-path (MFP) distribution of the energy carriers is crucial for engineering and tuning the transport properties of materials. In this context, a significant body of work has investigated the phonon and electron MFP distributions; however, similar studies of the magnon MFP distribution have not yet been carried out. In this work, we used thickness-dependent measurements of the longitudinal spin Seebeck effect (LSSE) of yttrium iron garnet films to reconstruct the cumulative distribution of an SSE-related effective magnon MFP. Using the experimental data reported by Guo et al (2016 Phys. Rev. X 6 031012), we adapted the phonon MFP reconstruction algorithm proposed by Minnich (2012 Phys. Rev. Lett. 109 205901) and applied it to magnons. The reconstruction showed that magnons with different MFPs contribute differently to the total LSSE and that the effective magnon MFP distribution spreads far beyond its typical averaged values.

  1. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map, allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology, utilising stereoscopic cameras, provides automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment demonstrated the effectiveness of this assistive technology.
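
    The SAD block-matching step can be sketched as follows. The naive `sad_disparity` below assumes rectified grayscale images and is an illustrative stand-in; real-time implementations are heavily optimised:

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Block-matching stereo: for each pixel of the left image, the
    horizontal shift d that minimises the Sum of Absolute Differences
    between a window around (y, x) and the window at (y, x - d) in the
    right image."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# synthetic rectified pair: the right image is the left shifted 4 px left
rng = np.random.default_rng(1)
left = rng.uniform(0, 255, size=(32, 48))
right = np.roll(left, -4, axis=1)
d = sad_disparity(left, right, block=5, max_disp=8)
print(np.median(d[8:-8, 16:-8]))  # 4.0
```

    The disparity values then map to depth via the camera baseline and focal length, which is the geometric projection step the abstract describes next.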

  2. Robust reconstruction of time-resolved diffraction from ultrafast streak cameras

    PubMed Central

    Badali, Daniel S.; Dwayne Miller, R. J.

    2017-01-01

    In conjunction with ultrafast diffraction, streak cameras offer an unprecedented opportunity for recording an entire molecular movie with a single probe pulse. This is an attractive alternative to conventional pump-probe experiments and opens the door to studying irreversible dynamics. However, due to the “smearing” of the diffraction pattern across the detector, the streaking technique has thus far been limited to simple mono-crystalline samples and extreme care has been taken to avoid overlapping diffraction spots. In this article, this limitation is addressed by developing a general theory of streaking of time-dependent diffraction patterns. Understanding the underlying physics of this process leads to the development of an algorithm based on Bayesian analysis to reconstruct the time evolution of the two-dimensional diffraction pattern from a single streaked image. It is demonstrated that this approach works on diffraction peaks that overlap when streaked, which not only removes the necessity of carefully choosing the streaking direction but also extends the streaking technique to be able to study polycrystalline samples and materials with complex crystalline structures. Furthermore, it is shown that the conventional analysis of streaked diffraction can lead to erroneous interpretations of the data. PMID:28653022

  3. Non-iterative volumetric particle reconstruction near moving bodies

    NASA Astrophysics Data System (ADS)

    Mendelson, Leah; Techet, Alexandra

    2017-11-01

    When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures visibility of regions of interest in the flow field in a subset of cameras. We evaluate the performance of non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially-occluded regions. We show that when partial occlusions are present, the quality and availability of 3D tracer particle information depends on the number of cameras and reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstruction of the body's 3D visual hull, and a minimum line-of-sight algorithm. This approach accounts for partial occlusions without performing separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., dorsal and anal fins) before and during interactions with the caudal tail.
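    The minimum line-of-sight idea, in which each voxel takes the minimum intensity over the cameras that can actually see it, can be sketched with toy axis-aligned orthographic cameras. The projection functions, single-particle volume, and mask layout below are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def minlos_reconstruct(images, masks, project, shape):
    """Minimum line-of-sight reconstruction with per-camera occlusion masks.

    images  : per-camera 2D intensity arrays
    masks   : per-camera boolean arrays, True where the body blocks the view
    project : per-camera functions mapping a voxel index to a pixel index

    Each voxel takes the minimum intensity over its unoccluded cameras, so
    regions seen by only a subset of cameras are still reconstructed
    without separate processing for each camera subset.
    """
    vol = np.zeros(shape)
    for idx in np.ndindex(*shape):
        vals = [img[proj(idx)]
                for img, mask, proj in zip(images, masks, project)
                if not mask[proj(idx)]]
        vol[idx] = min(vals) if vals else 0.0
    return vol
```

    With three orthogonal views of a single bright particle, only the voxel where all unmasked lines of sight intersect stays bright; masking one camera's pixel (mimicking the body's silhouette) still leaves the particle reconstructible from the remaining views.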

  4. Structured light optical microscopy for three-dimensional reconstruction of technical surfaces

    NASA Astrophysics Data System (ADS)

    Kettel, Johannes; Reinecke, Holger; Müller, Claas

    2016-04-01

    In microsystems technology, quality control of micro-structured surfaces with different surface properties plays an ever more important role. The process of quality control incorporates three-dimensional (3D) reconstruction of specular- and diffusively reflecting technical surfaces. Due to the demands on high measurement accuracy and data acquisition rates, structured light optical microscopy has become a valuable solution to this problem, providing high vertical and lateral resolution. However, 3D reconstruction of specularly reflecting technical surfaces still remains a challenge for optical measurement principles. In this paper we present a measurement principle based on structured light optical microscopy which enables 3D reconstruction of specular- and diffusively reflecting technical surfaces. It is realized using the two light paths of a stereo microscope equipped with different magnification levels. The right optical path of the stereo microscope is used to project structured light onto the object surface. The left optical path is used to capture the structured illuminated object surface with a camera. Structured light patterns are generated by a Digital Light Processing (DLP) device in combination with a high-power Light Emitting Diode (LED) and are realized as a matrix of discrete light spots to illuminate defined areas on the object surface. The introduced measurement principle is based on multiple, parallel-processed point measurements. Analysis of the measured Point Spread Function (PSF) by pattern recognition and model fitting algorithms enables the precise calculation of 3D coordinates. Using exemplary technical surfaces we demonstrate the successful application of our measurement principle.

  5. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
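    The simplest of the three reconstruction options named above, the Center-of-Gravity (Anger logic) estimate, is easy to sketch. The PMT layout and signal values below are invented for illustration only.

```python
import numpy as np

def center_of_gravity(signals, pmt_xy):
    """Anger-logic event reconstruction.

    The position estimate is the signal-weighted mean of the PMT centre
    coordinates; the energy estimate is the summed signal. Maximum
    Likelihood and Least Squares variants replace this closed form with a
    fit against the light response functions.
    """
    signals = np.asarray(signals, dtype=float)
    pmt_xy = np.asarray(pmt_xy, dtype=float)
    position = signals @ pmt_xy / signals.sum()
    energy = signals.sum()
    return position, energy
```

    In practice the per-PMT gains must be equalized first, which is exactly what the adaptive gain/response reconstruction from flood-field data provides.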

  6. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized using the reprojection error. However, for a system designed for 3D optical measurement, this error does not directly reflect the accuracy of 3D reconstruction. The presented method uses a planar calibration plate. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the coordinate frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method, and the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of 3D reconstruction and is therefore more suitable for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during calibration, and the accuracy is improved significantly.
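    The space intersection step, reconstructing a 3D point from its projections in the two cameras, can be illustrated with a linear (DLT) triangulation; comparing the result against known calibration-point coordinates gives the reconstruction error used as the optimization target. The toy camera matrices below are assumptions, not the paper's setup.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) two-view triangulation of one point from its projections.

    P1, P2 are 3x4 projection matrices; x1, x2 are the (u, v) image
    coordinates. The homogeneous 3D point is the null vector of the
    stacked constraint matrix.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy rig: identity intrinsics, second camera shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]
X_hat = triangulate(P1, P2, x1, x2)
reconstruction_error = np.linalg.norm(X_hat - X_true)
```

    Summing this error over all calibration points yields the objective that the proposed method minimizes instead of the reprojection error.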

  7. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.

  8. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  9. A higher-speed compressive sensing camera through multi-diode design

    NASA Astrophysics Data System (ADS)

    Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore

    2013-05-01

    Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.

  10. 3D reconstruction based on light field images

    NASA Astrophysics Data System (ADS)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure from motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a 3D sparse point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming, laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
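    The geometric core of the SFM step, estimating epipolar geometry from the SIFT matches, can be sketched with the classical normalized eight-point algorithm. This is a standard technique, not necessarily the exact solver used in the paper, and the synthetic two-view data in the accompanying check are assumptions.

```python
import numpy as np

def eight_point_F(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix from
    N >= 8 matched points x1, x2 given as (N, 2) arrays."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each match contributes one row of the epipolar constraint p2' F p1 = 0.
    A = np.column_stack([p2[:, [0]] * p1, p2[:, [1]] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                    # undo the normalization
    return F / np.linalg.norm(F)
```

    From the fundamental (or essential) matrix, SFM recovers relative pose and triangulates the matched features into the sparse point cloud described above.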

  11. Development of X-ray CCD camera based X-ray micro-CT system

    NASA Astrophysics Data System (ADS)

    Sarkar, Partha S.; Ray, N. K.; Pal, Manoj K.; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y.; Sinha, A.; Gadkari, S. C.

    2017-02-01

    Availability of microfocus X-ray sources and high-resolution X-ray area detectors has made it possible for high-resolution microtomography studies to be performed outside the purview of synchrotron facilities. In this paper, we present work towards the use of an external shutter on a high-resolution microtomography system using an X-ray CCD camera as the detector. During micro computed tomography experiments, the X-ray source is continuously on, and owing to the readout mechanism of the CCD detector electronics, the detector also registers photons reaching it during the readout period. This introduces a shadow-like pattern in the image, known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system incorporates a synchronized shutter just in front of the X-ray source. The shutter is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, which is reflected in the reconstructed images.

  12. Increasing signal-to-noise ratio of reconstructed digital holograms by using light spatial noise portrait of camera's photosensor

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.

    2015-01-01

    Digital holography is a technique that includes recording of an interference pattern with a digital photosensor, processing of the obtained holographic data, and reconstruction of the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in fields such as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor light spatial noise portrait (LSNP) is proposed to increase the SNR of reconstructed digital holograms. To verify the proposed method, numerical experiments with computer-generated Fresnel holograms with resolution equal to 512×512 elements were performed, simulating shot registration with a Canon EOS 400D digital camera. It is shown that use of the frame-averaging method alone increases SNR only up to 4 times, with further increase limited by spatial noise. Application of the LSNP compensation method in conjunction with frame averaging yields a 10-fold SNR increase. This value was obtained for an LSNP measured with 20% error; with a more accurately measured LSNP, SNR can be increased up to 20 times.
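    The effect reported above, frame averaging hitting a fixed-pattern-noise floor that LSNP compensation then removes, can be reproduced in a toy numerical model. The image size, noise levels, and frame count are illustrative assumptions, not the paper's parameters, and the LSNP is taken as perfectly known here.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.uniform(0.2, 1.0, (64, 64))     # stand-in reconstructed intensity
lsnp = rng.normal(0.0, 0.05, (64, 64))       # fixed light spatial noise portrait

def shoot(n_frames):
    """Simulate camera shots: signal + fixed LSNP + temporal shot noise."""
    return signal + lsnp + rng.normal(0.0, 0.1, (n_frames, 64, 64))

def snr_db(estimate):
    """SNR of an estimate against the known clean signal, in dB."""
    err = estimate - signal
    return 10 * np.log10((signal ** 2).mean() / (err ** 2).mean())

single = shoot(1)[0]
avg = shoot(50).mean(axis=0)   # averaging: temporal noise drops, LSNP floor stays
comp = avg - lsnp              # LSNP compensation removes the fixed pattern
```

    With these toy numbers, averaging improves on a single frame but saturates at the spatial-noise floor, and subtracting the noise portrait lifts the SNR further, mirroring the 4x versus 10x to 20x behaviour reported in the abstract.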

  13. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    It has been suggested in prior work to draw 5-tuples from the list of tentative matches, ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches; thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling as in RANSAC but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly from each sampled epipolar geometry but from the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned, the use of camera trajectory estimates is quite wide. We have previously introduced a technique for measuring the size of camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and of object recognition for methods which use a geometric constraint, e.g. the ground plane, such as the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed.

  14. Algebraic Approach for Recovering Topology in Distributed Camera Networks

    DTIC Science & Technology

    2009-01-14

    not valid for camera networks. Spatial sampling of plenoptic function [2] from a network of cameras is rarely i.i.d. (independent and identically...coverage can be used to track and compare paths in a wireless camera network without any metric calibration information. In particular, these results can...edition, 2000. [14] A. Rahimi, B. Dunagan, and T. Darrell. Simultaneous calibration and tracking with a network of non-overlapping sensors. In

  15. Architecture of PAU survey camera readout electronics

    NASA Astrophysics Data System (ADS)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

    PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cable complete the path to connect to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  16. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called 'Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera-space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth-moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental 'path dependent' nature of nonholonomic kinematics. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  17. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame should be kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform between adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is used to weight the motion of adjacent grid cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos with casual jitter and parallax, and achieve good results.
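    The Gaussian weighting of neighbouring motions can be illustrated on a single 1-D motion parameter. This sketch smooths an accumulated camera-path signal with a Gaussian kernel; the kernel width and the synthetic jittery path are assumptions, and the real method applies such weights per grid cell across space and time.

```python
import numpy as np

def smooth_path(path, sigma=5.0, radius=15):
    """Gaussian-weighted smoothing of a per-frame camera-motion parameter.

    Each output sample is a Gaussian-weighted average of its neighbours,
    preserving the moving tendency while suppressing high-frequency
    jitter. Edge padding keeps the path length unchanged.
    """
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(path, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")
```

    The stabilizing warp for each frame is then the transform taking the original path sample to its smoothed counterpart, subject to the cropping and distortion constraints described above.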

  18. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  19. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the bidimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from that of the monochrome cameras usually employed in DH, a fact that has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical explanation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.

  20. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is an essential flow-field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), flow fields with large particle intensities can be measured from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume as a whole, the three-dimensional locations of the focused particles can be reconstructed. The cross correlations between the images captured by the cameras and the images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed method can effectively reconstruct the 3D particle fields.
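    The threshold-selection step, fitting a cubic to correlation-coefficient samples and taking the threshold at its peak, can be sketched directly. The sampled correlation curve in the demo is synthetic; in the real method each sample comes from reprojecting the thresholded volume and correlating against the camera images.

```python
import numpy as np

def optimal_threshold(thresholds, corrs):
    """Fit a cubic to correlation-vs-threshold samples and return the
    threshold at which the fitted curve peaks within the sampled range."""
    coeffs = np.polyfit(thresholds, corrs, 3)
    dense = np.linspace(thresholds[0], thresholds[-1], 1001)
    return dense[np.argmax(np.polyval(coeffs, dense))]

# Synthetic correlation curve peaking at a threshold of 0.6.
t = np.linspace(0.1, 1.0, 10)
corr = 0.9 - (t - 0.6) ** 2
t_opt = optimal_threshold(t, corr)
```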

  1. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    PubMed Central

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472
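    The two path estimates compared above are simple to write down: the SLP interpolates linearly between the measured entry and exit positions, while the CSP is a cubic (here in Hermite form) that also matches the entry and exit directions. The 2-D toy coordinates in the check are assumptions.

```python
import numpy as np

def straight_line_path(p_in, p_out, t):
    """SLP: linear interpolation between entry and exit positions,
    sampled at parameter values t in [0, 1]."""
    t = t[:, None]
    return (1 - t) * p_in + t * p_out

def cubic_spline_path(p_in, p_out, d_in, d_out, t):
    """CSP as a cubic Hermite curve matching the entry/exit positions and
    (suitably scaled) entry/exit direction vectors."""
    t = t[:, None]
    h00 = 2 * t ** 3 - 3 * t ** 2 + 1
    h10 = t ** 3 - 2 * t ** 2 + t
    h01 = -2 * t ** 3 + 3 * t ** 2
    h11 = t ** 3 - t ** 2
    return h00 * p_in + h10 * d_in + h01 * p_out + h11 * d_out
```

    When the measured entry and exit directions agree with the straight chord (no scattering), the CSP collapses to the SLP; multiple Coulomb scattering makes the directions differ, and the CSP bends toward the most likely path.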

  2. Super-resolved refocusing with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu

    2011-03-01

    This paper presents an approach to enhance the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as an equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super-resolution algorithms. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing methods and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as peak signal-to-noise ratio (PSNR).

  3. Variance-reduction normalization technique for a compton camera system

    NASA Astrophysics Data System (ADS)

    Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.

    2011-01-01

    For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection properties in the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiency of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. A variance-reduction technique (VRT) was studied as a normalization method for the Compton camera. For the VRT, Compton list-mode data of a planar uniform source at 140 keV were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered-subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery of uniformity in the reconstructed image.

  4. Tomographic reconstruction of an aerosol plume using passive multiangle observations from the MISR satellite instrument

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Davis, Anthony B.; Diner, David J.

    2016-12-01

    We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.
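    The abstract does not name its tomographic solver; as one concrete possibility, an algebraic reconstruction technique (ART) fits the description of recovering a gridded extinction field from line integrals along camera rays. The sketch below, with an invented random ray geometry standing in for the MISR viewing rays, is illustrative only.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=500, relax=1.0):
    """Kaczmarz-style ART: for each ray (row of A with measured line
    integral b), nudge the extinction field x along that ray so its
    reprojection matches the measurement."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x

# Toy slice: 9 grid cells and 20 rays with random weights standing in
# for ray/cell intersection lengths.
rng = np.random.default_rng(4)
A = rng.normal(size=(20, 9))
x_true = rng.uniform(0.0, 1.0, 9)
b = A @ x_true
x_hat = art_reconstruct(A, b)
```

    In the MISR setting, each of the nine view angles contributes one family of rays through the along-track slice, and the measured 672 nm optical thicknesses play the role of b.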

  5. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  6. Aircraft path planning for optimal imaging using dynamic cost functions

    NASA Astrophysics Data System (ADS)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

    Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller and more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path for acquiring imagery for structure-from-motion (SfM) reconstructions and for performing radiation surveys. This method allows SfM reconstructions to be carried out accurately and with minimal flight time, so that they can be executed efficiently. An assumption is made that 3D point cloud data are available prior to the flight. A discrete set of scan lines is proposed for the given area and scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach for image-based terrain mapping, which can efficiently perform a 3D reconstruction of a large area without the use of GPS data.
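
    The candidate-path generation described above can be sketched as a set of parallel scan lines over the survey area; the toy generator below alternates line direction to shorten transitions. It is a minimal illustration under assumed rectangular bounds, and it omits the visibility scoring against the prior point cloud and the dynamic cost functions.

```python
import numpy as np

def scan_lines(x_min, x_max, y_min, y_max, spacing):
    """Discrete set of parallel (lawnmower) scan lines over a rectangular
    survey area, alternating direction between successive lines."""
    ys = np.arange(y_min, y_max + 1e-9, spacing)
    lines = []
    for i, y in enumerate(ys):
        # Even-indexed lines run left-to-right, odd ones right-to-left.
        start, end = ((x_min, y), (x_max, y)) if i % 2 == 0 else ((x_max, y), (x_min, y))
        lines.append((start, end))
    return lines
```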

  7. A method for measuring aircraft height and velocity using dual television cameras

    NASA Technical Reports Server (NTRS)

    Young, W. R.

    1977-01-01

    A unique electronic optical technique, consisting of two closed circuit television cameras and timing electronics, was devised to measure an aircraft's horizontal velocity and height above ground without the need for airborne cooperative devices. The system is intended to be used where the aircraft has a predictable flight path and a height of less than 660 meters (2,000 feet) at or near the end of an air terminal runway, but is suitable for greater aircraft altitudes whenever the aircraft remains visible. Two television cameras, pointed at zenith, are placed in line with the expected path of travel of the aircraft. Velocity is determined by measuring the time it takes the aircraft to travel the measured distance between cameras. Height is determined by correlating this speed with the time required to cross the field of view of either camera. Preliminary tests with a breadboard version of the system and a small model aircraft indicate the technique is feasible.
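
    The geometry above reduces to two timing measurements: speed follows from the baseline between the cameras, and height from the time to cross one camera's field of view. The helper below is an illustrative reduction of that geometry under an assumed level-flight, directly-overhead pass; it is not the system's actual calibration.

```python
import math

def height_and_velocity(baseline_m, t_between_s, t_cross_s, fov_deg):
    """Aircraft ground speed and height from dual zenith-pointing cameras.

    baseline_m:  measured distance between the two cameras
    t_between_s: time for the aircraft to travel between the cameras
    t_cross_s:   time for the aircraft to cross one camera's field of view
    fov_deg:     full angular field of view of that camera
    """
    v = baseline_m / t_between_s                 # speed over the ground
    footprint = v * t_cross_s                    # ground width of the FOV at height h
    h = footprint / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    return v, h
```

    For example, a 100 m baseline crossed in 2 s gives 50 m/s; crossing the field of view in 2 s then fixes the height from the lens angle.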

  8. Depth profile measurement with lenslet images of the plenoptic camera

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, without the need to align and decode the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map: unlike the traditional method, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.

  9. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or the object moves further from the observer, increasing the resolution of the recording device does little to improve image quality. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or using adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object, as well as "superimposed" turbulence, at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic cameras and image processing algorithms are introduced. The proposed method can be applied to coherently as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and that a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.

  10. Mechanical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordby, Martin; Bowden, Gordon; Foss, Mike

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  11. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of the issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  12. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of the issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  13. The spatial resolution of a rotating gamma camera tomographic facility.

    PubMed

    Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R

    1983-12-01

    An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
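
    The role of the frequency cutoff can be illustrated with a ramp filter for convolution back-projection truncated at a fraction of the Nyquist frequency K_N. This is a generic Ram-Lak-style sketch, not one of the paper's four kernels; the 0.5 default simply reflects the observation above that frequencies between K_N/2 and K_N add little resolution.

```python
import numpy as np

def ramp_filter(n_bins, pixel_size_mm, cutoff_fraction=0.5):
    """Ramp frequency filter for convolution back-projection,
    truncated at cutoff_fraction * Nyquist."""
    freqs = np.fft.fftfreq(n_bins, d=pixel_size_mm)   # cycles/mm
    nyquist = 0.5 / pixel_size_mm                     # K_N
    filt = np.abs(freqs)
    filt[np.abs(freqs) > cutoff_fraction * nyquist] = 0.0
    return filt

def filter_projection(projection, pixel_size_mm, cutoff_fraction=0.5):
    """Apply the filter to one 1D projection (one sinogram row)."""
    f = ramp_filter(len(projection), pixel_size_mm, cutoff_fraction)
    return np.real(np.fft.ifft(np.fft.fft(projection) * f))
```

    Because the ramp zeroes the DC component, filtering a constant projection yields zero everywhere, which is a quick sanity check on the implementation.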

  14. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building a fine 3D model from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. A complete 3D model should contain detailed descriptions of both the appearance and the internal structure of a site, including its architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be constructed automatically. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones: Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area, and Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  15. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows all computational trouble spots to be identified beforehand and reliable, accurate optimization methods to be designed. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
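
    The key invariance claimed above, that the visual angle between two projection rays is unchanged by camera rotation but not by translation, can be checked numerically. The sketch below is a hypothetical illustration of that property, not the paper's visual angle disparity formulation.

```python
import numpy as np

def visual_angle(ray_a, ray_b):
    """Angle between two projection rays (camera-frame directions)."""
    a = ray_a / np.linalg.norm(ray_a)
    b = ray_b / np.linalg.norm(ray_b)
    return np.arccos(np.clip(a @ b, -1.0, 1.0))

def rotation_z(theta):
    """Rotation about the camera's optical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```

    Rotating the camera rotates both rays rigidly, so the dot product (and hence the angle) is preserved; translating the camera changes the rays relative to the scene points, so the angle changes and carries heading information.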

  16. Dual-camera design for coded aperture snapshot spectral imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.

  17. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  18. Light field reconstruction robust to signal dependent noise

    NASA Astrophysics Data System (ADS)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We built a prototype by hacking an off-the-shelf camera for data capture and proved the concept. The effectiveness of this method is validated with experiments on real captured data.

  19. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.

  20. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Because the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and proceeding to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction from the 'DSC imaging world' in the objective part is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.

  1. Trajectory Generation and Path Planning for Autonomous Aerobots

    NASA Technical Reports Server (NTRS)

    Sharma, Shivanjli; Kulczycki, Eric A.; Elfes, Alberto

    2007-01-01

    This paper presents global path planning algorithms for the Titan aerobot based on user defined waypoints in 2D and 3D space. The algorithms were implemented using information obtained through a planner user interface. The trajectory planning algorithms were designed to accurately represent the aerobot's characteristics, such as minimum turning radius. Additionally, trajectory planning techniques were implemented to allow for surveying of a planar area based solely on camera fields of view, airship altitude, and the location of the planar area's perimeter. The developed paths allow for planar navigation and three-dimensional path planning. These calculated trajectories are optimized to produce the shortest possible path while still remaining within realistic bounds of airship dynamics.

  2. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one, called the tracking camera, is used for tracking the positions of the MBVS, and the other, called the working camera, is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one working camera avoids a drawback of multi-camera networks, where variability in camera parameters and performance can significantly affect the accuracy and robustness of feature extraction and stereo matching. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  3. Measuring SO2 ship emissions with an ultra-violet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2013-11-01

    Over the last few years fast-sampling ultra-violet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical fluxes ~1-10 kg s-1) and natural sources (e.g. volcanoes; typical fluxes ~10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and fluxes. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and fluxes of SO2 (typical fluxes ~0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the fluxes and path concentrations can be retrieved in real-time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated, and fluxes determined by measuring ship plume speeds simultaneously using the camera or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (>10 Hz) from a single camera. Typical accuracies ranged from 10-30% in path concentration and 10-40% in flux estimation. Despite the ease of use and the ability to determine SO2 fluxes with the UV camera system, the limitations in accuracy and precision suggest that the system can only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method of monitoring ship emissions for regulatory purposes.
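
    Combining path concentrations with plume speed into a flux follows the standard UV-camera traverse method: integrate the column amounts along an image transect perpendicular to transport and multiply by the transport speed. The helper below is an illustrative sketch of that arithmetic with assumed units, not the authors' exact retrieval.

```python
def so2_flux_kg_per_s(path_concentrations_g_m2, pixel_width_m, plume_speed_m_s):
    """Estimate an SO2 flux from one image transect across the plume.

    path_concentrations_g_m2: SO2 column amounts (g/m^2) sampled along a
    line perpendicular to plume transport; pixel_width_m: ground size of
    one pixel; plume_speed_m_s: plume transport speed from the camera or
    independent wind data.
    """
    # Mass of SO2 per metre of plume length (g/m), then scaled by speed.
    column_total_g_m = sum(path_concentrations_g_m2) * pixel_width_m
    return column_total_g_m * plume_speed_m_s / 1000.0   # kg/s
```

    With plausible ship-plume numbers this lands in the ~0.01-0.1 kg s-1 range quoted above.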

  4. [Results of testing of MINISKAN mobile gamma-ray camera and specific features of its design].

    PubMed

    Utkin, V M; Kumakhov, M A; Blinov, N N; Korsunskiĭ, V N; Fomin, D K; Kolesnikova, N V; Tultaev, A V; Nazarov, A A; Tararukhina, O B

    2007-01-01

    The main results of engineering, biomedical, and clinical testing of MINISKAN mobile gamma-ray camera are presented. Specific features of the camera hardware and software, as well as the main technical specifications, are described. The gamma-ray camera implements a new technology based on reconstructive tomography, aperture encoding, and digital processing of signals.

  5. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm-1] to the curved CSP and MLP path estimates (5 lp cm-1). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
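
    A cubic spline path estimate can be sketched as a cubic Hermite curve matching the measured entry/exit positions and directions, with the direction vectors scaled by the chord length (a common CSP convention; the paper's exact parameterisation may differ). When entry and exit directions agree with the chord, the curve degenerates to the straight-line path, as expected.

```python
import numpy as np

def cubic_spline_path(p_in, d_in, p_out, d_out, n=50):
    """Cubic-spline path (CSP) estimate for a proton: a cubic Hermite
    curve through entry/exit positions p_in/p_out with entry/exit
    directions d_in/d_out, sampled at n points."""
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    L = np.linalg.norm(p_out - p_in)
    m0 = L * np.asarray(d_in, float) / np.linalg.norm(d_in)
    m1 = L * np.asarray(d_out, float) / np.linalg.norm(d_out)
    t = np.linspace(0.0, 1.0, n)[:, None]
    # Cubic Hermite basis functions.
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p_in + h10 * m0 + h01 * p_out + h11 * m1
```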

  6. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach that uses two ordinary cameras arranged in a special stereoscopic configuration, together with passive markers on a subject's body, to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they are not well approximated by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to recover the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.

  7. Fisheye Multi-Camera System Calibration for Surveying Narrow and Complex Architectures

    NASA Astrophysics Data System (ADS)

    Perfetti, L.; Polari, C.; Fassi, F.

    2018-05-01

    Narrow spaces and passages are not a rare encounter in cultural heritage; the shape and extension of those areas pose a serious challenge to any technique one may choose to survey their 3D geometry, especially techniques that make use of stationary instrumentation such as terrestrial laser scanning. The ratio between the spatial extension and the cross-section width of many corridors and staircases can easily lead to distortion/drift of the 3D reconstruction because of the propagation of uncertainty. This paper investigates the use of fisheye photogrammetry to produce the 3D reconstruction of such spaces and presents some tests to constrain the degrees of freedom of the photogrammetric network, thereby containing the drift of long data sets as well. The idea is to employ a multi-camera system composed of several fisheye cameras and to implement distance and relative orientation constraints, as well as pre-calibration of the internal parameters of each camera, within the bundle adjustment. For the beginning of this investigation, we used the NCTech iSTAR panoramic camera as a rigid multi-camera system. The case study of the Amedeo Spire of the Milan Cathedral, which encloses a spiral staircase, is the stage for all the tests. Comparisons have been made between the results obtained with the multi-camera configuration, the auto-stitched equirectangular images, and a data set obtained with a monocular fisheye configuration using a full-frame DSLR. Results show improved accuracy, down to millimetres, using a rigidly constrained multi-camera configuration.

  8. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    NASA Astrophysics Data System (ADS)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings in three dimensions (3D). The total weight of the Compton camera is less than 1.0 kg. Its gamma-ray sensor employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using multi-angle data measured with the Compton camera. We successfully obtained 3D radiation images of two 137Cs radioactive sources, with the image of the 9.2 MBq source appearing stronger than that of the 2.7 MBq source.
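
    Each back-projected event in such a camera is a cone whose half-opening angle follows from standard Compton kinematics applied to the energies deposited in the scattering and absorbing detectors. The helper below sketches that per-event step only; the full 3D multi-angle back-projection described above is not reproduced here.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Half-opening angle (radians) of the back-projection cone, from the
    energy deposited in the scatterer and the absorber.  Assumes full
    absorption, so the initial photon energy is the sum of the two."""
    e_total = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.acos(cos_theta)
```

    For a 662 keV 137Cs photon, depositing more energy in the scatterer corresponds to a larger scattering angle, which the sketch reproduces.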

  9. Mars Pathfinder Rover Traverse Image

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This figure contains an azimuth-elevation projection of the 'Gallery Panorama.' The original Simple Cylindrical mosaic has been reprojected to the inside of a sphere so that lines of constant azimuth radiate from the center and lines of constant elevation are concentric circles. This projection preserves the resolution of the original panorama. Overlaid onto the projected Martian surface is a delineation of the Sojourner rover traverse path during the 83 Sols (Martian days) of Pathfinder surface operations. The rover path was reproduced using IMP camera 'end of day' and 'Rover movie' image sequences and rover vehicle telemetry data as references.

  10. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusion, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the allowable speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  11. Contrast enhanced computed tomography and reconstruction of hepatic vascular system for transjugular intrahepatic portal systemic shunt puncture path planning.

    PubMed

    Qin, Jian-Ping; Tang, Shan-Hong; Jiang, Ming-De; He, Qian-Wen; Chen, Hong-Bin; Yao, Xin; Zeng, Wei-Zheng; Gu, Ming

    2015-08-28

    To describe a method for transjugular intrahepatic portal systemic shunt (TIPS) placement performed with the aid of contrast-enhanced computed tomography (CECT) and three-dimensional reconstructed vascular images (3D RVIs), and to assess its safety and effectiveness. Four hundred and ninety patients were treated with TIPS between January 2005 and December 2012. All patients underwent liver CECT and reconstruction of 3D RVIs of the right hepatic vein to portal vein (PV) prior to the operation. The 3D RVIs were carefully reviewed to plan the puncture path, from start point to target point, for the needle to pass through the PV in the TIPS procedure. The improved TIPS procedure was successful in 483 (98.6%) of the 490 patients. The number of punctures attempted was one in 294 (60%) patients, 2 to 3 in 147 (30%) patients, 4 to 6 in 25 (5.1%) patients and more than 6 in 17 (3.5%) patients. The procedure failed in the remaining seven patients. Of the 490 patients, 12 had punctures into an artery, 15 into the bile duct, eight into the gallbladder, and 18 through the liver capsule. Analysis of the portograms from the 483 successful cases indicated that the puncture points were all located distally to the PV bifurcation on anteroposterior images, while the points were located proximally to the bifurcation in the three cases with intraabdominal bleeding. The complications included three cases of bleeding, of whom one patient died and two needed surgery. Use of CECT and 3D RVIs to plan the puncture path for the TIPS procedure is safe, simple and effective for clinical use.

  12. Topography reconstruction of specular surfaces

    NASA Astrophysics Data System (ADS)

    Kammel, Soren; Horbach, Jan

    2005-01-01

    Specular surfaces are used in a wide variety of industrial and consumer products like varnished or chrome plated parts of car bodies, dies, molds or optical components. Shape deviations of these products usually reduce their quality regarding visual appearance and/or technical performance. One reliable method to inspect such surfaces is deflectometry. It can be employed to obtain highly accurate values representing the local curvature of the surfaces. In a deflectometric measuring system, a series of illumination patterns is reflected at the specular surface and is observed by a camera. The distortions of the patterns in the acquired images contain information about the shape of the surface. This information is suited for the detection and measurement of surface defects like bumps, dents and waviness with depths in the range of a few microns. However, without additional information about the distances between the camera and each observed surface point, a shape reconstruction is only possible in some special cases. Therefore, the reconstruction approach described in this paper uses data observed from at least two different camera positions. The data obtained is used separately to estimate the local surface curvature for each camera position. From the curvature values, the epipolar geometry for the different camera positions is recovered. Matching the curvature values along the epipolar lines yields an estimate of the 3d position of the corresponding surface points. With this additional information, the deflectometric gradient data can be integrated to represent the surface topography.

  13. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the calibration process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial lens distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
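
    The radial and decentering (tangential) distortion terms mentioned above follow the Brown-Conrady model used by OpenCV's calibration routines. The following minimal sketch (a hypothetical helper, not code from the paper) applies that model to a point in normalized image coordinates, with two radial coefficients k1, k2 and two tangential coefficients p1, p2:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply Brown-Conrady radial + tangential distortion to a normalized point."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2          # radial scaling
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # tangential terms
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

Calibration estimates k1, k2, p1, p2 (together with the intrinsics) by minimizing the reprojection error of the detected checkerboard corners under this model.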

  14. Dynamic Human Body Modeling Using a Single RGB Camera.

    PubMed

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  15. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

    Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.

  16. Resolution recovery for Compton camera using origin ensemble algorithm.

    PubMed

    Andreyev, A; Celler, A; Ozsahin, I; Sitek, A

    2016-08-01

    Compton cameras (CCs) use electronic collimation to reconstruct images of activity distribution. Although this approach can greatly improve imaging efficiency, due to the complex geometry of the CC principle, image reconstruction with standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data reconstructed without resolution recovery, and (c) blurred data reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using a phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events was considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions.

  17. Overview in two parts: Right view showing orchard path on ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Overview in two parts: Right view showing orchard path on left, eucalyptus windbreak bordering knoll on right. Camera facing 278° west. - Goerlitz House, 9893 Highland Avenue, Rancho Cucamonga, San Bernardino County, CA

  18. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASCs a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASCs for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After estimating the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASCs is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  19. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASCs a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since both land and underwater cameras are mandatory. In particular, underwater camera calibration can be an issue affecting the reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASCs for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After estimating the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error on the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASCs is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846
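
    The accuracy figure reported in this abstract (mean error on the reconstructed inter-marker distances of a rigid bar) can be sketched as a simple metric. The helper below is a hypothetical illustration, assuming the reconstructed 3D coordinates of both markers are available for each bar pose:

```python
import math

def mean_distance_error(pairs, known):
    """Mean absolute error between reconstructed marker distances and the known bar length.

    pairs: list of (marker_a, marker_b) 3D coordinate tuples, one pair per pose.
    known: true inter-marker distance (same units as the coordinates).
    """
    errs = [abs(math.dist(a, b) - known) for a, b in pairs]
    return sum(errs) / len(errs)
```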

  20. Slant path range gated imaging of static and moving targets

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Elmqvist, Magnus; Karlsson, Kjell; Gustafsson, Ove; Chevalier, Tomas

    2012-06-01

    This paper reports experiments and analysis of slant-path imaging using 1.5 μm and 0.8 μm gated imaging. The investigation is a follow-up to the measurements reported last year at the laser radar conference at SPIE Orlando. The sensor, a SWIR camera, collected both passive and active images along a 2 km path over an airfield. The sensor was elevated by a lift in steps from 1.6 to 13.5 meters. Targets were resolution charts and also human targets. The human target was holding various items and performing certain tasks, some of high relevance in defence and security. One of the main purposes of this investigation was to compare the recognition of these human targets and their activities with the resolution information obtained from conventional resolution charts. The data collection of human targets was also made from our rooftop laboratory at about 13 m height above ground. The turbulence was measured along the path with anemometers and scintillometers. We also included the Obzerv camera working at 0.8 μm in some tests. The paper presents images for both passive and active modes obtained at different elevations and discusses the results from both technical and system perspectives.

  1. Volumetric PIV with a Plenoptic Camera

    NASA Astrophysics Data System (ADS)

    Thurow, Brian; Fahringer, Tim

    2012-11-01

    Plenoptic cameras have received attention recently due to their ability to computationally refocus an image after it has been acquired. We describe the development of a robust, economical and easy-to-use volumetric PIV technique using a unique plenoptic camera built in our laboratory. The tomographic MART algorithm is used to reconstruct pairs of 3D particle volumes, with velocity determined using conventional cross-correlation techniques. 3D/3C velocity measurements (volumetric dimensions of 2.8″ × 1.9″ × 1.6″) of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. This work has been supported by the Air Force Office of Scientific Research (Grant #FA9550-100100576).

  2. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced.
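
    The Kalman-like update described in this abstract can be illustrated with scalar inverse-variance fusion: a new virtual-depth observation is merged with the current estimate so that low-variance (confident) estimates dominate, and the posterior variance shrinks. The function name and interface are assumptions made for this sketch:

```python
def fuse(d, v, d_new, v_new):
    """Fuse a current depth estimate (d, variance v) with a new one (d_new, v_new).

    Inverse-variance weighting: the estimate with lower variance contributes more,
    and the fused variance is always smaller than either input variance.
    """
    v_post = (v * v_new) / (v + v_new)
    d_post = (d * v_new + d_new * v) / (v + v_new)
    return d_post, v_post
```

For example, fusing two equally uncertain estimates averages them and halves the variance.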

  3. Oblique along path toward structures at rear of parcel. Original ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Oblique along path toward structures at rear of parcel. Original skinny mosaic path along edge of structures was altered (delineation can be seen in the concrete); the path was widened with a newer mosaic to make access to the site safer. Structures (from right): edge of Round House (with "Spring Garden"), Pencil House, Shell House, School House; wood lattice is attached to chain-link fence along north (rear) property line. These structures were all damaged by the 1994 Northridge earthquake. Camera facing northeast. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA

  4. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  5. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  6. Camera calibration correction in shape from inconsistent silhouette

    USDA-ARS?s Scientific Manuscript database

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  7. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  8. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has big potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software; one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world coordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results achieved so far.

  9. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3DTI volume. The performance of the NIC in laboratory and accelerator tests is presented.

  10. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a spatio-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for handling scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" was developed for Adobe After Effects CC2015 to show the processed videos.
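
    The optimal virtual camera path described in this abstract balances fidelity to the original (shaky) path against smoothness. As a toy sketch, not the paper's solver (which also handles inter-video motions and mesh warps), a 1D path can be smoothed by coordinate descent on a quadratic objective with a smoothness weight `lam`:

```python
def smooth_path(path, lam=2.0, iters=200):
    """Smooth a 1D camera path: fidelity to `path` plus penalty on neighbor differences."""
    p = list(path)
    for _ in range(iters):
        for t in range(1, len(p) - 1):
            # Closed-form minimizer of (p - path[t])^2 + lam*((p - p[t-1])^2 + (p - p[t+1])^2)
            p[t] = (path[t] + lam * (p[t - 1] + p[t + 1])) / (1 + 2 * lam)
    return p
```

Larger `lam` yields a smoother virtual path at the cost of deviating further from the original camera motion.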

  11. Dynamic Human Body Modeling Using a Single RGB Camera

    PubMed Central

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-01-01

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones. PMID:26999159

  12. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee a minimum of the 2D projection errors on the image plane, not a minimum of the 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is more physically meaningful. Simulated and practical data are given to demonstrate the accuracy of the proposed method.
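
    The back projection idea can be illustrated with a minimal pinhole sketch (hypothetical helpers, not the paper's model, which also includes distortion and extrinsics): an extracted image point is projected back into 3D at the known target depth and compared against the ideal board coordinate, giving the 3D residual that the calibration refinement minimizes.

```python
def back_project(u, v, f, cx, cy, depth):
    """Back-project image point (u, v) through a pinhole (f, cx, cy) to a plane at `depth`."""
    X = (u - cx) / f * depth
    Y = (v - cy) / f * depth
    return X, Y, depth

def residual3d(img_pt, world_pt, f, cx, cy):
    """3D distance between the back-projected image point and the ideal board point."""
    X, Y, Z = back_project(*img_pt, f, cx, cy, world_pt[2])
    return ((X - world_pt[0]) ** 2 + (Y - world_pt[1]) ** 2) ** 0.5
```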

  13. Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †

    PubMed Central

    Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi

    2016-01-01

During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches in an image. Results show that the proposed two-step method gives satisfactory results and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
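    A minimal sketch of CCA-based cross-spectrum regression, with random vectors standing in for paired thermal/visible patches. The whitening-plus-SVD construction and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_cca(X, Y):
    """Full-rank CCA via whitening + SVD; returns a thermal->visible map."""
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    n = len(X)
    Sx, Sy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
    def inv_sqrt(S):                      # symmetric inverse square root
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w**-0.5) @ V.T
    Wx, Wy = inv_sqrt(Sx), inv_sqrt(Sy)
    U, rho, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    # Regress through the correlated canonical pairs, then map back
    M = Wx @ U @ np.diag(rho) @ Vt @ np.linalg.inv(Wy)
    return lambda Xn: (Xn - mx) @ M + my

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                    # stand-in thermal vectors
M0 = rng.normal(size=(8, 8))
Y = X @ M0 + 0.01 * rng.normal(size=(300, 8))    # correlated visible vectors
predict = fit_cca(X, Y)
Y_hat = predict(X[:5])
```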

  14. Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera

    NASA Astrophysics Data System (ADS)

    Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi

    2015-12-01

    This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.
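    The propagation step can be sketched with a standard angular-spectrum method; the field, wavelength, pixel pitch and distance below are illustrative assumptions, not the captured light-field data.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex amplitude by distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    # Transfer function; evanescent components clamped for simplicity
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

field = np.ones((64, 64), complex)       # toy converted complex amplitude
cgh = angular_spectrum(field, 532e-9, 8e-6, 0.05)  # propagate to CGH plane
```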

  15. Measuring SO2 ship emissions with an ultraviolet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2014-05-01

Over the last few years fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) from a single camera. Despite the ease of use and the ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method to monitor ship emissions for regulatory purposes. A dual-camera system or a single, dual-filter camera is required in order to properly correct for the effects of particulates in ship plumes.
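    The retrieval outlined above reduces to an emission rate Q = (plume speed) x (line integral of column density across the plume). The numbers below are illustrative, chosen only to land in the quoted ship-emission range, and are not measured values.

```python
import numpy as np

# Hypothetical retrieved SO2 column mass densities along a transect (kg/m^2)
x = np.linspace(0.0, 200.0, 201)                  # transect across the plume, m
column = 2e-4 * np.exp(-((x - 100.0) / 30.0)**2)  # toy Gaussian plume profile
plume_speed = 5.0   # m/s, from camera-derived plume motion or wind data

# Trapezoidal line integral of column density across the plume (kg/m)
line_integral = np.sum(0.5 * (column[1:] + column[:-1]) * np.diff(x))
Q = plume_speed * line_integral                   # emission rate, kg/s
```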

  16. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) We present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

  17. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

    PubMed

    Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir

    2016-06-01

This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape on the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
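    The ICP refinement stage can be sketched as a generic point-to-point ICP with a Kabsch/SVD solver on toy clouds. The paper's pipeline also uses FPFH descriptors for the initial alignment, which is omitted here; the clouds and transform below are illustrative.

```python
import numpy as np

def best_rigid(A, B):
    """Kabsch/SVD: rigid transform (R, t) that maps point set A onto B."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    """Point-to-point ICP with brute-force nearest-neighbour matching."""
    cur = src.copy()
    for _ in range(iters):
        nn = np.linalg.norm(cur[:, None] - dst[None], axis=2).argmin(1)
        R, t = best_rigid(cur, dst[nn])
        cur = cur @ R.T + t
    return best_rigid(src, cur)       # total transform src -> dst

# Toy clouds standing in for the CBCT surface and the RGBD surface
rng = np.random.default_rng(3)
src = rng.random((40, 3))
th = 0.1
Rz = np.array([[np.cos(th), -np.sin(th), 0.],
               [np.sin(th),  np.cos(th), 0.],
               [0., 0., 1.]])
dst = src @ Rz.T + np.array([0.05, 0.02, -0.03])
R, t = icp(src, dst)
```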

  18. Visual environment recognition for robot path planning using template matched filters

    NASA Astrophysics Data System (ADS)

    Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto

    2017-08-01

A visual approach in environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path among multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
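    A minimal artificial-potential-field sketch: a generic attractive/repulsive formulation with assumed gains and a toy scene. The paper's pseudo-bacterial evolutionary tuning of the potential functions is not reproduced here.

```python
import numpy as np

goal = np.array([9.0, 9.0])
obstacles = [np.array([5.0, 4.0])]          # illustrative scene
k_att, k_rep, d0 = 1.0, 5.0, 2.0            # assumed gains, not the paper's

def force(q):
    f = k_att * (goal - q)                  # attractive gradient
    for o in obstacles:
        d = np.linalg.norm(q - o)
        if d < d0:                          # repulsion inside influence radius
            f += k_rep * (1.0/d - 1.0/d0) / d**3 * (q - o)
    return f

q = np.array([0.0, 0.0])
path = [q.copy()]
for _ in range(1000):                       # simple gradient following
    q = q + 0.05 * force(q)
    path.append(q.copy())
    if np.linalg.norm(goal - q) < 0.05:
        break
```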

  19. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
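    The Gauss-Newton linearize-and-solve loop mentioned above can be illustrated on a toy least-squares residual (an exponential fit, not the paper's surface-tracking objective):

```python
import numpy as np

def gauss_newton(r, J, x0, iters=20):
    """Minimize ||r(x)||^2 by repeated linearization (normal equations)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        Jx = J(x)
        x = x - np.linalg.solve(Jx.T @ Jx, Jx.T @ r(x))
    return x

# Toy residual: fit y = a * exp(b * t) to noiseless samples
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)], 1)
x = gauss_newton(r, J, [1.0, 0.0])
```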

  20. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of moving objects. We developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular system has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and produces accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including calibration, capture, matching and reconstruction. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, which are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.

  1. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

Technologies for capturing panoramic (360 degrees) three-dimensional information in a real environment have many applications in fields such as virtual and augmented reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama whose spatial resolution equals that of the original images acquired using the regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements using the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained using the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  2. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis

    PubMed Central

    Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang

    2016-01-01

In recent decades, terrain modelling and reconstruction techniques have attracted increasing research interest for precise short- and long-distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625

  3. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  4. Combined Intra- and Extra-articular Reconstruction of the Anterior Cruciate Ligament: The Reconstruction of the Knee Anterolateral Ligament

    PubMed Central

    Helito, Camilo Partezani; Bonadio, Marcelo Batista; Gobbi, Riccardo Gomes; da Mota e Albuquerque, Roberto Freire; Pécora, José Ricardo; Camanho, Gilberto Luis; Demange, Marco Kawamura

    2015-01-01

    We present a new technique for the combined intra- and extra-articular reconstruction of the anterior cruciate ligament. Intra-articular reconstruction is performed in an outside-in manner according to the precepts of the anatomic femoral tunnel technique. Extra-articular reconstruction is performed with the gracilis tendon while respecting the anatomic parameters of the origin and insertion points and the path described for the knee anterolateral ligament. PMID:26258037

  5. A novel SPECT camera for molecular imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir

    2011-10-01

    The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.

  6. Open-path TDL-Spectrometry for a Tomographic Reconstruction of 2D H2O-Concentration Fields in the Soil-Air-Boundary-Layer of Permafrost

    NASA Astrophysics Data System (ADS)

    Seidel, Anne; Wagner, Steven; Dreizler, Andreas; Ebert, Volker

    2013-04-01

.9 ppmv·m/√Hz. For absorption path lengths of up to 2 m and a time resolution of 0.2 s, we attained detection limits of 1 ppmv. Furthermore, we realized a wide dynamic range covering concentrations between 200 ppmv and 12300 ppmv. The static spectrometer will now be extended to a spatially scanning TDL sensor using rapidly rotating polygon mirrors. In combination with tomographic reconstruction methods, spatially resolved 2D fields will be measured and retrieved. The aim is to capture concentration fields with at least 1 m² spatial coverage and concentration detection at faster than a 1 Hz rate. We simulated various measurements from typical concentration distributions ("phantoms") and used Algebraic Reconstruction Techniques (ART) to compute the corresponding 2D fields. The reconstructions look very promising and demonstrate the potential of the measurement method. In the presentation we will describe and discuss the optical setup of the stationary instrument and explain the concept of extending this instrument to a spatially scanning tomographic TDL instrument for soil studies. Further, we present first results evaluating the capabilities of the selected ART reconstruction on tomographic phantoms. [1] E. Schuur, J. G. Vogel, K. G. Crummer, H. Lee, J. O. Sickman, and T. E. Osterkamp, "The effect of permafrost thaw on old carbon release and net carbon exchange from tundra," Nature, vol. 459, no. 7246, pp. 556-559, May 2009. [2] A. Seidel, S. Wagner, and V. Ebert, "TDLAS-based open-path laser hygrometer using simple reflective foils as scattering targets," Applied Physics B, vol. 109, no. 3, pp. 497-504, Oct. 2012.
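    A minimal ART (Kaczmarz) sketch on a toy phantom, with random beam weights standing in for the real path geometry; the grid size and weights are illustrative assumptions.

```python
import numpy as np

def art(A, b, sweeps=2000, relax=1.0):
    """Kaczmarz-style ART: cyclically project the estimate onto each
    ray's hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x

# Toy 4x4 concentration "phantom" probed by 24 beams with random weights
rng = np.random.default_rng(1)
phantom = rng.random(16)
A = rng.random((24, 16))      # row i: path weights of beam i (illustrative)
b = A @ phantom               # simulated line-of-sight integrals
x = art(A, b)
```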

  7. Robot Tracer with Visual Camera

    NASA Astrophysics Data System (ADS)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

A robot is a versatile tool that can replace human labor, and it can be reprogrammed according to user needs. A wireless network for remote monitoring can be used to build a robot whose movement can be observed against a blueprint, so that the path chosen by the robot can be tracked. This data is sent over the wireless network. For vision, the robot uses a high-resolution camera that helps the operator control the robot and observe the surrounding circumstances.

  8. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

Data acquisition and processing for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS) and thus on any type of computer in every nuclear medicine department. Based on standard parameters of quality control, this program includes: 1) for gamma cameras, a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems, three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (thanks to the Point Spread Function, PSF, acquisition) allows the Modulation Transfer Function (MTF) to be computed for both camera modalities. All the control functions are included in a toolbox which is a free ImageJ plugin and will soon be downloadable from the Internet. Besides, this program offers the possibility to save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control program. Finally, this toolkit is an easy and robust tool to perform quality control of gamma cameras and PET cameras based on standard computation parameters; it is free and runs on any OS.
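    The PSF-to-MTF computation mentioned above is just a normalized Fourier transform of the measured point spread function. A 1D sketch with an assumed Gaussian PSF (pixel size and width are illustrative):

```python
import numpy as np

# Line PSF sampled at 0.5 mm pitch (Gaussian stand-in, sigma = 2 mm)
px = 0.5
xs = np.arange(-32, 32) * px
psf = np.exp(-xs**2 / (2 * 2.0**2))

mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]                             # normalise to 1 at zero frequency
freqs = np.fft.rfftfreq(len(psf), d=px)   # spatial frequency, cycles/mm
```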

  9. Time-of-Flight Microwave Camera

    NASA Astrophysics Data System (ADS)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
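    Two back-of-envelope numbers implied by the stated specifications: the standard FMCW range-resolution relation ΔR = c/2B applied to the 4 GHz X-band sweep, and the one-way free-space path corresponding to the 200 ps time resolution (which reproduces the quoted 6 cm figure):

```python
# FMCW depth resolution from sweep bandwidth (back-of-envelope check)
c = 3.0e8                  # speed of light, m/s
B = 4.0e9                  # Hz, the 8-12 GHz X-band sweep
range_res = c / (2.0 * B)  # two-way range resolution, m (~3.75 cm)

t_res = 200e-12            # stated 200 ps time resolution
path_res = c * t_res       # one-way optical path, m (the quoted 6 cm)
```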

  10. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.

  11. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of better than 1 mm at 1 meter. The data can be recorded as a set of two images, and is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
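    The stated ranging precision is consistent with simple parallax geometry, Z = fB/d. The focal length and baseline below are assumed values (not the camera's specifications), chosen so that Z = 1 m; the error propagation then lands below the quoted 1 mm:

```python
# Range from spot parallax (pinhole stereo geometry): Z = f * B / d
f_px = 1500.0      # focal length in pixels (assumed)
B = 0.10           # projector-camera baseline, m (assumed)
d = 150.0          # measured spot parallax, px  ->  Z = 1 m

Z = f_px * B / d
# Range error implied by the stated ~0.1 px spot-location precision:
dd = 0.1
dZ = Z**2 * dd / (f_px * B)   # ~0.67 mm at 1 m
```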

  12. Autonomous Path Planning for On-Orbit Servicing Vehicles

    NASA Astrophysics Data System (ADS)

    McInnes, C. R.

    On-orbit servicing has long been considered as a means of reducing mission costs. While automated on-orbit servicing of satellites in LEO and GEO has yet to be realised, the International Space Station (ISS) will require servicing in a number of forms for re-supply, external visual inspection and maintenance. This paper will discuss a unified approach to path planning for such servicing vehicles using artificial potential field methods. In particular, path constrained rendezvous and docking of the ESA Automated Transfer Vehicle (ATV) at the ISS will be investigated as will mission and path planning tools for the Daimler-Chrysler Aerospace ISS Inspector free-flying camera. Future applications for free-flying microcameras and co-operative control between multiple free-flyers for on-orbit assembly will also be considered.

  13. Study and comparison of different sensitivity models for a two-plane Compton camera.

    PubMed

    Muñoz, Enrique; Barrio, John; Bernabéu, José; Etxebeste, Ane; Lacasta, Carlos; Llosá, Gabriela; Ros, Ana; Roser, Jorge; Oliver, Josep F

    2018-06-25

Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage over the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with 22Na sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage over the scatterer.
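    Sensitivity correction enters iterative reconstruction as the denominator of the MLEM update. A toy sketch with a generic near-diagonal system matrix (illustrative, not a Compton model):

```python
import numpy as np

def mlem(A, counts, iters=2000):
    """MLEM with the sensitivity image s_j = sum_i A_ij in the denominator."""
    sens = A.sum(0)                        # per-voxel detection sensitivity
    x = np.ones(A.shape[1])
    for _ in range(iters):
        proj = A @ x
        x *= (A.T @ (counts / np.maximum(proj, 1e-12))) / sens
    return x

# Near-diagonal toy system matrix (illustrative stand-in)
rng = np.random.default_rng(4)
A = np.eye(6) + 0.1 * rng.random((6, 6))
activity = np.array([1.0, 0.2, 3.0, 0.5, 2.0, 1.5])
counts = A @ activity                      # noiseless expected counts
x = mlem(A, counts)
```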

  14. Application of 3D reconstruction system in diabetic foot ulcer injury assessment

    NASA Astrophysics Data System (ADS)

    Li, Jun; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

To deal with the considerable deviation of the transparency tracing and digital planimetry methods used in current clinical diabetic foot ulcer injury assessment, this paper proposes a 3D reconstruction system that can be used to obtain a foot model with good-quality texture; injury assessment is then done by measuring the reconstructed model. The system uses the Intel RealSense SR300 depth camera, which is based on infrared structured light, as the input device; the required data from different views are collected by moving the camera around the scanned object. The geometry model is reconstructed by fusing the collected data, the mesh is then subdivided to increase the number of mesh vertices, and the color of each vertex is determined using a non-linear optimization; all colored vertices compose the surface texture of the reconstructed model. Experimental results indicate that the reconstructed model has millimeter-level geometric accuracy and texture with few artifacts.

  15. A fast algorithm for computer aided collimation gamma camera (CACAO)

    NASA Astrophysics Data System (ADS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Franck, D.; Pihet, P.; Ballongue, P.

    2000-08-01

The computer aided collimation gamma camera (CACAO) is aimed at breaking the resolution-sensitivity trade-off of the conventional parallel-hole collimator. It uses larger and longer holes, with an added linear movement during the acquisition sequence. A dedicated algorithm including shift and sum, deconvolution, parabolic filtering and rotation is described. Examples of reconstruction are given. This work shows that a simple and fast algorithm, based on a diagonally dominant approximation of the problem, can be derived. It gives a practical solution to the CACAO reconstruction problem.

  16. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, and the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings and (3) stereo reconstruction results for several free-form objects.
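    The classical triangulation step can be sketched with linear (DLT) triangulation from two calibrated views; the camera matrices, baseline, and 3D point below are illustrative values, not the rig's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two camera matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]       # null vector of A (homogeneous point)
    return X[:3] / X[3]

K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]
P2 = K @ np.c_[np.eye(3), np.array([-0.2, 0., 0.])]   # 0.2 m baseline
Xw = np.array([0.3, -0.1, 2.0])                       # ground-truth 3D point

proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_hat = triangulate(P1, P2, proj(P1, Xw), proj(P2, Xw))
```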

  17. Development of a Compton camera for prompt-gamma medical imaging

    NASA Astrophysics Data System (ADS)

    Aldawood, S.; Thirolf, P. G.; Miani, A.; Böhmer, M.; Dedes, G.; Gernhäuser, R.; Lang, C.; Liprandi, S.; Maier, L.; Marinšek, T.; Mayerhofer, M.; Schaart, D. R.; Lozano, I. Valencia; Parodi, K.

    2017-11-01

    A Compton camera-based detector system for photon detection from nuclear reactions induced by proton (or heavier ion) beams is under development at LMU Munich, targeting the online range verification of the particle beam in hadron therapy via prompt-gamma imaging. The detector is designed to be capable of reconstructing the photon source origin not only from the Compton scattering kinematics of the primary photon, but also by tracking the secondary Compton-scattered electrons, thus enabling γ-source reconstruction also from incompletely absorbed photon events. The Compton camera consists of a monolithic LaBr3:Ce scintillation crystal, read out by a multi-anode PMT acting as absorber, preceded by a stacked array of 6 double-sided silicon strip detectors as scatterers. The detector components have been characterized under both offline and online conditions. The LaBr3:Ce crystal exhibits excellent time and energy resolution. Using intense collimated 137Cs and 60Co sources, the monolithic scintillator was scanned on a fine 2D grid to generate a reference library of light amplitude distributions that allows for reconstructing the photon interaction position using a k-Nearest Neighbour (k-NN) algorithm. Systematic studies were performed to investigate the performance of the reconstruction algorithm, revealing an improvement of the spatial resolution with increasing photon energy, reaching an optimum value of 3.7(1) mm at 1.33 MeV, achieved with the Categorical Average Pattern (CAP) modification of the k-NN algorithm.
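    The library-based k-NN position reconstruction described above can be sketched as follows. This is a simplified stand-in (nearest patterns averaged, in the spirit of the CAP variant): the 8x8 grid, the toy light-spread model and all amplitudes are synthetic, not the detector's measured reference library.

```python
import numpy as np

# Sketch of k-NN interaction-position reconstruction in a monolithic
# scintillator: compare a measured light-amplitude distribution against a
# reference library recorded on a 2D grid, then average the positions of
# the k closest library entries. Library contents here are synthetic.

rng = np.random.default_rng(0)
grid = np.array([(x, y) for x in range(8) for y in range(8)], dtype=float)

def light_pattern(pos, noise=0.0):
    """Toy light distribution over an 8x8 anode grid for a source at pos."""
    d2 = ((grid - pos) ** 2).sum(axis=1)
    return 1.0 / (1.0 + d2) + noise * rng.standard_normal(len(grid))

library = np.stack([light_pattern(p) for p in grid])  # reference patterns

def knn_position(measured, k=4):
    dist = np.linalg.norm(library - measured, axis=1)  # pattern distance
    nearest = np.argsort(dist)[:k]
    return grid[nearest].mean(axis=0)                  # average position

est = knn_position(light_pattern(np.array([3.0, 4.0]), noise=0.01))
print(est)  # close to (3, 4)
```

    In the real detector the library is measured with collimated sources, and the CAP modification averages category-wise patterns rather than raw neighbours.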

  18. Simultaneous Water Vapor and Dry Air Optical Path Length Measurements and Compensation with the Large Binocular Telescope Interferometer

    NASA Technical Reports Server (NTRS)

    Defrere, D.; Hinz, P.; Downey, E.; Boehm, M.; Danchi, W. C.; Durney, O.; Ertel, S.; Hill, J. M.; Hoffmann, W. F.; Mennesson, B.; hide

    2016-01-01

    The Large Binocular Telescope Interferometer uses a near-infrared camera to measure the optical path length variations between the two AO-corrected apertures and provide high-angular-resolution observations for all its science channels (1.5-13 microns). There is, however, a wavelength-dependent component to the atmospheric turbulence, which can introduce optical path length errors when observing at a wavelength different from that of the fringe-sensing camera. Water vapor in particular is highly dispersive, and its effect must be taken into account for high-precision infrared interferometric observations, as described previously for VLTI/MIDI or the Keck Interferometer Nuller. In this paper, we describe the new sensing approach that has been developed at the LBT to measure and monitor the optical path length fluctuations due to dry air and water vapor separately. After reviewing the current performance of the system for dry air seeing compensation, we present simultaneous H-, K-, and N-band observations that illustrate the feasibility of our feed-forward approach to stabilize the path length fluctuations seen by the LBTI nuller.

  19. 3-D Flow Visualization with a Light-field Camera

    NASA Astrophysics Data System (ADS)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. [Figure captions: Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after it has been acquired. Instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
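    The plenoptic principle behind computational refocusing can be illustrated with a 1D toy light field: summing sub-aperture views with a view-dependent shift brings a chosen depth into focus. This is a didactic sketch of shift-and-sum refocusing only, not the MART-based particle reconstruction used in the paper; all sizes and shifts are synthetic.

```python
import numpy as np

# Toy 4D->2D light field reduced to (view u, pixel x). A point at a given
# depth appears shifted by a depth-dependent amount (parallax) in each view.
n_views, width = 5, 32
depth_slope = 2                    # per-view pixel shift of the target plane
scene = np.zeros(width)
scene[16] = 1.0                    # a single bright point

# Each sub-aperture view u sees the point shifted by depth_slope * u.
views = np.stack([np.roll(scene, depth_slope * u) for u in range(-2, 3)])

def refocus(lf, slope):
    """Shift each view back by slope*u and average (synthetic refocus)."""
    return np.mean(
        [np.roll(lf[i], -slope * u) for i, u in enumerate(range(-2, 3))],
        axis=0)

sharp = refocus(views, depth_slope)  # point re-aligns across views
blurred = refocus(views, 0)          # wrong focal plane: energy spread out
print(sharp.max(), blurred.max())    # 1.0 0.2
```

    Refocusing at the correct slope restores the full peak; at the wrong slope the point is smeared over five positions.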

  20. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained by the tomo-PIV technique. The proposed methodology can lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained with the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
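    The MART family of algorithms referenced above share one multiplicative update rule. The sketch below runs plain MART on a tiny synthetic system W f = p (a 2x2 "volume" constrained by row, column and one diagonal sum); real tomo-PIV systems are vastly larger and use sparse weight matrices, so this is an illustration of the update only.

```python
import numpy as np

# Minimal MART (multiplicative ART) for a tiny tomographic system W f = p,
# where W holds voxel weights along each line of sight. Geometry and data
# are synthetic; the unique positive solution is f_true.
W = np.array([[1, 1, 0, 0],   # row sums of a 2x2 image,
              [0, 0, 1, 1],
              [1, 0, 1, 0],   # column sums,
              [0, 1, 0, 1],
              [1, 0, 0, 1]],  # and the main diagonal
             dtype=float)
f_true = np.array([1.0, 2.0, 3.0, 4.0])
p = W @ f_true                 # measured projections

f = np.ones(4)                 # MART requires a positive initial guess
mu = 1.0                       # relaxation parameter
for _ in range(500):
    for i in range(len(p)):
        ray = W[i] @ f
        if ray > 0:
            # multiplicative update: voxels on ray i rescaled toward p[i]
            f *= (p[i] / ray) ** (mu * W[i])

print(np.round(f, 2))
```

    BIMART and SMART modify how rays are grouped per update (block-wise or simultaneous), trading per-iteration cost against convergence behaviour, which is exactly the trade-off the paper's quality indicators are meant to arbitrate.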

  1. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotoku, J; Nakabayashi, S; Kumagai, S

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D model was reconstructed by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including the camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128).
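    The fundamental-matrix step above can be sketched with the normalized eight-point algorithm; in the full pipeline RANSAC would wrap this estimator in a loop over random 8-point samples, keeping the F with the most inliers. The cameras and points below are synthetic, and this is a textbook implementation, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def project(P, X):
    """Project 3D points X (N,3) through a 3x4 camera matrix P to pixels."""
    x = P @ np.c_[X, np.ones(len(X))].T
    return (x[:2] / x[2]).T

def normalize(pts):
    """Shift/scale points: centroid at 0, mean distance sqrt(2) (Hartley)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return (pts - c) * s, T

def eight_point(x1, x2):
    (n1, T1), (n2, T2) = normalize(x1), normalize(x2)
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1]
                  for (u1, v1), (u2, v2) in zip(n1, n2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # least-squares null vector
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt       # enforce rank 2
    return T2.T @ F @ T1                        # undo the normalization

# Two synthetic cameras viewing 20 random 3D points
K = np.diag([800.0, 800.0, 1.0]); K[:2, 2] = 320.0
P1 = K @ np.c_[np.eye(3), np.zeros(3)]
P2 = K @ np.c_[np.eye(3), np.array([-1.0, 0, 0])]   # unit baseline along x
X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(20, 3))
x1, x2 = project(P1, X), project(P2, X)

F = eight_point(x1, x2)
F /= np.linalg.norm(F)          # remove the arbitrary scale
res = [abs(np.r_[b, 1] @ F @ np.r_[a, 1]) for a, b in zip(x1, x2)]
print(max(res))  # essentially zero for noise-free correspondences
```

    The epipolar constraint x2ᵀ F x1 = 0 is what RANSAC scores per correspondence to separate inliers from mismatches.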

  2. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high-accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.

  3. Validation of luminescent source reconstruction using spectrally resolved bioluminescence images

    NASA Astrophysics Data System (ADS)

    Virostko, John M.; Powers, Alvin C.; Jansen, E. D.

    2008-02-01

    This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in the reconstruction of light source depth and intensity. Constant-intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light, there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant-intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue, with accuracy influenced by the choice of optical properties used in the reconstruction.

  4. Cloud cameras at the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Winnick, Michael G.

    2010-06-01

    This thesis presents the results of measurements made by infrared cloud cameras installed at the Pierre Auger Observatory in Argentina. These cameras were used to record cloud conditions during operation of the observatory's fluorescence detectors. As cloud may affect the measurement of fluorescence from cosmic ray extensive air showers, the cloud cameras provide a record of which measurements have been interfered with by cloud. Several image processing algorithms were developed, along with a methodology for the detection of cloud within infrared images taken by the cloud cameras. A graphical user interface (GUI) was developed to expedite this, as a large number of images needed to be checked for cloud. A cross-check between images recorded by three of the observatory's cloud cameras is presented, along with a comparison with independent cloud measurements made by LIDAR. Despite the cloud cameras and LIDAR observing different areas of the sky, a good agreement is observed in the measured cloud fraction between the two instruments, particularly on very clear and overcast nights. Cloud information recorded by the cloud cameras, together with cloud height information measured by the LIDAR, was used to identify those extensive air showers that were obscured by cloud. These events were used to study the effectiveness of standard quality cuts at removing cloud-afflicted events. Of all the standard quality cuts studied in this thesis, the LIDAR cloud-fraction cut was the most effective at preferentially removing cloud-obscured events. A 'cloudy pixel' veto is also presented, whereby cloud-obscured measurements are excluded during the standard hybrid analysis and new extensive air shower reconstruction parameters are determined. The application of such a veto would provide a slight increase in the number of events available for higher-level analysis.

  5. Reconstruction dynamics of recorded holograms in photochromic glass.

    PubMed

    Mihailescu, Mona; Pavel, Eugen; Nicolae, Vasile B

    2011-06-20

    We have investigated the dynamics of the record-erase process of holograms in photochromic glass using continuous-wave Nd:YVO₄ laser radiation (λ=532 nm). A bidimensional microgrid pattern was formed and visualized in photochromic glass, and the decay of its diffraction efficiency versus time (during the reconstruction step) gave us information (D, Δn) about the diffusion process inside the material. The recording and reconstruction processes were carried out in an off-axis setup, and the images of the reconstructed object were recorded by a CCD camera. Measurements on reconstructed object images, using holograms recorded at different incident laser powers, revealed a two-stage process in the silver atom kinetics.

  6. A little-known 3-lens Catadioptric Camera by Bernhard Schmidt

    NASA Astrophysics Data System (ADS)

    Busch, Wolfgang; Ceragioli, Roger C.; Stephani, Walter

    2013-07-01

    The authors investigate a prototype 3-lens f/1 catadioptric camera built in 1934 by the famous optician Bernhard Schmidt at the Hamburg-Bergedorf Observatory in Germany, where Schmidt worked until his death in 1935. The prototype is in the observatory's collection of Schmidt artifacts, but its nature was not understood before the authors' recent examination. It is an astronomical camera of a form known as 'Buchroeder-Houghton', consisting of a spherical mirror and a 3-element afocal corrector lens placed at the mirror's center of curvature. The design is named for R.A. Buchroeder and J.L. Houghton, who independently published this and related forms of wide-field spherical-lens cameras after 1942. Schmidt died before he could publish his own design. The authors disassembled the prototype and measured its optical parameters. These are presented together with a transmission test of the corrector lens. The authors also consider the theoretical performance of the design as built, the theory of Houghton cameras, Schmidt's possible path to his invention, and the place of the prototype in his scientific output.

  7. Tropospheric wet refractivity tomography using multiplicative algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Xiaoying, Wang; Ziqiang, Dai; Enhong, Zhang; Fuyang, K. E.; Yunchang, Cao; Lianchun, Song

    2014-01-01

    Algebraic reconstruction techniques (ART) have been successfully used to reconstruct the total electron content (TEC) of the ionosphere and, in recent years, have tentatively been applied to tropospheric wet refractivity and water vapor tomography with ground-based GNSS technology. Previous research on ART in tropospheric water vapor tomography focused on the convergence and relaxation parameters of the various algebraic reconstruction techniques and rarely discussed the impact of Gaussian constraints and the initial field on the iteration results. The existing accuracy-evaluation parameters, calculated from slant wet delay, can only evaluate the resulting precision of the voxels penetrated by slant paths and cannot evaluate that of the voxels not penetrated by any slant path. This paper proposes two new statistical parameters, Bias and RMS, calculated from the wet refractivity of all voxels, to remedy the deficiencies of the existing evaluation parameters, and then discusses the effect of the Gaussian constraints and the initial field on convergence and tomography results when the multiplicative algebraic reconstruction technique (MART) is used to reconstruct the 4D tropospheric wet refractivity field in a simulation study.
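    The two voxel-domain statistics proposed above can be computed directly over every voxel of the field, penetrated by slant paths or not. The sketch below assumes the natural definitions (Bias = mean signed error, RMS = root-mean-square error over all voxels); the field values are synthetic.

```python
import numpy as np

def bias(recon, truth):
    """Mean signed error over every voxel of the field."""
    return float(np.mean(recon - truth))

def rms(recon, truth):
    """Root-mean-square error over every voxel of the field."""
    return float(np.sqrt(np.mean((recon - truth) ** 2)))

# Synthetic example: a uniform 40 N-unit wet-refractivity field, with a
# reconstruction that is +1 in half the voxels and -1 in the other half.
truth = np.full((4, 4, 4), 40.0)
recon = truth + np.array([1.0, -1.0]).repeat(32).reshape(4, 4, 4)

print(bias(recon, truth), rms(recon, truth))  # 0.0 1.0
```

    The example also shows why both parameters are needed: errors of opposite sign cancel in Bias but are fully captured by RMS.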

  8. Time-of-Flight Microwave Camera

    PubMed Central

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency-modulated continuous-wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598
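    The quoted numbers follow from two standard relations: FMCW range resolution is set by the swept bandwidth, ΔR = c/(2B), and the 200 ps time resolution corresponds to c·t of free-space optical path:

```python
# Back-of-the-envelope numbers behind the system above: FMCW range (depth)
# resolution from swept bandwidth, and free-space path from time resolution.

c = 299_792_458.0            # speed of light, m/s

B = 12e9 - 8e9               # X-band sweep: 4 GHz of bandwidth
range_resolution = c / (2 * B)
print(f"{range_resolution * 100:.1f} cm")   # ~3.7 cm two-way range resolution

t = 200e-12                  # 200 ps time resolution
path = c * t
print(f"{path * 100:.1f} cm")               # ~6.0 cm optical path, as quoted
```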

  9. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Typically, calibration of the position measurement system is obtained by registration of the images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration, so that a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture was larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reducing the error in this parameter.

  10. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivities, which result in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative single-cone method. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with the MCCC and the TCR algorithm.

  11. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 (256 x 256 pixels, 40-micrometer pixel side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  12. PathFinder: reconstruction and dynamic visualization of metabolic pathways.

    PubMed

    Goesmann, Alexander; Haubrock, Martin; Meyer, Folker; Kalinowski, Jörn; Giegerich, Robert

    2002-01-01

    Beyond methods for gene-wise annotation and analysis of sequenced genomes, new automated methods for functional analysis at a higher level are needed. The identification of realized metabolic pathways provides valuable information on gene expression and regulation. Detection of incomplete pathways helps to improve a constantly evolving genome annotation or to discover alternative biochemical pathways. To apply automated genome analysis at the level of metabolic pathways, new methods for the dynamic representation and visualization of pathways are needed. PathFinder is a tool for the dynamic visualization of metabolic pathways based on annotation data. Pathways are represented as directed acyclic graphs, and graph layout algorithms accomplish the dynamic drawing and visualization of the metabolic maps. A more detailed analysis of the input data at the level of biochemical pathways helps to identify genes and detect improper parts of annotations. As a Relational Database Management System (RDBMS)-based internet application, PathFinder reads a list of EC numbers or a given annotation in EMBL or GenBank format and dynamically generates pathway graphs.
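    The directed-acyclic-graph representation described above can be sketched in a few lines: nodes are compounds, edges are enzyme-catalyzed reactions labeled with EC numbers, a topological sort gives a valid drawing order, and a missing EC number flags a potentially incomplete pathway. This is a sketch in the spirit of PathFinder's model, not its actual data structures; the mini pathway fragment is illustrative only.

```python
from graphlib import TopologicalSorter

# edge: substrate -> product, annotated with the catalyzing enzyme's
# EC number; None marks an annotation gap (incomplete pathway).
edges = {
    ("glucose", "glucose-6-P"): "2.7.1.1",
    ("glucose-6-P", "fructose-6-P"): "5.3.1.9",
    ("fructose-6-P", "fructose-1,6-bisP"): None,   # gap to report
}

# Build a node -> predecessors mapping for the topological sorter.
graph = {}
for (src, dst), ec in edges.items():
    graph.setdefault(dst, set()).add(src)

order = list(TopologicalSorter(graph).static_order())
gaps = [pair for pair, ec in edges.items() if ec is None]
print(order)   # substrates listed before their products
print(gaps)    # [('fructose-6-P', 'fructose-1,6-bisP')]
```

    A layered drawing of the map follows directly from this order, since every reaction arrow then points "downstream".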

  13. Automatic Camera Orientation and Structure Recovery with Samantha

    NASA Astrophysics Data System (ADS)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

    SAMANTHA is a software system capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom-up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, by producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.

  14. 3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform

    NASA Astrophysics Data System (ADS)

    Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul

    2018-03-01

    This paper describes an approach to realizing three-dimensional surfaces of objects with cylinder-based shapes, along with the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms applied to the two-dimensional images captured by a digital camera at thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surface of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions are supported by an error analysis, in which the maximum percent error obtained is approximately 1.4% for the height and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
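    The tomographic idea behind this setup is that each turntable view at angle θ provides a parallel projection (a Radon-transform line integral) of the object's cross-section; stacking projections over many angles gives the sinogram from which a slice is reconstructed. The toy below computes such projections of a synthetic disc by nearest-neighbour coordinate rotation; it illustrates the forward Radon transform only, not the authors' full reconstruction pipeline.

```python
import numpy as np

N = 64
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
disc = (x**2 + y**2 <= 0.5**2).astype(float)   # circular cross-section

def radon_projection(img, theta_deg):
    """Line integrals of img along direction theta (nearest-neighbour)."""
    t = np.deg2rad(theta_deg)
    xr = np.cos(t) * x + np.sin(t) * y          # rotated sampling grid
    yr = -np.sin(t) * x + np.cos(t) * y
    i = np.clip(np.round((yr + 1) / 2 * (N - 1)).astype(int), 0, N - 1)
    j = np.clip(np.round((xr + 1) / 2 * (N - 1)).astype(int), 0, N - 1)
    return img[i, j].sum(axis=0) * (2.0 / N)    # integrate along columns

# One projection per 5 degrees, as a stand-in for the 36 camera views
sinogram = np.stack([radon_projection(disc, th) for th in range(0, 180, 5)])
# A disc is rotationally symmetric, so every projection is nearly identical:
print(sinogram.shape, float(np.ptp(sinogram.max(axis=1))))
```

    The peak of each projection approximates the disc's diameter (1.0 in these units), and the near-zero spread across angles reflects the rotational symmetry of the test object.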

  15. Using refraction in thick glass plates for optical path length modulation in low coherence interferometry.

    PubMed

    Kröger, Niklas; Schlobohm, Jochen; Pösch, Andreas; Reithmeier, Eduard

    2017-09-01

    In Michelson interferometer setups, the standard way to generate different optical path lengths between the measurement arm and the reference arm relies on expensive high-precision linear stages such as piezo actuators. We present an alternative approach based on the refraction of light at optical interfaces, using a cheap stepper motor with a high gearing ratio to control the rotation of a glass plate. The beam path is examined, and a relation between the angle of rotation and the change in optical path length is derived. As verification, an experimental setup is presented, and reconstruction results from a measurement standard are shown. The step height reconstructed with this setup lies within 1.25% of the expected value.
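    The angle-to-path-length relation exploited here is the textbook result for a plane-parallel plate: for thickness d and index n rotated by θ from normal incidence, the optical path (plate plus the air it displaces) changes by ΔOPL(θ) = d·(√(n² − sin²θ) − cos θ − (n − 1)). The plate parameters below (a 10 mm BK7-like plate) are illustrative, not the paper's actual component values.

```python
import math

def opl_change(d, n, theta_rad):
    """Optical path length change of a tilted plane-parallel glass plate,
    relative to normal incidence: d*(sqrt(n^2 - sin^2 t) - cos t - (n-1))."""
    s = math.sin(theta_rad)
    return d * (math.sqrt(n * n - s * s) - math.cos(theta_rad) - (n - 1.0))

d, n = 10e-3, 1.5168                  # 10 mm plate, n of BK7 at 633 nm
for deg in (0, 5, 10, 20):
    dl = opl_change(d, n, math.radians(deg))
    print(f"{deg:2d} deg -> {dl * 1e6:8.2f} um")
# 0 deg gives 0; the change grows roughly quadratically for small angles
```

    The quadratic small-angle behaviour is what makes a geared stepper motor adequate: near normal incidence a coarse angular step maps to a very fine path-length step.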

  16. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image by a factor of three horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.

  17. First experiences with ARNICA, the ARCETRI observatory imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.

    1994-03-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a common-user instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 (256 x 256 pixels, 40-micrometer pixel side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some preliminary considerations on photometric accuracy.

  18. Hyperspectral imaging using a color camera and its application for pathogen detection

    USDA-ARS?s Scientific Manuscript database

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  19. Aquatic Debris Detection Using Embedded Camera Sensors

    PubMed Central

    Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu

    2015-01-01

    Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is designed around compressive sensing theory to give full consideration to the unique challenges of aquatic environments, such as waves, swaying reflections, and a tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. In addition, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741
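The sparse recovery step of compressive sensing can be sketched with the standard ISTA algorithm, which solves the l1-regularized least-squares (lasso) problem given a measurement matrix A and a few linear measurements y. This is an illustrative sketch only; the paper's specific recovery algorithm and measurement design are not reproduced here.

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=2000):
    """Iterative shrinkage-thresholding (ISTA) for the lasso problem
    min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

With enough measurements relative to the sparsity of the signal, the recovered x matches the underlying sparse vector; in the node's setting, y would be the few transmitted measurements and x a sparse representation of the debris image.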

  20. Novel, full 3D scintillation dosimetry using a static plenoptic camera.

    PubMed

    Goulet, Mathieu; Rilling, Madison; Gingras, Luc; Beddar, Sam; Beaulieu, Luc; Archambault, Louis

    2014-08-01

    Patient-specific quality assurance (QA) of dynamic radiotherapy delivery would gain from being performed using a 3D dosimeter. However, 3D dosimeters, such as gels, have many disadvantages that limit their use for quality assurance, such as tedious read-out procedures and poor reproducibility. The purpose of this work is to develop and validate a novel type of high-resolution 3D dosimeter based on the real-time light acquisition of a plastic scintillator volume using a plenoptic camera. This dosimeter would allow for the QA of dynamic radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). A Raytrix R5 plenoptic camera was used to image a 10 × 10 × 10 cm(3) EJ-260 plastic scintillator embedded inside an acrylic phantom at a rate of one acquisition per second. The scintillator volume was irradiated with both an IMRT and a VMAT treatment plan on a Clinac iX linear accelerator. The 3D light distribution emitted by the scintillator volume was reconstructed at a 2 mm resolution in all dimensions by back-projecting the light collected by each pixel of the light-field camera using an iterative reconstruction algorithm. The latter was constrained by a beam's eye view projection of the incident dose acquired using the portal imager integrated with the linac and by physical considerations of the dose behavior as a function of depth in the phantom. The absolute dose difference between the reconstructed 3D dose and the expected dose calculated using the treatment planning software Pinnacle(3) was on average below 1.5% of the maximum dose for both integrated IMRT and VMAT deliveries, and below 3% for each individual IMRT incidence. Dose agreement between the reconstructed 3D dose and a radiochromic film acquisition in the same experimental phantom was on average within 2.1% and 1.2% of the maximum recorded dose for the IMRT and VMAT delivery, respectively. Using plenoptic camera technology, the authors were able to

  1. Novel, full 3D scintillation dosimetry using a static plenoptic camera

    PubMed Central

    Goulet, Mathieu; Rilling, Madison; Gingras, Luc; Beddar, Sam; Beaulieu, Luc; Archambault, Louis

    2014-01-01

    Purpose: Patient-specific quality assurance (QA) of dynamic radiotherapy delivery would gain from being performed using a 3D dosimeter. However, 3D dosimeters, such as gels, have many disadvantages that limit their use for quality assurance, such as tedious read-out procedures and poor reproducibility. The purpose of this work is to develop and validate a novel type of high-resolution 3D dosimeter based on the real-time light acquisition of a plastic scintillator volume using a plenoptic camera. This dosimeter would allow for the QA of dynamic radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). Methods: A Raytrix R5 plenoptic camera was used to image a 10 × 10 × 10 cm3 EJ-260 plastic scintillator embedded inside an acrylic phantom at a rate of one acquisition per second. The scintillator volume was irradiated with both an IMRT and a VMAT treatment plan on a Clinac iX linear accelerator. The 3D light distribution emitted by the scintillator volume was reconstructed at a 2 mm resolution in all dimensions by back-projecting the light collected by each pixel of the light-field camera using an iterative reconstruction algorithm. The latter was constrained by a beam's eye view projection of the incident dose acquired using the portal imager integrated with the linac and by physical considerations of the dose behavior as a function of depth in the phantom. Results: The absolute dose difference between the reconstructed 3D dose and the expected dose calculated using the treatment planning software Pinnacle3 was on average below 1.5% of the maximum dose for both integrated IMRT and VMAT deliveries, and below 3% for each individual IMRT incidence. Dose agreement between the reconstructed 3D dose and a radiochromic film acquisition in the same experimental phantom was on average within 2.1% and 1.2% of the maximum recorded dose for the IMRT and VMAT delivery, respectively. Conclusions: Using plenoptic camera

  2. A general Bayesian image reconstruction algorithm with entropy prior: Preliminary application to HST data

    NASA Astrophysics Data System (ADS)

    Nunez, Jorge; Llacer, Jorge

    1993-10-01

    This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to successfully reconstruct images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the originally intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.
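The Richardson-Lucy baseline mentioned above is easy to state: it is a multiplicative fixed-point iteration for maximum-likelihood deconvolution under Poisson noise, preserving positivity and total flux. A minimal sketch with circular boundary conditions (the entropy prior and cross-validation machinery of the paper is not reproduced):

```python
import numpy as np

def conv2_circ(x, psf):
    """Circular 2D convolution via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))

def richardson_lucy(image, psf, n_iter=100):
    """Richardson-Lucy iteration: multiplicative updates that keep the
    estimate nonnegative and conserve total flux for a normalized PSF."""
    psf_mirror = np.roll(np.flip(psf), 1, axis=(0, 1))   # adjoint kernel
    est = np.full_like(image, image.mean())              # flat initial guess
    for _ in range(n_iter):
        ratio = image / np.maximum(conv2_circ(est, psf), 1e-12)
        est *= conv2_circ(ratio, psf_mirror)
    return est
```

The PSF is assumed centered at the origin in the circular sense and normalized to unit sum; these are sketch assumptions, not constraints from the paper.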

  3. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.

    PubMed

    Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying

    2018-01-15

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
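The rigid-transformation baseline the authors compare against can be computed in closed form from corresponding 3D points (e.g. sphere centres seen by two cameras) via the Kabsch/Umeyama SVD construction. A minimal sketch:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= R @ P + t (Kabsch).
    P, Q: (3, N) arrays of corresponding 3D points, e.g. sphere centres
    observed in two camera frames."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (Q - cq) @ (P - cp).T                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflection
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t
```

Three or more non-collinear correspondences suffice; in practice one would run this inside a robust loop (e.g. RANSAC) before bundle-adjustment refinement.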

  4. Indoor calibration for stereoscopic camera STC: a new method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of planetary surfaces from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the surface of Mercury. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor setup has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other hand, it allows replicating different viewing angles for the considered targets. Neglecting the curvature of Mercury for the sake of simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  5. Indoor Calibration for Stereoscopic Camera STC, A New Method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2014-10-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of planetary surfaces from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the surface of Mercury. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This model required the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor setup has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other hand, it allows replicating different viewing angles for the considered targets. Neglecting the curvature of Mercury for the sake of simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  6. Radiometric calibration of wide-field camera system with an application in astronomy

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, unfortunately perform poorly on astronomical image data, mostly because of its nature (blur, noise, and long exposures). We therefore propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
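Once a CRF has been estimated (or the response linearized), the HDR radiance map is typically recovered by a weighted per-pixel average across exposures. A Debevec-style sketch assuming an already-linear response, with a hat weight that down-weights clipped and noisy pixels (the weighting choice is an illustrative assumption):

```python
import numpy as np

def merge_hdr(images, exposures):
    """Weighted HDR radiance merge assuming a linear camera response.
    images: float arrays with values in [0, 1]; exposures: shutter times.
    A hat weight (zero at 0 and 1) suppresses clipped and dark pixels."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for z, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight over pixel value
        num += w * z / t                  # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-12)
```

For a real camera, each pixel value z would first be linearized through the estimated inverse CRF before this merge.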

  7. A single pixel camera video ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.

    2017-02-01

    There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are prone to a variety of ocular conditions, like defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real time using a single pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device. At the same time, the intensity corresponding to the inner product of each pattern with the retinal image is measured with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px, at real-time video frame rates of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel-array-based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, leading to lower illumination of the eye and hence increased patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
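The measurement model of a single pixel camera can be sketched as follows: each projected pattern yields one detector reading equal to the pattern's inner product with the image, and with a complete orthogonal pattern set the image is recovered by an inverse transform. The sketch below uses idealized, noiseless ±1 Hadamard patterns; real systems often realize ±1 as pairs of binary patterns and may use fewer patterns with compressive recovery.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_reconstruct(scene):
    """Simulate a single pixel camera: one detector reading per projected
    pattern, then invert the (orthogonal) pattern basis."""
    n = scene.size
    H = hadamard(n)              # one +/-1 pattern per row
    y = H @ scene.ravel()        # one photodetector reading per pattern
    recon = (H.T @ y) / n        # H @ H.T = n * I for this construction
    return recon.reshape(scene.shape)
```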

  8. Evaluation of guidewire path reproducibility

    PubMed Central

    Schafer, Sebastian; Hoffmann, Kenneth R.; Noël, Peter B.; Ionita, Ciprian N.; Dmochowski, Jacek

    2008-01-01

    The number of minimally invasive vascular interventions is increasing. In these interventions, a variety of devices are directed to and placed at the site of intervention. The device used in almost all of these interventions is the guidewire, acting as a monorail for all devices which are delivered to the intervention site. However, even with the guidewire in place, clinicians still experience difficulties during the interventions. As a first step toward understanding these difficulties and facilitating guidewire and device guidance, we have investigated the reproducibility of the final paths of the guidewire in vessel phantom models with respect to different factors: user, material, and geometry. Three vessel phantoms (vessel diameters ∼4 mm) with tortuosity similar to that of the internal carotid artery were constructed from silicone tubing and encased in Sylgard elastomer. Several trained users repeatedly passed two guidewires of different flexibility through the phantoms under pulsatile flow conditions. After the guidewire had been placed, rotational c-arm image sequences were acquired (9 in. II mode, 0.185 mm pixel size), and the phantom and guidewire were reconstructed (512³ volume, 0.288 mm voxel size). The reconstructed volumes were aligned. The centerlines of the guidewire and the phantom vessel were then determined using region-growing techniques. Guidewire paths appear similar across users but not across materials. The average root mean square difference of the repeated placements was 0.17±0.02 mm (plastic-coated guidewire), 0.73±0.55 mm (steel guidewire) and 1.15±0.65 mm (steel versus plastic-coated). For a given guidewire, these results indicate that the guidewire path is relatively reproducible in shape and position. PMID:18561663
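The reported root-mean-square differences between repeated placements can be computed by resampling each reconstructed centerline uniformly in arc length and comparing corresponding samples. A sketch of that comparison (assuming the volumes are already aligned, as in the study; the exact correspondence scheme used by the authors is not specified here):

```python
import numpy as np

def resample(path, n):
    """Resample a 3D polyline to n points uniformly spaced in arc length."""
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    u = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(u, s, path[:, k])
                     for k in range(path.shape[1])], axis=1)

def rms_path_difference(path_a, path_b, n=200):
    """RMS distance between corresponding arc-length samples of two paths."""
    a, b = resample(path_a, n), resample(path_b, n)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```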

  9. Developing a Method to Test the Validity of 24 Hour Time Use Diaries Using Wearable Cameras: A Feasibility Pilot

    PubMed Central

    Kelly, Paul; Thomas, Emma; Doherty, Aiden; Harms, Teresa; Burke, Órlaith; Gershuny, Jonathan; Foster, Charlie

    2015-01-01

    Self-report time use diaries collect a continuous sequenced record of daily activities but the validity of the data they produce is uncertain. This study tests the feasibility of using wearable cameras to generate, through image prompted interview, reconstructed 'near-objective' data to assess their validity. 16 volunteers completed the Harmonised European Time Use Survey (HETUS) diary and used an Autographer wearable camera (recording images at approximately 15 second intervals) for the waking hours of the same 24-hour period. Participants then completed an interview in which visual images were used as prompts to reconstruct a record of activities for comparison with the diary record. 14 participants complied with the full collection protocol. We compared time use and number of discrete activities from the diary and camera records (using 10 classifications of activity). In terms of aggregate totals of daily time use we found no significant difference between the diary and camera data. In terms of number of discrete activities, participants reported a mean of 19.2 activities per day in the diaries, while image prompted interviews revealed 41.1 activities per day. The visualisations of the individual activity sequences reveal some potentially important differences between the two record types, which will be explored at the next project stage. This study demonstrates the feasibility of using wearable cameras to reconstruct time use through image prompted interview in order to test the concurrent validity of 24-hour activity time-use budgets. In future we need a suitably powered study to assess the validity and reliability of 24-hour time use diaries. PMID:26633807

  10. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  11. Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces

    NASA Astrophysics Data System (ADS)

    Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf

    2016-06-01

    The reconstruction of the 3D geometry of a scene from image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces using standard cameras and a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in the completeness of the 3D reconstruction is achieved.
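The acquisition idea reduces to two operations: averaging N shots per viewpoint (uncorrelated noise shrinks roughly as 1/sqrt(N)) and locally stretching contrast so weak texture spans more of the 8-bit range. A toy sketch, with a simple per-tile stretch standing in for the paper's adaptive amplification:

```python
import numpy as np

def multishot_enhance(shots, window=8):
    """Average repeated shots of a static viewpoint to suppress uncorrelated
    sensor noise, then stretch each tile's contrast to [0, 1] so weak
    texture uses more of the available numerical range."""
    mean = np.mean(shots, axis=0)         # noise std shrinks ~ 1/sqrt(N)
    out = np.empty_like(mean)
    h, w = mean.shape
    for i in range(0, h, window):
        for j in range(0, w, window):
            tile = mean[i:i + window, j:j + window]
            lo, hi = tile.min(), tile.max()
            out[i:i + window, j:j + window] = (tile - lo) / max(hi - lo, 1e-6)
    return mean, out
```

A real pipeline would cap the per-tile gain and blend tile boundaries; this sketch only illustrates why averaging before amplification is essential (amplifying a single noisy shot would amplify the noise too).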

  12. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its applications in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is far higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3d video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3d objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3d modelling and provides promising features for mobile mapping systems.

  13. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot

    PubMed Central

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2014-01-01

    Abstract. Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using the known trajectories of the robot arm, and the error of the reconstructed phantom is on average within 0.67 mm of the model design. PMID:26158071

  14. Strategic options towards an affordable high-performance infrared camera

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    Despite well-documented advantages, the promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost reductions necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel InGaAs uncooled system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption not only by demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.

  15. Simultaneous water vapor and dry air optical path length measurements and compensation with the large binocular telescope interferometer

    NASA Astrophysics Data System (ADS)

    Defrère, D.; Hinz, P.; Downey, E.; Böhm, M.; Danchi, W. C.; Durney, O.; Ertel, S.; Hill, J. M.; Hoffmann, W. F.; Mennesson, B.; Millan-Gabet, R.; Montoya, M.; Pott, J.-U.; Skemer, A.; Spalding, E.; Stone, J.; Vaz, A.

    2016-08-01

    The Large Binocular Telescope Interferometer uses a near-infrared camera to measure the optical path length variations between the two AO-corrected apertures and to provide high-angular-resolution observations for all its science channels (1.5-13 microns). There is, however, a wavelength-dependent component to the atmospheric turbulence, which can introduce optical path length errors when observing at a wavelength different from that of the fringe-sensing camera. Water vapor in particular is highly dispersive, and its effect must be taken into account for high-precision infrared interferometric observations, as described previously for VLTI/MIDI or the Keck Interferometer Nuller. In this paper, we describe the new sensing approach that has been developed at the LBT to measure and monitor the optical path length fluctuations due to dry air and water vapor separately. After reviewing the current performance of the system for dry air seeing compensation, we present simultaneous H-, K-, and N-band observations that illustrate the feasibility of our feedforward approach to stabilize the path length fluctuations seen by the LBTI nuller.

  16. Challenges in Flying Quadrotor Unmanned Aerial Vehicle for 3d Indoor Reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, J.; Grasso, N.; Zlatanova, S.; Braggaar, R. C.; Marx, D. B.

    2017-09-01

    Three-dimensional modelling plays a vital role in indoor 3D tracking, navigation, guidance and emergency evacuation. Reconstruction of indoor 3D models is still problematic, in part, because indoor spaces provide challenges less-documented than their outdoor counterparts. Challenges include obstacles curtailing image and point cloud capture, restricted accessibility and a wide array of indoor objects, each with unique semantics. Reconstruction of indoor environments can be achieved through a photogrammetric approach, e.g. by using image frames, aligned using recurring corresponding image points (CIP) to build coloured point clouds. Our experiments were conducted by flying a QUAV in three indoor environments and later reconstructing 3D models which were analysed under different conditions. Point clouds and meshes were created using Agisoft PhotoScan Professional. We concentrated on flight paths from two vantage points: 1) safety and security while flying indoors and 2) data collection needed for reconstruction of 3D models. We surmised that the main challenges in providing safe flight paths are related to the physical configuration of indoor environments, privacy issues, the presence of people and light conditions. We observed that the quality of recorded video used for 3D reconstruction has a high dependency on surface materials, wall textures and object types being reconstructed. Our results show that 3D indoor reconstruction predicated on video capture using a QUAV is indeed feasible, but close attention should be paid to flight paths and conditions ultimately influencing the quality of 3D models. Moreover, it should be decided in advance which objects need to be reconstructed, e.g. bare rooms or detailed furniture.

  17. System Architecture of the Dark Energy Survey Camera Readout Electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Theresa; /FERMILAB; Ballester, Otger

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4M telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2Kx4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2Kx2K CCDs for guiding, alignment and focus. This paper describes design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  18. Live event reconstruction in an optically read out GEM-based TPC

    NASA Astrophysics Data System (ADS)

    Brunbauer, F. M.; Galgóczi, G.; Gonzalez Diaz, D.; Oliveri, E.; Resnati, F.; Ropelewski, L.; Streli, C.; Thuiner, P.; van Stenis, M.

    2018-04-01

    Combining strong signal amplification made possible by Gaseous Electron Multipliers (GEMs) with the high spatial resolution provided by optical readout, highly performing radiation detectors can be realized. An optically read out GEM-based Time Projection Chamber (TPC) is presented. The device permits 3D track reconstruction by combining the 2D projections obtained with a CCD camera with timing information from a photomultiplier tube. Owing to the intuitive 2D representation of the tracks in the images and to automated control, data acquisition and event reconstruction algorithms, the optically read out TPC permits live display of reconstructed tracks in three dimensions. An Ar/CF4 (80/20%) gas mixture was used to maximize scintillation yield in the visible wavelength region matching the quantum efficiency of the camera. The device is integrated in a UHV-grade vessel allowing for precise control of the gas composition and purity. Long term studies in sealed mode operation revealed a minor decrease in the scintillation light intensity.
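
    The core of the reconstruction described above, combining the camera's 2D (x, y) projection with PMT timing converted to z through the electron drift velocity, can be sketched as follows. The drift velocity, units and data values are illustrative assumptions, not parameters from the paper.

```python
# Sketch of 3D track reconstruction in an optically read out TPC:
# the camera image supplies the (x, y) projection of each track point,
# while the PMT waveform supplies the light arrival time, which maps
# to z through the electron drift velocity.

DRIFT_VELOCITY_CM_PER_US = 5.0  # assumed drift velocity, illustrative only

def reconstruct_track(xy_points, arrival_times_us, t0_us=0.0):
    """Combine 2D image coordinates with PMT timing into 3D points.

    xy_points        : list of (x, y) in cm from the camera image
    arrival_times_us : per-point light arrival times in microseconds
    t0_us            : event start time from the trigger
    """
    track = []
    for (x, y), t in zip(xy_points, arrival_times_us):
        z = (t - t0_us) * DRIFT_VELOCITY_CM_PER_US  # drift distance
        track.append((x, y, z))
    return track

track3d = reconstruct_track([(0.0, 0.0), (1.0, 1.0)], [0.0, 2.0])
```

    A point seen 2 microseconds after the trigger is placed 10 cm along the drift direction under these assumed numbers.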

  19. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, stereo vision or 3D reconstruction with multiple cameras, as well as applications requiring pulsed illumination, require multiple cameras to be synchronized. In this work, the challenge of synchronizing multiple self-timed cameras over only a four-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera's control module. This enables the Slave cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
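
    The voltage-based frequency regulation described above can be sketched as a simple proportional control loop. The gain, voltage limits and the sign convention (a higher supply voltage speeds up the self-timed sensor, shortening its line period) are illustrative assumptions, not Awaiba specifications.

```python
# Minimal sketch of the frequency-regulation idea: the measured line
# period of a slave camera is compared with the target (the master's),
# and the sensor supply voltage is nudged proportionally to close the
# error, clamped to a safe operating range.

def adjust_supply(v_now, period_measured, period_target,
                  gain=0.001, v_min=1.6, v_max=2.0):
    """Return the next supply voltage for one self-timed camera."""
    error = period_measured - period_target   # positive => camera too slow
    v_next = v_now + gain * error             # raise voltage to speed up
    return max(v_min, min(v_max, v_next))     # clamp to safe range

v = adjust_supply(1.8, period_measured=105.0, period_target=100.0)
```

    Repeating this step each frame drives the measured line period toward the target, which is the frequency-locking half of the scheme; phase lock then comes from the Master-Slave frame interface.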

  20. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †

    PubMed Central

    Shen, Ju; Xu, Wanxin; Luo, Ying

    2018-01-01

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968
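
    The rigid-transformation baseline tested above is classically estimated from point correspondences with the SVD-based (Kabsch) alignment. The sketch below is that textbook method on synthetic data, not the authors' implementation.

```python
# Least-squares rigid alignment of corresponding 3D point sets via SVD:
# find R, t minimizing ||dst - (R @ src + t)|| over rotations R.
import numpy as np

def rigid_transform(src, dst):
    """Return rotation R and translation t with dst ≈ src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known transform from noiseless correspondences.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 3))
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([1.0, 2.0, 3.0])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
err = np.abs(R_est - R_true).max() + np.abs(t_est - t_true).max()
```

    With wide-baseline RGB-D data the correspondences would come from the spherical calibration object rather than being synthetic, and this rigid fit would serve as the initialization that the paper's alternative transformation models are compared against.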

  1. Performance verification of the FlashCam prototype camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Werner, F.; Bauer, C.; Bernhard, S.; Capasso, M.; Diebold, S.; Eisenkolb, F.; Eschbach, S.; Florin, D.; Föhr, C.; Funk, S.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Lahmann, R.; Marszalek, A.; Pfeifer, M.; Principe, G.; Pühlhofer, G.; Pürckhauer, S.; Rajda, P. J.; Reimer, O.; Santangelo, A.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Wolf, D.; Zietara, K.; CTA Consortium

    2017-12-01

    The Cherenkov Telescope Array (CTA) is a future gamma-ray observatory that is planned to significantly improve upon the sensitivity and precision of the current generation of Cherenkov telescopes. The observatory will consist of several dozens of telescopes with different sizes and equipped with different types of cameras. Of these, the FlashCam camera system is the first to implement a fully digital signal processing chain which allows for a traceable, configurable trigger scheme and flexible signal reconstruction. As of autumn 2016, a prototype FlashCam camera for the medium-sized telescopes of CTA nears completion. First results of the ongoing system tests demonstrate that the signal chain and the readout system surpass CTA requirements. The stability of the system is shown using long-term temperature cycling.

  2. Etalon Array Reconstructive Spectrometry

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2017-01-01

    Compact spectrometers are crucial in areas where size and weight may need to be minimized. These types of spectrometers often contain no moving parts, which makes for an instrument that can be highly durable. With the recent proliferation in low-cost and high-resolution cameras, camera-based spectrometry methods have the potential to make portable spectrometers small, ubiquitous, and cheap. Here, we demonstrate a novel method for compact spectrometry that uses an array of etalons to perform spectral encoding, and uses a reconstruction algorithm to recover the incident spectrum. This spectrometer has the unique capability for both high resolution and a large working bandwidth without sacrificing sensitivity, and we anticipate that its simplicity makes it an excellent candidate whenever a compact, robust, and flexible spectrometry solution is needed.

  3. Single-camera three-dimensional tracking of natural particulate and zooplankton

    NASA Astrophysics Data System (ADS)

    Troutman, Valerie A.; Dabiri, John O.

    2018-07-01

    We develop and characterize an image processing algorithm to adapt single-camera defocusing digital particle image velocimetry (DDPIV) for three-dimensional (3D) particle tracking velocimetry (PTV) of natural particulates, such as those present in the ocean. The conventional DDPIV technique is extended to facilitate tracking of non-uniform, non-spherical particles within a volume depth an order of magnitude larger than current single-camera applications (i.e. 10 cm  ×  10 cm  ×  24 cm depth) by a dynamic template matching method. This 2D cross-correlation method does not rely on precise determination of the centroid of the tracked objects. To accommodate the broad range of particle number densities found in natural marine environments, the performance of the measurement technique at higher particle densities has been improved by utilizing the time-history of tracked objects to inform 3D reconstruction. The developed processing algorithms were analyzed using synthetically generated images of flow induced by Hill’s spherical vortex, and the capabilities of the measurement technique were demonstrated empirically through volumetric reconstructions of the 3D trajectories of particles and highly non-spherical, 5 mm zooplankton.
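
    The 2D cross-correlation template matching mentioned above can be sketched with a zero-mean normalized cross-correlation search. This brute-force NumPy version is an illustration of the scoring idea, not the dynamic template matching of the paper; the image and template are synthetic.

```python
# Slide a template over the image and score every offset with the
# zero-mean normalized cross-correlation (NCC); return the best offset.
import numpy as np

def best_match(image, template):
    """Return (row, col) of the top-left corner with the highest NCC score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_rc = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

img = np.zeros((8, 8))
img[3:5, 4:6] = [[1.0, 0.5], [0.5, 1.0]]      # synthetic particle
tmpl = np.array([[1.0, 0.5], [0.5, 1.0]])
loc = best_match(img, tmpl)
```

    Because NCC is invariant to affine intensity changes, the same particle can be re-found under varying defocus brightness, which is what makes the approach robust without a precise centroid estimate.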

  4. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    NASA Astrophysics Data System (ADS)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents the approach of 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low cost of the automated imaging platform is created using a pair of G15 servo motor connected in series to an Arduino UNO as a main microcontroller. Two sets of sequential images were obtained using different projection angle of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. Other than that, an analysis based on the effect of different number of sequential images on the accuracy of 3D model reconstruction was also carried out with a fixed projection angle of the camera. The effecting elements in the 3D reconstruction are discussed and the overall result of the analysis is concluded according to the prototype of imaging platform.

  5. Full color natural light holographic camera.

    PubMed

    Kim, Myung K

    2013-04-22

    Full-color, three-dimensional images of objects under incoherent illumination are obtained by a digital holography technique. Based on self-interference of two beam-split copies of the object's optical field with differential curvatures, the apparatus consists of a beam-splitter, a few mirrors and lenses, a piezo-actuator, and a color camera. No lasers or other special illuminations are used for recording or reconstruction. Color holographic images of daylight-illuminated outdoor scenes and a halogen lamp-illuminated toy figure are obtained. From a recorded hologram, images can be calculated, or numerically focused, at any distances for viewing.

  6. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    PubMed Central

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

    The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was dug from the main lobe cutting image and used to change the relative region of the main lobe cutting image within a 100×100 pixel region. The position that had the largest correlation coefficient between the side lobe cutting image and the main lobe cutting image when a circle was dug was identified as the best matching point. Finally, the least squares method was used to fit the center of the side lobe schlieren small ball, and the error was less than 1 pixel. The experimental results show that this method enables the accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction methods based on manual splicing, this method improves the efficiency of focal-spot reconstruction and thus offers better experimental precision. PMID:28207758
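
    The final step above, least-squares fitting of the schlieren ball's center, can be sketched with the algebraic (Kasa) circle fit, which solves x² + y² = 2ax + 2by + c linearly for the center (a, b). The data points here are synthetic samples of a known circle, not measurement data.

```python
# Algebraic least-squares circle fit: rewrite (x-a)^2 + (y-b)^2 = r^2
# as x^2 + y^2 = 2*a*x + 2*b*y + c with c = r^2 - a^2 - b^2, then solve
# the linear system for (a, b, c).
import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle fit; returns center (a, b) and radius r."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Points on a circle of radius 2 centered at (1, -1).
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
a, b, r = fit_circle(1 + 2 * np.cos(theta), -1 + 2 * np.sin(theta))
```

    On noiseless points the fit is exact; on edge pixels extracted from the schlieren image the same linear solve gives the sub-pixel center estimate.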

  7. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halama, J.

    2016-06-15

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT

  8. A complete system for 3D reconstruction of roots for phenotypic analysis.

    PubMed

    Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J

    2015-01-01

    Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with detection of root tips in root images from an image sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points of the root boundary, together with the Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root-tip trajectories using a novel ellipse-fitting algorithm that weights the data points by their eccentricity. The conics projected from the circular trajectories have a complex-conjugate intersection, which is the image of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are then used to reconstruct a 3D voxel model of the roots. We show results of real 3D reconstructions of roots that are detailed and realistic enough for phenotypic analysis.

  9. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  10. Joint reconstruction of multiview compressed images.

    PubMed

    Thirumalai, Vijayaraghavan; Frossard, Pascal

    2013-05-01

    Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed images are decoded together in order to benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.

  11. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, S; Rao, A; Wendt, R

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
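
    The triangulation step above, recovering a 3D point from its projections under two camera poses, can be sketched with the standard linear (DLT) method. The projection matrices below are synthetic, not from the endoscope setup.

```python
# Linear (DLT) triangulation: stack the two projection constraints
# u*P[2] - P[0] and v*P[2] - P[1] per view and take the null vector.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Triangulate one 3D point seen by two cameras with 3x4 matrices P1, P2."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous solution of A @ X = 0
    return X[:3] / X[3]

# Identity camera and a camera translated one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_est = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

    With noiseless projections the null vector recovers the point exactly; with tracked image features the same solve gives the least-squares 3D estimate whose accuracy the abstract discusses.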

  12. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success by using an image sequence from a single moving camera to solve this problem. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
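
    The recursive range refinement above can be illustrated with a scalar Kalman filter: predict the range forward using the known camera motion, then correct it with each new image-derived measurement. This toy version, with made-up noise variances and measurements, shows the predict/update structure rather than the paper's full extended Kalman filter.

```python
# One-dimensional Kalman filter for a range estimate x with variance P.
# The known closure (range decrease from camera motion) drives the
# prediction; each new measurement z shrinks the uncertainty.

def kalman_step(x, P, closure, z, Q=0.01, R=1.0):
    """One predict/update cycle.

    closure : known decrease in range from camera motion (e.g. from INS)
    z       : new range measurement derived from feature tracking
    Q, R    : assumed process and measurement noise variances
    """
    x_pred = x - closure          # predict: camera moved toward feature
    P_pred = P + Q                # process noise inflates uncertainty
    K = P_pred / (P_pred + R)     # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    return x_new, (1 - K) * P_pred

x, P = 100.0, 25.0                        # initial guess, large variance
for z in [97.8, 95.1, 92.2, 89.0]:        # synthetic noisy measurements
    x, P = kalman_step(x, P, closure=2.5, z=z)
```

    After a few updates the variance P collapses from 25 to well under 1, which is the "range refinement" effect: each additional frame tightens the estimate.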

  13. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated: one with a photo gate sensor and one with a photo diode sensor. The system includes the camera assembly, a driver interface assembly, a frame grabber board with an integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320x240 with 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  14. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    PubMed

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts of using AR technology in monocular MIS surgical scenes have been mainly focused on the information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real time processing of the point clouds data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations.

  15. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera - a technique of compressive sensing, which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of speckle patterns, pattern fineness, and number of datapoints. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
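
    The single-pixel reconstruction idea above, each speckle pattern yielding one inner product between the pattern and the scene, can be sketched as inverting a linear system. Real compressive sensing exploits sparsity to use fewer patterns than pixels; this toy version uses plain least squares with more patterns than pixels, and all data are synthetic.

```python
# Single-pixel imaging as a linear inverse problem: measurements
# y = A @ scene, where each row of A is one (random) speckle pattern.
# With enough well-conditioned patterns, least squares recovers the scene.
import numpy as np

rng = np.random.default_rng(1)
n_pix = 16                                  # 4x4 toy "scene", flattened
scene = np.zeros(n_pix)
scene[5] = 1.0                              # one bright emitter

patterns = rng.random((2 * n_pix, n_pix))   # random speckle masks (rows)
measurements = patterns @ scene             # single-pixel detector readings

recovered, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
```

    Repeating this solve independently for each wavelength channel of the spectrometer-as-detector yields the hyperspectral image stack described in the abstract.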

  16. Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.

    2006-01-01

    The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method of reconstruction of a quadratic curve in 3D space as an intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of the curve in 2D space are found. From these, the 3D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The results of the described reconstruction methodology are verified through simulation studies. This reconstruction methodology is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
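
    The 2D stage above, least-squares fitting of a conic to projected curve points, can be sketched by solving for the coefficients of a·x² + b·xy + c·y² + d·x + e·y + f = 0 as a null-space problem. The ellipse data below are synthetic, and this generic fit stands in for whichever curve-fitting variant the paper uses.

```python
# Least-squares conic fit: each point contributes one row of the design
# matrix D; the conic coefficients are the right singular vector of D
# associated with the smallest singular value.
import numpy as np

def fit_conic(xs, ys):
    """Return conic coefficients (a, b, c, d, e, f), unit-norm scaled."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]               # null vector (smallest singular value)

# Points on the ellipse x^2/4 + y^2 = 1, i.e. x^2 + 4*y^2 - 4 = 0.
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
coef = fit_conic(2 * np.cos(t), np.sin(t))
a, b, c, d, e, f = coef / coef[0]      # normalize so the x^2 term is 1
```

    The fitted 2D conic in each view then defines a cone through the camera center, and intersecting the two cones yields the 3D quadratic curve.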

  17. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be

  18. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  19. Reconstruction of refocusing and all-in-focus images based on forward simulation model of plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhang, Rumin; Liu, Peng; Liu, Dijun; Su, Guobin

    2015-12-01

    In this paper, we establish a forward simulation model of a plenoptic camera, which is implemented by inserting a micro-lens array into a conventional camera. The simulation model is used to emulate how objects at different depths are imaged by the main lens, then remapped by the micro-lenses and finally captured on the 2D sensor. We can easily modify the parameters of the simulation model, such as the focal lengths and diameters of the main lens and micro-lenses and the number of micro-lenses. Employing spatial integration, refocused images and all-in-focus images are rendered from the plenoptic images produced by the model. The forward simulation model can be used to determine the trade-offs between different configurations and to test new research related to plenoptic cameras without the need for a prototype.
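
    The spatial-integration rendering mentioned above amounts to the classical shift-and-add refocusing of a 4D light field. A minimal sketch (not the paper's model), assuming the plenoptic data has already been resampled into sub-aperture images L[u, v, s, t] and using integer shifts for simplicity:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by the shift-and-add method.

    lightfield: 4D array L[u, v, s, t] of sub-aperture images.
    alpha: slope selecting the synthetic focal plane (rounded to integer
    pixel shifts here, a simplification of the continuous formulation).
    """
    nu, nv, ns, nt = lightfield.shape
    cu, cv = nu // 2, nv // 2
    out = np.zeros((ns, nt))
    for u in range(nu):
        for v in range(nv):
            # shift each sub-aperture view proportionally to its aperture offset
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(np.roll(lightfield[u, v], du, axis=0), dv, axis=1)
    return out / (nu * nv)
```

    An all-in-focus image can then be assembled by picking, per pixel, the slope alpha that maximizes a local contrast measure across a stack of such refocused images.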

  20. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging, even with the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurement are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation, since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed by making efficient use of all the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized on a multicore computer and a GPU, respectively.

  1. Tomographic reconstruction of heat release rate perturbations induced by helical modes in turbulent swirl flames

    NASA Astrophysics Data System (ADS)

    Moeck, Jonas P.; Bourgouin, Jean-François; Durox, Daniel; Schuller, Thierry; Candel, Sébastien

    2013-04-01

    Swirl flows with vortex breakdown are widely used in industrial combustion systems for flame stabilization. This type of flow is known to sustain a hydrodynamic instability with a rotating helical structure, one common manifestation of it being the precessing vortex core. The role of this unsteady flow mode in combustion is not well understood, and its interaction with combustion instabilities and flame stabilization remains unclear. It is therefore important to assess the structure of the perturbation in the flame that is induced by this helical mode. Based on principles of tomographic reconstruction, a method is presented to determine the 3-D distribution of the heat release rate perturbation associated with the helical mode. Since this flow instability is rotating, a phase-resolved sequence of projection images of light emitted from the flame is identical to the Radon transform of the light intensity distribution in the combustor volume and thus can be used for tomographic reconstruction. This is achieved with one stationary camera only, a vast reduction in experimental and hardware requirements compared to a multi-camera setup or camera repositioning, which is typically required for tomographic reconstruction. Different approaches to extract the coherent part of the oscillation from the images are discussed. Two novel tomographic reconstruction algorithms specifically tailored to the structure of the heat release rate perturbations related to the helical mode are derived. The reconstruction techniques are first applied to an artificial field to illustrate the accuracy. High-speed imaging data acquired in a turbulent swirl-stabilized combustor setup with strong helical mode oscillations are then used to reconstruct the 3-D structure of the associated perturbation in the flame.
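
    The single-camera idea exploits the fact that a rotating structure sweeps its own projection angles past a fixed camera, so phase-resolved frames form a sinogram. The paper derives tailored reconstruction algorithms; the generic textbook step they build on, unfiltered backprojection of parallel projections, can be sketched as follows (array conventions are assumptions):

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Simple (unfiltered) backprojection of parallel projections.

    sinogram: (n_angles, n_det) array, one row per projection angle.
    For a rotating flame structure viewed by one stationary camera,
    phase-resolved frames play the role of these angular projections.
    """
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, ang in zip(sinogram, angles_deg):
        th = np.deg2rad(ang)
        # detector coordinate of each reconstruction pixel for this view
        s = (xs - c) * np.cos(th) + (ys - c) * np.sin(th) + c
        idx = np.clip(np.round(s).astype(int), 0, sinogram.shape[1] - 1)
        recon += proj[idx]
    return recon / len(angles_deg)
```

    Unfiltered backprojection blurs the result; practical reconstructions apply a ramp filter to each projection first or, as in the paper, use algorithms tailored to the helical-mode structure.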

  2. Digital reconstruction of Young's fringes using Fresnel transformation

    NASA Astrophysics Data System (ADS)

    Kulenovic, Rudi; Song, Yaozu; Renninger, P.; Groll, Manfred

    1997-11-01

    This paper deals with the digital numerical reconstruction of Young's fringes from laser speckle photography by means of the Fresnel transformation. The physical model of the optical reconstruction of a specklegram is a near-field Fresnel diffraction phenomenon, which can be mathematically described by the Fresnel transformation. Therefore, the interference phenomena can be directly calculated by a microcomputer. If, in addition, a CCD camera is used for specklegram recording, the measurement procedure and evaluation process can be carried out entirely digitally. Compared with conventional laser speckle photography, no holographic plates, no wet development process and no optical specklegram reconstruction are needed. These advantages promise wide applicability in scientific and engineering applications. The basic principle of the numerical reconstruction is described, the effects of experimental parameters on Young's fringes are analyzed and representative results are presented.
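
    The numerical reconstruction reduces to a discrete Fresnel transform, which can be evaluated with a single FFT after multiplying the input field by a quadratic phase factor. A minimal one-dimensional sketch under assumed sampling conventions (not the authors' implementation):

```python
import numpy as np

def fresnel_transform(u0, wavelength, dx, z):
    """Single-FFT discrete Fresnel transform (near-field diffraction).

    u0: complex input field sampled at pitch dx; z: propagation distance.
    Returns the (unnormalized) diffracted field; in this single-step
    formulation the output sampling pitch differs from the input pitch.
    """
    n = u0.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * dx
    # quadratic phase factor inside the Fresnel integral
    chirp = np.exp(1j * k * x**2 / (2 * z))
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(u0 * chirp)))
```

    Because the chirp has unit modulus, Parseval's theorem gives a quick consistency check on the implementation.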

  3. 3D reconstruction of hollow parts analyzing images acquired by a fiberscope

    NASA Astrophysics Data System (ADS)

    Icasio-Hernández, Octavio; Gonzalez-Barbosa, José-Joel; Hurtado-Ramos, Juan B.; Viliesid-Alonso, Miguel

    2014-07-01

    A modified fiberscope used to reconstruct difficult-to-reach inner structures is presented. By substituting the fiberscope’s original illumination system, we can project a profile-revealing light line inside the object of study. The light line is obtained using a sandwiched power light-emitting diode (LED) attached to an extension arm on the tip of the fiberscope. Profile images from the interior of the object are then captured by a camera attached to the fiberscope’s eyepiece. Using a series of those images at different positions, the system is capable of generating a 3D reconstruction of the object with submillimeter accuracy. Also proposed is the use of a combination of known filters to remove the honeycomb structures produced by the fiberscope and the use of ring gages to obtain the extrinsic parameters of the camera attached to the fiberscope and the metrological traceability of the system. Several standard ring diameter measurements were compared against their certified values to improve the accuracy of the system. To exemplify an application, a 3D reconstruction of the interior of a refrigerator duct was conducted. This reconstruction includes accuracy assessment by comparing the measurements of the system to a coordinate measuring machine. The system, as described, is capable of 3D reconstruction of the interior of objects with uniform and non-uniform profiles from 10 to 60 mm in transversal dimensions and a depth of 1000 mm if the material of the walls of the object is translucent and allows the detection of the power LED light from the exterior through the wall. If this is not possible, we propose the use of a magnetic scale which reduces the working depth to 170 mm. The assessed accuracy is around ±0.15 mm in 2D cross-section reconstructions and ±1.3 mm in 1D position using a magnetic scale, and ±0.5 mm using a CCD camera.

  4. Reconstructing surface wave profiles from reflected acoustic pulses using multiple receivers.

    PubMed

    Walstead, Sean P; Deane, Grant B

    2014-08-01

    Surface wave shapes are determined by analyzing underwater reflected acoustic signals collected at multiple receivers. The transmitted signals are of nominal frequency 300 kHz and are reflected off surface gravity waves that are paddle-generated in a wave tank. An inverse processing algorithm reconstructs 50 surface wave shapes over a length span of 2.10 m. The inverse scheme uses a broadband forward scattering model based on Kirchhoff's diffraction formula to determine wave shapes. The surface reconstruction algorithm is self-starting in that source and receiver geometry and initial estimates of wave shape are determined from the same acoustic signals used in the inverse processing. A high speed camera provides ground-truth measurements of the surface wave field for comparison with the acoustically derived surface waves. Within Fresnel zone regions the statistical confidence of the inversely optimized surface profile exceeds that of the camera profile. Reconstructed surfaces are accurate to a resolution of about a quarter-wavelength of the acoustic pulse only within Fresnel zones associated with each source and receiver pair. Multiple isolated Fresnel zones from multiple receivers extend the spatial extent of accurate surface reconstruction while overlapping Fresnel zones increase confidence in the optimized profiles there.

  5. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that realizes simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
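
    With seven mirror views available, each electrode seen in two or more views can be located by standard linear triangulation. A minimal direct-linear-transform (DLT) sketch; the system's actual calibration and electrode-matching steps are omitted, and the projection-matrix and point formats are assumptions:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Returns the 3D point minimizing the algebraic error via SVD.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the null vector of A is the homogeneous 3D point
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

    With noise-free synthetic cameras the null space of A recovers the point exactly; with real measurements the SVD solution is a least-squares estimate.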

  6. First Test Of A New High Resolution Positron Camera With Four Area Detectors

    NASA Astrophysics Data System (ADS)

    van Laethem, E.; Kuijk, M.; Deconinck, Frank; van Miert, M.; Defrise, Michel; Townsend, D.; Wensveen, M.

    1989-10-01

    A PET camera consisting of two pairs of parallel area detectors has been installed at the cyclotron unit of VUB. The detectors are High Density Avalanche Chambers (HIDAC) wire-chambers with a stack of 4 or 6 lead gamma-electron converters, the sensitive area being 30 by 30 cm. The detectors are mounted on a commercial gantry allowing a 180 degree rotation during acquisition, as needed for a fully 3D image reconstruction. The camera has been interfaced to a token-ring computer network consisting of 5 workstations among which the various tasks (acquisition, reconstruction, display) can be distributed. Each coincident event is coded in 48 bits and is transmitted to the computer bus via a 512 kbytes dual ported buffer memory allowing data rates of up to 50 kHz. Fully 3D image reconstruction software has been developed, and includes new reconstruction algorithms allowing a better utilization of the available projection data. Preliminary measurements and imaging of phantoms and small animals (with 18FDG) have been performed with two of the four detectors mounted on the gantry. They indicate the expected 3D isotropic spatial resolution of 3.5 mm (FWHM, line source in air) and a sensitivity of 4 cps/μCi for a centred point source in air, corresponding to typical data rates of a few kHz. This latter figure is expected to improve by a factor of 4 after coupling of the second detector pair, since the coincidence sensitivity of this second detector pair is a factor 3 higher than that of the first one.

  7. Iterative reconstruction of volumetric particle distribution

    NASA Astrophysics Data System (ADS)

    Wieneke, Bernhard

    2013-02-01

    For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), recording images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART) followed by cross-correlation of sub-volumes, computing instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, similar to MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, which may differ for different positions in the volume and for each camera. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy to Tomo-PIV. Finally, the method is validated with experimental data.
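
    The voxel-based baseline against which the hybrid method is compared is MART, whose multiplicative update is compact enough to sketch. A toy implementation under assumed conventions (rows of the weight matrix give how much each voxel contributes to each camera pixel/ray); this is the generic algorithm, not the paper's hybrid variant:

```python
import numpy as np

def mart(W, I, n_vox, iters=50, mu=0.9):
    """Multiplicative Algebraic Reconstruction Technique (MART).

    W: (n_rays, n_vox) weight matrix; I: recorded pixel intensities.
    Returns non-negative voxel intensities E.
    """
    E = np.ones(n_vox)
    for _ in range(iters):
        for i in range(W.shape[0]):
            proj = W[i] @ E
            if proj > 0 and I[i] > 0:
                # multiplicative update keeps E non-negative
                E *= (I[i] / proj) ** (mu * W[i])
            elif I[i] == 0:
                # a zero pixel zeroes every voxel on its line of sight
                E[W[i] > 0] = 0.0
    return E
```

    The relaxation parameter mu < 1 damps the update; zero-intensity pixels immediately remove ghost candidates along their lines of sight.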

  8. 3D reconstruction and analysis of wing deformation in free-flying dragonflies.

    PubMed

    Koehler, Christopher; Liang, Zongxian; Gaston, Zachary; Wan, Hui; Dong, Haibo

    2012-09-01

    Insect wings demonstrate elaborate three-dimensional deformations and kinematics. These deformations are key to understanding many aspects of insect flight, including aerodynamics, structural dynamics and control. In this paper, we propose a template-based subdivision surface reconstruction method that is capable of reconstructing the wing deformations and kinematics of free-flying insects based on the output of a high-speed camera system. The reconstruction method makes no rigid-wing assumptions and allows for an arbitrary arrangement of marker points on the interior and edges of each wing. The resulting wing surfaces are projected back into image space and compared with expert segmentations to validate reconstruction accuracy. A least squares plane is then proposed as a universal reference to aid in making repeatable measurements of the reconstructed wing deformations. Using an Eastern pondhawk (Erythemis simplicicollis) dragonfly for demonstration, we quantify and visualize the wing twist and camber in both the chord-wise and span-wise directions, and discuss the implications of the results. In particular, a detailed analysis of the subtle deformation in the dragonfly's right hindwing suggests that the muscles near the wing root could be used to induce chord-wise camber in the portion of the wing nearest the specimen's body. We conclude by proposing a novel technique for modeling wing corrugation in the reconstructed flapping wings. In this method, displacement mapping is used to combine wing surface details measured from static wings with the reconstructed flapping wings, without requiring any additional information to be tracked in the high-speed camera output.

  9. Counting neutrons with a commercial S-CMOS camera

    NASA Astrophysics Data System (ADS)

    Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega

    2018-01-01

    It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, together with off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work, which results from the collaboration between ESS Bilbao and the ILL. The main progress to be reported concerns the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable "neutron impact" data flow, as from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with, and results will be shown for different setups improving the light collection on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be efficient in their recognition of neutron signals and in their rejection of noise signals (internal and external to the camera), but they also have to be simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has already been walked.
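
    A simple off-line stand-in for the impact-recognition step is thresholding a frame and counting connected bright regions as candidate neutron flashes. The FPGA pipeline described above is far more elaborate (noise rejection, real-time constraints), so the following is only an illustrative sketch with an assumed threshold:

```python
import numpy as np

def count_flashes(frame, threshold):
    """Count candidate neutron impacts as connected bright regions.

    frame: 2D intensity array; pixels above threshold are grouped into
    4-connected components, each counted as one candidate impact.
    """
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                count += 1
                seen[y, x] = True
                stack = [(y, x)]  # flood-fill one component
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count
```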

  10. Transport-aware imaging

    NASA Astrophysics Data System (ADS)

    Kutulakos, Kyros N.; O'Toole, Matthew

    2015-03-01

    Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.

  11. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  12. Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms

    PubMed Central

    Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg

    2013-01-01

    Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387

  13. 3D Surface Reconstruction of Rills in a Spanish Olive Grove

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Seeger, Manuel; Wirtz, Stefan; Taguas, Encarnación; Ries, Johannes B.

    2016-04-01

    The low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique is used for 3D surface reconstruction and difference calculation of an 18-meter-long rill in southern Spain (Andalusia, Puente Genil). The images were taken with a Canon HD video camera before and after a rill experiment in an olive grove. Compared to a photo camera, recording with a video camera offers a huge time advantage, and the method also guarantees more than adequately overlapping sharp images. For each model, approximately 20 minutes of video were taken. As SfM needs single images, the sharpest image was automatically selected from each interval of 8 frames. The sharpness was estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs and recovers the camera and feature positions. Finally, by triangulation of camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models, a visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real values must be obtained via reference measurements. The results show that rills in olive groves are highly dynamic due to the lack of vegetation cover under the trees, so that the rill can incise down to the bedrock. Another reason for the high activity is the intensive use of machinery.
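
    The derivative-based sharpness metric used to pick one frame per interval is not specified in detail, so as an assumed example, a common choice is the variance of a discrete Laplacian:

```python
import numpy as np

def sharpness(img):
    """Derivative-based sharpness score: variance of a discrete Laplacian."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def sharpest_frame(frames):
    """Pick the sharpest frame from an interval (e.g. one 8-frame window)."""
    return max(frames, key=sharpness)
```

    Blurring suppresses high spatial frequencies, so a blurred copy of a frame always scores lower than the original, which is what makes the metric usable for frame selection.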

  14. Measuring the performance of super-resolution reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet

    2012-06-01

    For many military operations, situational awareness is of great importance. This situational awareness and related tasks such as Target Acquisition can be acquired using cameras, of which the resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First of all, frequency-based evaluation techniques alone will not provide a correct answer, because they are unable to discriminate between structure-related and noise-related effects. Secondly, most super-resolution packages perform additional image enhancement techniques such as noise reduction and edge enhancement. As these algorithms improve the results, they cannot be evaluated separately. Thirdly, a single high-resolution ground truth is rarely available. Therefore, evaluating the differences between the estimated high-resolution image and its ground truth is not straightforward. Fourthly, different artifacts can occur due to super-resolution reconstruction, which are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are based on processing of dedicated (synthetic) imagery. Other evaluation techniques can be applied to both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.

  15. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment and 3) after a rill experiment. Compared to a photo camera, recording with a video camera offers not only a huge time advantage; the method also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames. The sharpness was estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs and recovers the camera positions; finally, by triangulation of camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models, a visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low contrast of the surface, too much motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which improves the difference calculations.

  16. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  17. Automated Reconstruction of Neural Trees Using Front Re-initialization

    PubMed Central

    Mukherjee, Amit; Stepanyants, Armen

    2013-01-01

    This paper proposes a greedy algorithm for the automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and end points, it is not sufficient for the reconstruction of neural trees. This is because sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem, we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. A robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. The qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching is continued until the stopping criterion is met. To evaluate the performance of the algorithm, we reconstructed 6 stacks of two-photon microscopy images and compared the results to ground truth reconstructions using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers. PMID:24386539
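
    The minimum cumulative-cost trace that the Fast Marching Method provides in the continuous setting has a discrete analogue in Dijkstra's algorithm on a pixel grid, which is compact enough to sketch. Grid costs and 4-connectivity are simplifying assumptions; this is not the paper's front re-initialization scheme, only the underlying minimum cost path idea:

```python
import heapq
import numpy as np

def min_cost_path(cost, start, end):
    """Minimum cumulative-cost path on a 2D grid via Dijkstra's algorithm,
    a discrete stand-in for the continuous Fast Marching Method."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue  # stale heap entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # backtrack from end to start
    node, path = end, [end]
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[end]
```

    The failure mode motivating the paper is visible here too: if background pixels are cheap, the optimal path happily crosses them, which is why the authors add front re-initialization and a likelihood ratio test on top.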

  18. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  19. A low-count reconstruction algorithm for Compton-based prompt gamma imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hsuan-Ming; Liu, Chih-Chieh; Jan, Meei-Ling; Lee, Ming-Wei

    2018-04-01

    The Compton camera is an imaging device which has been proposed to detect prompt gammas (PGs) produced by proton–nuclear interactions within tissue during proton beam irradiation. Compton-based PG imaging has been developed to verify proton ranges because PG rays, particularly characteristic ones, have strong correlations with the distribution of the proton dose. However, accurate image reconstruction from characteristic PGs is challenging because the detector efficiency and resolution are generally low. Our previous study showed that point spread functions can be incorporated into the reconstruction process to improve image resolution. In this study, we proposed a low-count reconstruction algorithm to improve the image quality of a characteristic PG emission by pooling information from other characteristic PG emissions. PGs were simulated from a proton beam irradiated on a water phantom, and a two-stage Compton camera was used for PG detection. The results show that the image quality of the reconstructed characteristic PG emission is improved with our proposed method in contrast to the standard reconstruction method using events from only one characteristic PG emission. For the 4.44 MeV PG rays, both methods can be used to predict the positions of the peak and the distal falloff with a mean accuracy of 2 mm. Moreover, only the proposed method can improve the estimated positions of the peak and the distal falloff of 5.25 MeV PG rays, and a mean accuracy of 2 mm can be reached.
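
    Iterative reconstruction for emission data of this kind is commonly built on MLEM-style updates; the authors' pooled low-count method is more involved, but the basic multiplicative maximum-likelihood update for Poisson counts can be sketched as follows (matrix shapes are assumptions):

```python
import numpy as np

def mlem(A, y, iters=100):
    """Maximum-likelihood EM reconstruction for Poisson count data.

    A: (n_meas, n_vox) system matrix; y: measured counts.
    The multiplicative update keeps the estimate non-negative.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity (normalization) image
    for _ in range(iters):
        proj = A @ x
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

    Pooling information across several characteristic PG lines, as the paper proposes, would amount to coupling several such systems through shared spatial structure.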

  20. Blob-enhanced reconstruction technique

    NASA Astrophysics Data System (ADS)

    Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2016-09-01

    A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy, is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first guess intensity distribution is produced with a standard algebraic method, then the distribution is rebuilt as a sum of Gaussian blobs, based on location, intensity and size of agglomerates of light intensity surrounding local maxima. The blobs substitution regularizes the particle shape allowing a reduction of the particles discretization errors and of their elongation in the depth direction. The performances of the blob-enhanced reconstruction technique (BERT) are assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as predictor for the following iterations), have been tested. The results confirm the enhancement in the velocity measurements accuracy, demonstrating a reduction of the bias error due to the ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blobs distributions as a predictor enables further improvement of the convergence of the reconstruction algorithm, with the improvement being more considerable when substituting the blobs more than once during the process. The BERT process is also applied to multi resolution (MR) CSMART reconstructions, permitting simultaneously to achieve remarkable improvements in the flow field measurements and to benefit from the reduction in computational time due to the MR approach. 
Finally, BERT is also tested on experimental data, obtaining an increase of the
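    The blob-substitution step can be sketched in a few lines: find local intensity maxima in the first-guess field and rebuild the distribution as a sum of Gaussian blobs centred on them. This is a minimal 2D illustration with hypothetical blob size and search radius, not the authors' CSMART implementation:

```python
import numpy as np

def substitute_blobs(field, sigma=0.8, radius=2):
    """Rebuild an intensity field as a sum of Gaussian blobs centred on
    local maxima (hypothetical blob size/search radius, 2D for brevity)."""
    out = np.zeros_like(field)
    nx, ny = field.shape
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            patch = field[i - 1:i + 2, j - 1:j + 2]
            if field[i, j] > 0 and field[i, j] == patch.max():
                for di in range(-radius, radius + 1):
                    for dj in range(-radius, radius + 1):
                        xi, yj = i + di, j + dj
                        if 0 <= xi < nx and 0 <= yj < ny:
                            # Gaussian blob scaled to the peak intensity
                            out[xi, yj] += field[i, j] * np.exp(
                                -(di ** 2 + dj ** 2) / (2 * sigma ** 2))
    return out

# an irregular agglomerate around one maximum becomes a regular blob
f = np.zeros((9, 9))
f[4, 4], f[4, 5] = 1.0, 0.6
g = substitute_blobs(f)
```

    In 3D this regularization is what counteracts particle elongation along the depth direction.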

  1. Imaging of turbulent structures and tomographic reconstruction of TORPEX plasma emissivity

    NASA Astrophysics Data System (ADS)

    Iraji, D.; Furno, I.; Fasoli, A.; Theiler, C.

    2010-12-01

    In the TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)] simple magnetized plasma device, low-frequency electrostatic fluctuations associated with interchange waves are routinely measured by means of extensive sets of Langmuir probes. To complement the electrostatic probe measurements of plasma turbulence and to study plasma structures smaller than the spatial resolution of the probe array, a nonperturbative direct imaging system has been developed on TORPEX, including a fast-framing Photron-APX-RS camera and an image intensifier unit. From the line-integrated camera images, we compute the poloidal emissivity profile of the plasma by applying a tomographic reconstruction technique using a pixel method and solving an overdetermined set of equations by singular value decomposition. This allows comparing the statistical, spectral, and spatial properties of visible light radiation with electrostatic fluctuations. The shape and position of the time-averaged reconstructed plasma emissivity are observed to be similar to those of the ion saturation current profile. In the core plasma, excluding the electron cyclotron and upper hybrid resonant layers, the mean value of the plasma emissivity is observed to vary as (Te)^α(ne)^β, with α = 0.25-0.7 and β = 0.8-1.4, in agreement with a collisional radiative model. The tomographic reconstruction is applied to fast camera movies acquired at a rate of 50 kframes/s with 2 μs exposure time to obtain the temporal evolution of the emissivity fluctuations. Conditional average sampling is also applied to visualize and measure the sizes of structures associated with the interchange mode. The ω-time and two-dimensional k-space Fourier analyses of the reconstructed emissivity fluctuations show the same interchange mode that is detected in the ω and k spectra of the ion saturation current fluctuations measured by probes. Small-scale turbulent plasma structures can be detected and tracked in the reconstructed emissivity movies.
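    The pixel-method solve mentioned above, an overdetermined linear system inverted by singular value decomposition, can be illustrated with a toy geometry matrix (the chord weights below are random stand-ins, not the TORPEX viewing geometry):

```python
import numpy as np

# Pixel-method tomography sketch: each camera line of sight integrates the
# emissivity over the pixels it crosses, giving b = A @ x with more chords
# than pixels (overdetermined). A here is a toy stand-in geometry matrix.
rng = np.random.default_rng(0)
n_pixels, n_chords = 16, 40
A = rng.random((n_chords, n_pixels))   # chord/pixel weights
x_true = rng.random(n_pixels)          # "true" emissivity per pixel
b = A @ x_true                         # line-integrated signals

# Solve the overdetermined system by singular value decomposition
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_rec = Vt.T @ ((U.T @ b) / s)         # pseudo-inverse (least-squares) solution
```

    With noisy data, small singular values would typically be truncated or damped before inverting.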

  3. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  4. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem in the last few decades. However, few methods target the joint calibration of multi-sensor setups (more than four devices), which is normally a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time project system, with accuracy higher than the manufacturer's calibration.
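    The weighted joint cost described above can be sketched schematically. The residuals and per-camera weights below are toy values, and the real method optimizes the relative poses rather than merely evaluating a cost:

```python
import numpy as np

def joint_cost(residuals_per_camera, weights):
    """Sum of weighted squared reprojection residuals over all devices,
    so that cameras in more reliable locations count more heavily."""
    total = 0.0
    for r, w in zip(residuals_per_camera, weights):
        total += w * float(np.sum(np.asarray(r) ** 2))
    return total

# hypothetical residuals for a Kinect and two external cameras
r = [np.array([0.5, -0.2]), np.array([0.1]), np.array([0.3, 0.3])]
w = [1.0, 2.0, 0.5]                     # hypothetical location weights
cost = joint_cost(r, w)
```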

  6. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  7. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between estimated and measured values was small: the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The 3D reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the most accurate and yielded the largest number of fine-scale surface details. The results confirmed the practicability of our new method for reconstructing fine-scale plant models and accurately estimating plant parameters, and showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348
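    Accuracy figures of this kind follow from standard error statistics between model-derived and hand-measured values; a minimal sketch with made-up sample values:

```python
import numpy as np

def rmse_and_r2(estimated, measured):
    """RMSE and coefficient of determination between estimated (from the
    3D model) and hand-measured plant variables."""
    e = np.asarray(estimated, float)
    m = np.asarray(measured, float)
    rmse = float(np.sqrt(np.mean((e - m) ** 2)))
    ss_res = float(np.sum((m - e) ** 2))
    ss_tot = float(np.sum((m - m.mean()) ** 2))
    return rmse, 1.0 - ss_res / ss_tot

# hypothetical leaf-length values in mm (model vs. calipers)
rmse, r2 = rmse_and_r2([10.1, 20.2, 29.8], [10.0, 20.0, 30.0])
```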

  8. Reconstructed images of 4 Vesta.

    NASA Astrophysics Data System (ADS)

    Drummond, J.; Eckart, A.; Hege, E. K.

    The first glimpses of an asteroid's surface have been obtained from images of 4 Vesta reconstructed from speckle interferometric observations made with Harvard's PAPA camera coupled to Steward Observatory's 2.3 m telescope. Vesta is found to have a "normal" triaxial ellipsoid shape of 566(±15) × 532(±15) × 466(±15) km. Its rotational pole lies within 4° of ecliptic long. = 327°, lat. = +55°. Reconstructed images obtained with the power spectra and Knox-Thompson cross-spectra reveal dark and bright patterns reminiscent of the Moon. Three bright and three dark areas are visible and, when combined with an inferred seventh bright region not visible during the rotational phases covered in the authors' run, lead to lightcurves that match Vesta's lightcurve history.

  9. Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras

    NASA Astrophysics Data System (ADS)

    Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich

    2018-01-01

    A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations on an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets is set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of beam path measurement by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera; in a next step, two cameras arranged at 90° to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were verified experimentally, confirming previous experimental results. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was investigated successfully.
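    A minimal sketch of the residual-gas monitoring geometry: each of the two cameras sees the beam's glow projected onto one transverse axis, and an intensity-weighted centroid of each profile locates the beam in the plane (synthetic profiles, hypothetical pixel units):

```python
import numpy as np

def centroid(profile, coords):
    """Intensity-weighted centroid of a residual-gas light profile."""
    profile = np.asarray(profile, float)
    return float(np.sum(profile * coords) / np.sum(profile))

coords = np.arange(5)        # pixel coordinate along each camera's axis
cam_a = [0, 1, 4, 1, 0]      # glow projected onto the vertical axis
cam_b = [0, 0, 1, 2, 1]      # glow projected onto the horizontal axis

# the two 90-degree views together give the transverse beam position
beam_yx = (centroid(cam_a, coords), centroid(cam_b, coords))
```

    Repeating this at several positions along the pipe traces out the beam path through the toroidal field.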

  10. Quality Assessment of 3d Reconstruction Using Fisheye and Perspective Sensors

    NASA Astrophysics Data System (ADS)

    Strecha, C.; Zoller, R.; Rutishauser, S.; Brot, B.; Schneider-Zapp, K.; Chovancova, V.; Krull, M.; Glassey, L.

    2015-03-01

    Recent mathematical advances, growing alongside the use of unmanned aerial vehicles, have not only overcome the restriction of roll and pitch angles during flight but also enabled the use of non-metric cameras in photogrammetric methods, providing more flexibility for sensor selection. Fisheye cameras, for example, advantageously provide images with wide coverage; however, these images are extremely distorted, and their non-uniform resolution makes them more difficult to use for mapping or terrestrial 3D modelling. In this paper, we compare the usability of different camera-lens combinations, using the complete workflow implemented in Pix4Dmapper to achieve the final terrestrial reconstruction result of a well-known historical site in Switzerland: the Chillon Castle. We assess the accuracy of the outcome acquired by consumer cameras with perspective and fisheye lenses, comparing the results to a laser scanner point cloud.

  11. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.

    2010-11-01

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency-Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (temporal and angular multiplexing) improve data storage and processing capability, enabling a compact FDT system with a single spectrometer.

  12. High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.

    2012-07-01

    The recording of high-resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach, where techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industrial cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high-resolution, high-accuracy point cloud for each shot: up to 3.5 million 3D points with sub-mm accuracy can be derived per shot with the proposed system. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.

  13. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images; the lunar terrain can then be reconstructed based on photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied, and the observation program and specific solution methods for the installation parameters are introduced. The parametric solution accuracy is analyzed according to observations obtained in the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion
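    The role of the installation parameters can be illustrated with a simple frame conversion: a rotation and translation of the camera frame with respect to the lander frame map camera-frame points into lander coordinates (illustrative values, not the measured CE-5 parameters):

```python
import numpy as np

def camera_to_lander(p_cam, yaw_deg, t):
    """Convert a camera-frame point to lander coordinates, p_l = R @ p_c + t,
    with R a rotation about the platform axis (toy installation parameters)."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ np.asarray(p_cam, float) + np.asarray(t, float)

# a point 1 m ahead of the camera, platform rotated 90 deg, mounted 0.5 m up
p = camera_to_lander([1.0, 0.0, 0.0], 90.0, [0.0, 0.0, 0.5])
```

    Errors in R and t propagate directly into the exterior orientation of every image, which is why they must be measured to sub-pixel effect.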

  14. 3D surface reconstruction for laparoscopic computer-assisted interventions: comparison of state-of-the-art methods

    NASA Astrophysics Data System (ADS)

    Groch, A.; Seitel, A.; Hempel, S.; Speidel, S.; Engelbrecht, R.; Penne, J.; Höller, K.; Röhl, S.; Yung, K.; Bodenstedt, S.; Pflaum, F.; dos Santos, T. R.; Mersmann, S.; Meinzer, H.-P.; Hornegger, J.; Maier-Hein, L.

    2011-03-01

    One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of pre-operative planning images with the patient's anatomy. One popular approach for achieving this involves intraoperative 3D reconstruction of the target organ's surface with methods based on multiple-view geometry. The latter, however, require robust and fast algorithms for establishing correspondences between multiple images of the same scene. Recently, the first endoscope based on Time-of-Flight (ToF) camera technology was introduced. It generates dense range images with high update rates by continuously measuring the run-time of intensity-modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four different cameras: a novel ToF endoscope, a standard ToF camera, a stereoscope, and a High Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared to corresponding ground-truth shapes extracted from computed tomography (CT) data using a set of local and global distance metrics. The evaluation suggests that the ToF technique has high potential as a means for intraoperative endoscopic surface registration.
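    Distance metrics of the kind used in such comparisons can be sketched as nearest-neighbour distances from each reconstructed surface point to the ground-truth shape, summarised by mean and maximum (a brute-force illustration on toy points):

```python
import numpy as np

def surface_distances(recon, truth):
    """Mean and max nearest-neighbour distance from a reconstructed
    partial surface to a ground-truth point set (brute force)."""
    recon = np.asarray(recon, float)
    truth = np.asarray(truth, float)
    d = np.linalg.norm(recon[:, None, :] - truth[None, :, :], axis=2)
    nearest = d.min(axis=1)          # per-point distance to ground truth
    return float(nearest.mean()), float(nearest.max())

truth = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
recon = np.array([[0., 0., 0.1], [1., 0., 0.], [0., 1.2, 0.]])
mean_d, max_d = surface_distances(recon, truth)
```

    Real evaluations use dense CT-derived meshes and spatial indexing rather than an all-pairs distance matrix.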

  15. A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research

    PubMed Central

    Lewis, Matthew A.; Richer, Edmond; Slavine, Nikolai V.; Kodibagkar, Vikram D.; Soesbe, Todd C.; Antich, Peter P.; Mason, Ralph P.

    2013-01-01

    Bioluminescent imaging (BLI) of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung. PMID:26824926

  16. Tomographic reconstruction of tracer gas concentration profiles in a room with the use of a single OP-FTIR and two iterative algorithms: ART and PWLS.

    PubMed

    Park, D Y; Fessler, J A; Yost, M G; Levine, S P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 x 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path-integral concentrations) in the experimental setting. A new CT algorithm, called penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of the highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when fairly high error was embedded into some experimentally measured path-integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path-integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, is a system applicable to both environmental and industrial settings.
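    The ART step can be sketched as a Kaczmarz iteration: each beam-path equation is enforced in turn by projecting the current pixel estimate onto its hyperplane. A toy, well-determined geometry is used here; the chamber's 56-beam, 195-pixel layout is underdetermined, which is exactly why plain ART struggled there:

```python
import numpy as np

def art(A, b, n_sweeps=500):
    """ART/Kaczmarz: cyclically project the estimate onto each
    path-integral constraint a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x = x + (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])       # 4 toy beam paths over 3 "pixels"
b = A @ np.array([2., 1., 3.])     # consistent path-integral data
x = art(A, b)
```

    PWLS differs by minimizing a penalized, noise-weighted least-squares objective instead of enforcing each measurement exactly, which is what tames the noise peaks in the underdetermined case.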

  18. MapMaker and PathTracer for tracking carbon in genome-scale metabolic models

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2016-01-01

    Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites. PMID:26771089
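    PathTracer's path search can be illustrated with a plain breadth-first search over carbon-transfer edges; the tiny network below is hypothetical and far simpler than iJO1366's 2,251 reactions:

```python
from collections import deque

# Toy carbon-transfer map: metabolite -> products that receive its carbon.
carbon_edges = {
    "glucose": ["g6p"],
    "g6p": ["f6p"],
    "f6p": ["fbp"],
    "fbp": ["dhap", "g3p"],
    "g3p": ["pep"],
    "pep": ["pyruvate"],
}

def carbon_path(start, goal):
    """Shortest carbon-tracking path between two metabolites (BFS)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in carbon_edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = carbon_path("glucose", "pyruvate")
```

    The real tool additionally filters candidate paths against COBRA flux constraints, so only thermodynamically and stoichiometrically feasible routes survive.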

  19. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for line following. The algorithm images two windows and locates their centroids; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test-bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
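    The guidance geometry reduces to a short computation: the two window centroids give two ground-plane points on the line, from which the line's heading angle and its perpendicular distance from the robot centroid (taken here as the origin) follow directly. Ground coordinates are assumed already recovered from the image-to-ground relationship:

```python
import math

def line_angle_and_distance(p1, p2):
    """Heading angle of the line through two ground-plane points, and its
    minimum (perpendicular) distance from the robot centroid at the origin."""
    (x1, y1), (x2, y2) = p1, p2
    angle = math.atan2(y2 - y1, x2 - x1)
    dist = abs((x2 - x1) * y1 - (y2 - y1) * x1) / math.hypot(x2 - x1, y2 - y1)
    return angle, dist

# a line 1 m to the robot's side, parallel to its heading (toy values)
angle, dist = line_angle_and_distance((1.0, 0.0), (1.0, 2.0))
```

    These two quantities are exactly what a steering controller needs: the angle drives heading correction and the distance drives lateral correction.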

  20. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.

  1. MODIFIED PATH METHODOLOGY FOR OBTAINING INTERVAL-SCALED POSTURAL ASSESSMENTS OF FARMWORKERS.

    PubMed

    Garrison, Emma B; Dropkin, Jonathan; Russell, Rebecca; Jenkins, Paul

    2018-01-29

    Agricultural workers perform tasks that frequently require awkward and extreme postures that are associated with musculoskeletal disorders (MSDs). The PATH (Posture, Activity, Tools, Handling) system currently provides a sound methodology for quantifying workers' exposure to these awkward postures on an ordinal scale of measurement, which places restrictions on the choice of analytic methods. This study reports a modification of the PATH methodology that instead captures these postures as degrees of flexion, an interval-scaled measurement. Rather than making live observations in the field, as in PATH, the postural assessments were performed on photographs using ImageJ photo analysis software. Capturing the postures in photographs permitted more careful measurement of the degrees of flexion. The current PATH methodology requires that the observer in the field be trained in the use of PATH, whereas the single photographer used in this modification requires only sufficient training to maintain the proper camera angle. Ultimately, these interval-scale measurements could be combined with other quantitative measures, such as those produced by electromyograms (EMGs), to provide more sophisticated estimates of future risk for MSDs. Further, these data can provide a baseline from which the effects of interventions designed to reduce hazardous postures can be calculated with greater precision. Copyright© by the American Society of Agricultural Engineers.
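    The interval-scaled measurement itself reduces to computing degrees of flexion from landmark points digitised on a photograph, much as an angle tool in photo-analysis software does; a sketch with hypothetical pixel coordinates:

```python
import math

def flexion_degrees(a, joint, b):
    """Angle at `joint` formed by landmarks a and b (e.g. hip-knee-ankle),
    from 2D pixel coordinates digitised on a photograph."""
    v1 = (a[0] - joint[0], a[1] - joint[1])
    v2 = (b[0] - joint[0], b[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# hypothetical landmark pixels: a right-angle posture
angle = flexion_degrees((0, 0), (0, 1), (1, 1))
```

    Because the result is in degrees rather than ordinal categories, it can be averaged, regressed, and combined with EMG measures directly.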

  2. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10⁻⁵ s is used with a pixelated phase mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface, it is possible to make the reference and object beams with orthogonal polarization states coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
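    The per-pixel phase computation in a pixelated phase-mask camera follows the standard four-bucket formula, with the four phase-stepped interferograms coming from neighbouring mask elements; a sketch with synthetic intensities:

```python
import math

def four_bucket_phase(i0, i90, i180, i270):
    """Four-step phase-shifting formula: phi = atan2(I270 - I90, I0 - I180)."""
    return math.atan2(i270 - i90, i0 - i180)

# synthetic fringes I(k) = 1 + cos(phi + k*90deg), k = 0..3
phi_true = 0.7
samples = [1 + math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = four_bucket_phase(*samples)
```

    Because all four samples are captured in one camera frame, the phase (and hence the optical path difference) is obtained instantaneously, which is what permits measurements of vibration and air turbulence.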

  3. An optimal algorithm for reconstructing images from binary measurements

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin

    2010-01-01

    We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low-light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high-quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex; an optimal solution can therefore be achieved using convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient and the product of a vector with the Hessian matrix of the negative log-likelihood function. We show that, with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
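    For threshold T = 1, the flavour of the estimator can be shown in closed form in the simplest setting: if N binary sub-pixels cover one conventional pixel receiving λ photons on average (λ/N each, Poisson), then P(sub-pixel = 0) = exp(-λ/N), and maximizing the likelihood given m ones out of N yields the expression below. This is a sketch of the estimator's behaviour under that simplifying assumption, not the paper's filter-bank implementation:

```python
import math

def mle_light_intensity(m_ones, n_binary):
    """Closed-form MLE of the mean photon count lambda for one conventional
    pixel, from m ones among n binary (threshold T = 1) sub-pixels."""
    if m_ones >= n_binary:
        return float("inf")          # all sub-pixels fired: MLE diverges
    return -n_binary * math.log(1.0 - m_ones / n_binary)

# e.g. 100 of 256 binary sub-pixels detected at least one photon
lam = mle_light_intensity(100, 256)
```

    The logarithm is the source of the camera's logarithmic response: near saturation, each additional firing sub-pixel implies a disproportionately larger light intensity.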

  4. Development of a Remote Accessibility Assessment System through three-dimensional reconstruction technology.

    PubMed

    Kim, Jong Bae; Brienza, David M

    2006-01-01

    A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models, analyzing the accuracy of dimensional measurements in a virtual environment and comparing dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.

  5. The first demonstration of the concept of "narrow-FOV Si/CdTe semiconductor Compton camera"

    NASA Astrophysics Data System (ADS)

    Ichinohe, Yuto; Uchida, Yuusuke; Watanabe, Shin; Edahiro, Ikumi; Hayashi, Katsuhiro; Kawano, Takafumi; Ohno, Masanori; Ohta, Masayuki; Takeda, Shin'ichiro; Fukazawa, Yasushi; Katsuragawa, Miho; Nakazawa, Kazuhiro; Odaka, Hirokazu; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takahashi, Tadayuki; Yuasa, Takayuki

    2016-01-01

    The Soft Gamma-ray Detector (SGD), to be deployed on board the ASTRO-H satellite, has been developed to provide the highest-sensitivity observations of celestial sources in the energy band of 60-600 keV by employing a detector concept in which the field of view of a Compton camera is restricted to a few degrees by a BGO shield (narrow-FOV Compton camera). In this concept, the background from outside the FOV can be heavily suppressed by requiring the incident direction of the gamma ray reconstructed by the Compton camera to be consistent with the narrow FOV. We demonstrate, for the first time, the validity of the concept using background data taken on the ground during the thermal vacuum test and the low-temperature environment test of the flight model of the SGD. We show that the measured background level is suppressed to less than 10% by combining event rejection using the anti-coincidence trigger of the active BGO shield with Compton event reconstruction techniques. More than 75% of the signals from the field of view are retained under this background rejection, which clearly demonstrates the improvement in signal-to-noise ratio. The estimated effective area of 22.8 cm² meets the mission requirement even though not all of the operational parameters of the instrument have been fully optimized yet.
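    The event reconstruction underlying the FOV cut follows standard Compton kinematics: the scattering angle derived from the two energy deposits defines a cone of possible incident directions, and an event is rejected when that cone cannot contain a direction inside the narrow FOV. A hedged sketch of this selection (the 5° FOV half-width and the helper names are illustrative assumptions, not SGD flight software):

    ```python
    import math

    M_E_C2 = 511.0  # electron rest energy, keV

    def compton_angle_deg(e1_kev: float, e2_kev: float) -> float:
        """Scattering angle from Compton kinematics of a two-site event:
        e1 deposited in the Si scatterer, e2 in the CdTe absorber."""
        cos_theta = 1.0 - M_E_C2 * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
        if not -1.0 <= cos_theta <= 1.0:
            raise ValueError("kinematically forbidden event")
        return math.degrees(math.acos(cos_theta))

    def consistent_with_fov(e1_kev, e2_kev, axis_angle_deg, fov_half_deg=5.0):
        """Accept a two-site event only if the Compton cone can contain a
        direction within fov_half_deg of the boresight. axis_angle_deg is the
        angle between the boresight and the line joining the two interaction
        sites; the minimum angular distance from the boresight to the cone
        is |theta - axis_angle|."""
        theta = compton_angle_deg(e1_kev, e2_kev)
        return abs(theta - axis_angle_deg) <= fov_half_deg
    ```

    Background photons entering through the shield from large off-axis angles tend to produce cones far from the boresight and are rejected by this consistency test, which is the mechanism behind the suppression measured above.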

  6. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    PubMed

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
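    For reference, the linear Stokes parameters recovered from one 2×2 super-pixel of such a microgrid follow the textbook relations below; this is the naive per-super-pixel estimate, not the paper's correlation-exploiting reconstruction:

    ```python
    def stokes_from_superpixel(i0, i45, i90, i135):
        """Linear Stokes parameters from the four polarizer orientations of a
        microgrid 2x2 super-pixel (ideal polarizers assumed)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # 0/90-degree preference
        s2 = i45 - i135                      # 45/135-degree preference
        return s0, s1, s2

    # Fully 0-degree linearly polarized light of unit intensity:
    s0, s1, s2 = stokes_from_superpixel(1.0, 0.5, 0.0, 0.5)
    assert (s0, s1, s2) == (1.0, 1.0, 0.0)
    ```

    Because each Stokes channel is sampled on a sparser grid than the sensor itself, reconstruction quality is bandwidth-limited, which is the resolution bound the paper analyzes and the motivation for exploiting inter-band correlations instead of plain bilinear interpolation.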

  7. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP); there, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems, and the electronics are truly portable, fitting into a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP; telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability and digital imaging.

  8. Observations of the Perseids 2012 using SPOSH cameras

    NASA Astrophysics Data System (ADS)

    Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.

    2012-09-01

    The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer monitoring Perseid activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract and is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024×1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°×120° field-of-view (168° over the diagonal). (Figure 1: A meteor captured simultaneously by the SPOSH cameras during the 2011 observing campaign in Greece; the horizon, including surrounding mountains, is visible in the image corners as a result of the camera's large FOV.) The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August New Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitude for the triangulation of meteor trajectories captured at both stations simultaneously. Observations during this time interval will give us the possibility to study the poorly observed post-maximum branch of the Perseid stream and to compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories

  9. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera; therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs under conditions good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since it is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  10. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  11. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir camera and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras, with a size exceeding 5 µm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  12. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming

    2010-11-04

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear-index 'bubble' in fused silica glass, supplementing a conventional Frequency-Domain Holographic (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (temporal multiplexing and angular multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.

  13. Foreground extraction for moving RGBD cameras

    NASA Astrophysics Data System (ADS)

    Junejo, Imran N.; Ahmed, Naveed

    2017-02-01

    In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have now been available for quite some time; their popularity is primarily due to their low cost and ease of availability. Although foreground extraction, or background subtraction, has been explored by computer vision researchers for a long time, depth-based subtraction is relatively new and has not yet been extensively addressed. Most current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene: we use the positively classified FAST features on the test frame to initiate region growing and obtain an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets, with encouraging quantitative and qualitative results.
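    The depth-driven growing step can be sketched as a flood fill seeded by the positively classified keypoints; the tolerance value and 4-connectivity below are illustrative assumptions, not the authors' exact parameters:

    ```python
    from collections import deque
    import numpy as np

    def grow_foreground(depth, seeds, tol=0.05):
        """Depth-guided region growing: starting from classified seed pixels,
        flood-fill 4-neighbours whose depth differs by less than tol (metres).
        depth is an HxW array; seeds is a list of (row, col) tuples."""
        h, w = depth.shape
        mask = np.zeros((h, w), dtype=bool)
        queue = deque(seeds)
        for r, c in seeds:
            mask[r, c] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                        and abs(depth[nr, nc] - depth[r, c]) < tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
        return mask

    # Toy scene: a near object (1 m) on a far background (3 m).
    depth = np.full((6, 6), 3.0)
    depth[1:4, 1:4] = 1.0
    mask = grow_foreground(depth, seeds=[(2, 2)])
    assert mask[1:4, 1:4].all() and mask.sum() == 9
    ```

    The growth halts at the depth discontinuity between the foreground object and the background, which is exactly the depth-ordering cue the abstract exploits.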

  14. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  15. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications were different from those of present-day-system cameras, and cameras using all kinds of components, with different arrangements of components and different appearances, were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. Recently, however, the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated into camera fabrication, making it possible to make HDTV cameras by methods similar to those of the present system. In addition, more efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate under the present system.

  16. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    PubMed Central

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-01-01

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618

  17. Samba: a real-time motion capture system using wireless camera sensor networks.

    PubMed

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  18. Photometric Lunar Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.

  19. Design of an open-ended plenoptic camera for three-dimensional imaging of dusty plasmas

    NASA Astrophysics Data System (ADS)

    Sanpei, Akio; Tokunaga, Kazuya; Hayashi, Yasuaki

    2017-08-01

    Herein, the design of a plenoptic imaging system for three-dimensional reconstructions of dusty plasmas using an integral photography technique has been reported. This open-ended system is constructed with a multi-convex lens array and a typical reflex CMOS camera. We validated the design of the reconstruction system using known target particles. Additionally, the system has been applied to observations of fine particles floating in a horizontal, parallel-plate radio-frequency plasma. Furthermore, the system works well in the range of our dusty plasma experiment. We can identify the three-dimensional positions of dust particles from a single-exposure image obtained from one viewing port.

  20. 3D tomographic imaging with the γ-eye planar scintigraphic gamma camera

    NASA Astrophysics Data System (ADS)

    Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.

    2017-11-01

    γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.

  1. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important part of UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is then translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail based on a discrete UAV dynamic model, and the path planning problem is then solved with the help of the optimal control method. A path-following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of the method. Finally, the simulation results show that the improved method is more effective for path planning: in the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method effectively solves the dead-point problem.
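    For context, the baseline APF planner that the paper improves on descends the gradient of an attractive-plus-repulsive potential. A minimal 2-D sketch of that baseline (all gains, the step rule, and the obstacle layout are illustrative assumptions):

    ```python
    import numpy as np

    def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
        """One gradient-descent step of the classic artificial potential field:
        an attractive force pulls toward the goal; repulsive forces push away
        from obstacles within influence radius d0."""
        pos, goal = np.asarray(pos, float), np.asarray(goal, float)
        force = k_att * (goal - pos)          # -grad of 0.5*k_att*|pos-goal|^2
        for obs in obstacles:
            diff = pos - np.asarray(obs, float)
            d = np.linalg.norm(diff)
            if 0 < d < d0:                    # repulsion only inside d0
                force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
        return pos + step * force / (np.linalg.norm(force) + 1e-9)

    # Plan a short 2-D path around one off-axis obstacle.
    pos, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    path = [pos]
    for _ in range(500):
        pos = apf_step(pos, goal, obstacles=[(5.0, 0.2)])
        path.append(pos)
        if np.linalg.norm(pos - goal) < 0.1:
            break
    assert np.linalg.norm(path[-1] - goal) < 0.1
    ```

    When the obstacle sits exactly on the start-goal line, attraction and repulsion cancel and this baseline stalls; that is the dead-point problem the paper's optimal-control reformulation is designed to avoid.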

  2. Single camera volumetric velocimetry in aortic sinus with a percutaneous valve

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Thurow, Brian; Midha, Prem; Okafor, Ikechukwu; Raghav, Vrishank; Yoganathan, Ajit

    2016-11-01

    Cardiac flows have long been understood to be highly three dimensional, yet traditional in vitro techniques used to capture these complexities are costly and cumbersome. Thus, two dimensional techniques are primarily used for heart valve flow diagnostics. The recent introduction of plenoptic camera technology allows for traditional cameras to capture both spatial and angular information from a light field through the addition of a microlens array in front of the image sensor. When combined with traditional particle image velocimetry (PIV) techniques, volumetric velocity data may be acquired with a single camera using off-the-shelf optics. Particle volume pairs are reconstructed from raw plenoptic images using a filtered refocusing scheme, followed by three-dimensional cross-correlation. This technique was applied to the sinus region (known for having highly three-dimensional flow structures) of an in vitro aortic model with a percutaneous valve. Phase-locked plenoptic PIV data was acquired at two cardiac outputs (2 and 5 L/min) and 7 phases of the cardiac cycle. The volumetric PIV data was compared to standard 2D-2C PIV. Flow features such as recirculation and stagnation were observed in the sinus region in both cases.

  3. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated from panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. We therefore focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  4. 3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y. T.; Remondino, F.; Chippendale, P.; Van Gool, L.

    2017-11-01

    The paper presents a collaborative image-based 3D reconstruction pipeline that performs image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images, seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments, and performance evaluation are presented and discussed.

  5. Simultaneous determination of sample thickness, tilt, and electron mean free path using tomographic tilt images based on Beer-Lambert law

    PubMed Central

    Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen

    2015-01-01

    Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction. However, these parameters can currently only be determined using trial 3-D reconstructions. Accurate electron mean free path plays a significant role in modeling image formation process essential for simulation of electron microscopy images and model-based iterative 3-D reconstruction methods; however, their values are voltage and sample dependent and have only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least square optimization problem utilizing the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are a few degrees tilted relative to the electron beam. Compensation of the intrinsic sample tilt can result in horizontal structure and reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve performance of tomographic reconstruction of a wide range of samples. PMID:26433027
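    The core fit can be sketched as follows. Assuming the single-series Beer-Lambert model ln I = ln I₀ − (t/Λ)/cos(θ + α), only the ratio t/Λ (thickness over mean free path) and the intrinsic tilt α are identifiable from one tilt series; the paper's tomoThickness method adds tilt-relationship constraints across series to separate them. This illustrative sketch grid-searches α and solves the remaining linear problem in closed form:

    ```python
    import numpy as np

    def fit_beer_lambert(tilt_deg, intensity):
        """Fit ln I = ln I0 - (t/L)/cos(theta + alpha) to a tilt series, where
        t is thickness, L the inelastic mean free path and alpha an unknown
        intrinsic sample tilt. Returns the recoverable quantities (t/L, alpha)."""
        theta = np.radians(np.asarray(tilt_deg, float))
        y = np.log(np.asarray(intensity, float))
        best = None
        for alpha in np.radians(np.arange(-10.0, 10.01, 0.05)):
            x = 1.0 / np.cos(theta + alpha)            # path-length factor
            A = np.stack([np.ones_like(x), -x], axis=1)
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = np.sum((A @ coef - y) ** 2)
            if best is None or sse < best[0]:
                best = (sse, coef[1], np.degrees(alpha))
        return best[1], best[2]                        # (t/L, alpha in degrees)

    # Synthetic series: t/L = 0.8, intrinsic tilt 3 degrees.
    tilts = np.arange(-60, 61, 5)
    I = 1000.0 * np.exp(-0.8 / np.cos(np.radians(tilts + 3.0)))
    t_over_L, alpha = fit_beer_lambert(tilts, I)
    assert abs(t_over_L - 0.8) < 0.01 and abs(alpha - 3.0) < 0.1
    ```

    Note how a small intrinsic tilt shifts the minimum of the intensity curve away from nominal zero tilt; this asymmetry is what lets the fit detect the few-degree sample tilts reported in the abstract.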

  6. Simultaneous determination of sample thickness, tilt, and electron mean free path using tomographic tilt images based on Beer-Lambert law.

    PubMed

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction. However, these parameters can currently only be determined using trial 3-D reconstructions. Accurate electron mean free path plays a significant role in modeling image formation process essential for simulation of electron microscopy images and model-based iterative 3-D reconstruction methods; however, their values are voltage and sample dependent and have only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least square optimization problem utilizing the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are a few degrees tilted relative to the electron beam. Compensation of the intrinsic sample tilt can result in horizontal structure and reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve performance of tomographic reconstruction of a wide range of samples. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Construct and face validity of a virtual reality-based camera navigation curriculum.

    PubMed

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, K; Barbarits, J; Humenik, R

    Purpose: Chang’s mathematical formulation is a common method of attenuation correction applied to reconstructed Jaszczak phantom images. Though Chang’s attenuation correction method has been used for 360° acquisition, its applicability to 180° acquisition remains a question, with one vendor’s camera software producing artifacts. The objective of this work is to ensure that Chang’s attenuation correction technique can be applied to reconstructed Jaszczak phantom images acquired in both 360° and 180° modes. Methods: The Jaszczak phantom, filled with 20 mCi of diluted Tc-99m, was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (n = 1) dual-head gamma cameras, centered in both the lateral and axial directions. A total of 3 scans were done in 180° and 2 scans in 360° orbit acquisition modes. Thirty-two million counts were acquired for both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with a pre-reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation corrected by Chang’s attenuation correction technique as implemented in the camera software. Corrections were also done using a modified technique in which photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor. The inverse of the attenuation factor was used to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360°-acquired phantom images attenuation corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial. Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis illustrated improved
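
    As an illustration of the first-order Chang idea described above, the sketch below computes, for a uniform circular phantom, a per-pixel correction factor equal to the inverse of the mean photon transmission over the projection angles. The grid size, phantom radius and attenuation coefficient are illustrative assumptions, not this study's parameters.

```python
import numpy as np

def chang_correction_map(n=64, radius_cm=10.0, mu_cm=0.15, n_angles=64):
    """First-order Chang correction for a uniform circular phantom: each
    pixel is scaled by the inverse of its mean transmission over the
    projection angles. Illustrative sketch, not vendor code."""
    xs = np.linspace(-radius_cm, radius_cm, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= radius_cm**2
    transmission_sum = np.zeros((n, n))
    for th in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        dx, dy = np.cos(th), np.sin(th)
        b = X * dx + Y * dy
        # distance from each pixel to the circular boundary along (dx, dy)
        t = -b + np.sqrt(np.maximum(b**2 + radius_cm**2 - X**2 - Y**2, 0.0))
        transmission_sum += np.exp(-mu_cm * t)
    return np.where(inside, n_angles / transmission_sum, 0.0)

factor = chang_correction_map()
# at the centre every path length equals the radius, so the correction
# factor there approaches exp(mu * R) = exp(1.5) ≈ 4.48
```

    A full implementation would use the measured attenuation map rather than a uniform circle, which is precisely where the 180° ambiguity discussed above arises.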

  9. Characterization of the reference wave in a compact digital holographic camera.

    PubMed

    Park, I S; Middleton, R J C; Coggrave, C R; Ruiz, P D; Coupland, J M

    2018-01-01

    A hologram is a recording of the interference between an unknown object wave and a coherent reference wave. Provided that the object and reference waves are sufficiently separated in some region of space and the reference wave is known, a high-fidelity reconstruction of the object wave is possible. In traditional optical holography, high-quality reconstruction is achieved by careful reillumination of the holographic plate with the same reference wave that was used at the recording stage. To reconstruct high-quality digital holograms, the parameters of the reference wave must be known exactly in mathematical form. This paper discusses a technique that obtains the mathematical parameters characterizing a strongly divergent reference wave originating from a fiber source in a new compact digital holographic camera. This is a lensless design that is similar in principle to a Fourier hologram, but because of the large numerical aperture, the usual paraxial approximations cannot be applied and the Fourier relationship is inexact. To characterize the reference wave, recordings of quasi-planar object waves are made at various angles of incidence using a Dammann grating. An optimization process is then used to find the reference wave that reconstructs a stigmatic image of the object wave regardless of the angle of incidence.

  10. Compton camera study for high efficiency SPECT and benchmark with Anger system

    NASA Astrophysics Data System (ADS)

    Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.

    2017-12-01

    Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application
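
    The collimator-free detection principle discussed above rests on Compton kinematics: two recorded interactions define a cone of possible incidence directions. A minimal sketch of the cone half-angle computation follows; this is the standard Compton formula, not the CLaRyS reconstruction code.

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_keV, e_absorb_keV):
    """Cone half-angle (degrees) for a two-interaction Compton event.
    The energy deposited in the scatterer plus the absorbed energy gives
    the incident gamma energy, and
        cos(theta) = 1 - me*c^2 * (1/E_absorb - 1/E_incident)."""
    e0 = e_scatter_keV + e_absorb_keV
    cos_t = 1.0 - ME_C2_KEV * (1.0 / e_absorb_keV - 1.0 / e0)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_t))
```

    Detector energy resolution blurs both energies and hence the cone opening, which is why it is one of the key imaging parameters studied above.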

  11. Cost effective system for monitoring of fish migration with a camera

    NASA Astrophysics Data System (ADS)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish-monitoring system is made of two parts: a waterproof box holding the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We chose a tablet PC because it is small, cheap, relatively fast, and has low power consumption. On the computer we use software with advanced motion-detection capabilities, so we can detect even small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, as a backup, to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating fish species and the frequency of their passage through the fish pass.
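
    The motion-detection trigger described above can be sketched with simple frame differencing. This is an illustrative stand-in for the commercial detection software on the Fishcam tablet; the threshold values and frame size are assumptions.

```python
import numpy as np

def detect_motion(prev_frame, frame, thresh=25, min_pixels=50):
    """Frame-differencing trigger: flag the frame when enough pixels
    change, the point at which a snapshot would be saved locally and
    backed up. Thresholds are illustrative, not the deployed values."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return int((diff > thresh).sum()) >= min_pixels

# synthetic scene: a bright "fish" appears in an otherwise static frame
prev = np.zeros((120, 160), dtype=np.uint8)
cur = prev.copy()
cur[40:60, 50:80] = 200
```

    A deployed detector would also model slow background change (lighting, turbidity), typically with a running-average or mixture-of-Gaussians background model.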

  12. Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.; Sertel, Kubilay

    2015-07-01

    We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.

  13. Single-camera displacement field correlation method for centrosymmetric 3D dynamic deformation measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin

    2018-05-01

    Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.

  14. Speed of sound and photoacoustic imaging with an optical camera based ultrasound detection system

    NASA Astrophysics Data System (ADS)

    Nuster, Robert; Paltauf, Guenther

    2017-07-01

    CCD camera based optical ultrasound detection is a promising alternative approach for high resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) in the image reconstruction algorithm. Hence, in the proposed work the idea and a first implementation are shown how speed of sound imaging can be added to a previously developed camera based PAI setup. The current setup provides SOS-maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  15. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. 
The results suggest the potential of cost-effective color imaging using hyperspectral image
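
    A minimal numpy-only analogue of the polynomial spectral-recovery step can be sketched as follows. This is not the paper's PMLR/PLSR code: the simulated spectra are confined to a 3-dimensional subspace so that an exact linear recovery from three camera channels exists, which is an idealisation of the real registered DSLR/hyperspectral pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_bands = 200, 31

# simulated paired data: low-rank "spectra" and a hypothetical 3-channel
# camera response (assumptions for illustration only)
weights = rng.random((n_train, 3))
basis = rng.random((3, n_bands))
spectra = weights @ basis                 # (200, 31) "measured" spectra
response = rng.random((n_bands, 3))       # hypothetical camera responses
rgb = spectra @ response                  # (200, 3) "camera" pixels

def polynomial_features(rgb, order):
    """Constant term plus per-channel powers up to `order` (no cross terms)."""
    cols = [np.ones(len(rgb))]
    for o in range(1, order + 1):
        cols.extend(rgb[:, i] ** o for i in range(rgb.shape[1]))
    return np.column_stack(cols)

def fit_spectral_recovery(rgb, spectra, order=2):
    """Band-by-band least-squares regression of spectra on polynomial
    RGB features: a simplified analogue of the paper's PMLR model."""
    feats = polynomial_features(rgb, order)
    coef, *_ = np.linalg.lstsq(feats, spectra, rcond=None)
    return coef

coef = fit_spectral_recovery(rgb, spectra)
recovered = polynomial_features(rgb, 2) @ coef
```

    With real, noisy, full-rank spectra the plain least-squares fit overfits, which is the motivation given above for preferring PLSR.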

  16. A path reconstruction method integrating dead-reckoning and position fixes applied to humpback whales.

    PubMed

    Wensveen, Paul J; Thomas, Len; Miller, Patrick J O

    2015-01-01

    Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. By systematically accounting for the
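
    The core idea of combining dead-reckoning with position fixes can be sketched in a few lines. The linear error redistribution below is a deliberately crude stand-in for the paper's Bayesian state-space smoother; it only captures the intuition that dead-reckoning error grows with time between fixes.

```python
import numpy as np

def dead_reckon(headings_rad, speeds_ms, dt_s):
    """Integrate heading and speed into an x-y track (dead-reckoning)."""
    dx = np.cos(headings_rad) * speeds_ms * dt_s
    dy = np.sin(headings_rad) * speeds_ms * dt_s
    return np.cumsum(dx), np.cumsum(dy)

def force_fix(x, y, fix_xy):
    """Spread the end-point mismatch linearly over the track so that it
    ends at a known position fix (e.g. a Fastloc-GPS location)."""
    w = np.arange(1, len(x) + 1) / len(x)   # error weight grows with time
    return x + w * (fix_xy[0] - x[-1]), y + w * (fix_xy[1] - y[-1])

# ten 1 s steps due "east" at 1 m/s, then a position fix at (12, 2)
x, y = dead_reckon(np.zeros(10), np.ones(10), 1.0)
xc, yc = force_fix(x, y, (12.0, 2.0))
```

    The state-space formulation in the paper generalises this by weighting the correction with modelled current, sensor and observation errors instead of a fixed linear ramp.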

  17. Preoperative transcutaneous electrical nerve stimulation for localizing superficial nerve paths.

    PubMed

    Natori, Yuhei; Yoshizawa, Hidekazu; Mizuno, Hiroshi; Hayashi, Ayato

    2015-12-01

    During surgery, peripheral nerves are often seen to follow unpredictable paths because of previous surgeries and/or compression caused by a tumor. Iatrogenic nerve injury is a serious complication that must be avoided, and preoperative evaluation of nerve paths is important for preventing it. In this study, transcutaneous electrical nerve stimulation (TENS) was used for an in-depth analysis of peripheral nerve paths. The study included 27 patients who underwent the TENS procedure for preoperative evaluation of their peripheral nerve paths (17 males and 10 females; mean age: 59.9 years, range: 18-83 years). An electrode pen coupled to an electrical nerve stimulator was used for superficial nerve mapping. The TENS procedure was performed on major peripheral nerves that passed close to the surgical field of tumor resection or trauma surgery, where intraoperative damage to those nerves was a concern. The path of the target nerve was detected preoperatively in most patients. The nerve paths of 26 patients lay precisely under the markings drawn preoperatively. The nerve path of one patient, who had numbness in the surgical region, differed substantially from the preoperative markings. Direct visualization of the nerves during surgery confirmed that the paths had been accurately mapped preoperatively using the TENS procedure. This stimulation device is easy to use and offers highly accurate mapping of nerves for surgical planning without major complications. The authors conclude that TENS is a useful tool for noninvasive nerve localization and makes tumor resection a safe and smooth procedure. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  18. STREAK CAMERA MEASUREMENTS OF THE APS PC GUN DRIVE LASER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooling, J. C.; Lumpkin, A. H.

    We report recent pulse-duration measurements of the APS PC Gun drive laser at both the second-harmonic and fourth-harmonic wavelengths. The drive laser is a Nd:Glass-based chirped-pulse amplifier (CPA) operating at an IR wavelength of 1053 nm, twice frequency-doubled to obtain UV output for the gun. A Hamamatsu C5680 streak camera and an M5675 synchroscan unit are used for these measurements; the synchroscan unit is tuned to 119 MHz, the 24th subharmonic of the linac S-band operating frequency. Calibration is accomplished both electronically and optically. Electronic calibration utilizes a programmable delay line in the 119 MHz rf path. The optical delay uses an etalon with known spacing between reflecting surfaces, coated for the visible SH wavelength. IR pulse duration is monitored with an autocorrelator. Fitting the streak camera image projected profiles with Gaussians, UV rms pulse durations are found to vary from 2.1 ps to 3.5 ps as the IR varies from 2.2 ps to 5.2 ps.
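
    The Gaussian fitting of projected streak-camera profiles can be sketched with scipy; the synthetic profile below is an assumption for illustration, not measured APS data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma):
    return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def rms_duration_ps(t_ps, profile):
    """Fit a Gaussian to a projected streak-camera profile and return
    the rms duration (the fitted sigma), as in the analysis described
    in the abstract; a sketch, not the authors' code."""
    p0 = (profile.max(), t_ps[np.argmax(profile)], (t_ps[-1] - t_ps[0]) / 10)
    (_, _, sigma), _ = curve_fit(gaussian, t_ps, profile, p0=p0)
    return abs(sigma)

# synthetic UV profile with a 2.8 ps rms duration (illustrative only)
t = np.linspace(-20.0, 20.0, 401)
profile = gaussian(t, 100.0, 0.0, 2.8)
```

    On real images one would first background-subtract and project the 2-D streak image onto the time axis before fitting.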

  19. Comparison Between RGB and Rgb-D Cameras for Supporting Low-Cost Gnss Urban Navigation

    NASA Astrophysics Data System (ADS)

    Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.

    2018-05-01

    Pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, which prevent correct reception of the satellite signal. The bridging of GNSS outages, as well as the reconstruction of vehicle attitude, can be achieved by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use and raw data accessibility. The latter was selected for the high quality of the acquired images and for the possibility of mounting fixed focal length lenses, with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different, because RGB-D cameras acquire both RGB and depth data, allowing the scale problem, which is typical of image-only solutions, to be solved. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that the use of a Kinect device to support a u-blox low-cost receiver led to a trajectory with decimeter accuracy, 15% better than the one obtained using the Canon EOS M camera.
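
    A toy 1-D Kalman filter illustrates the fusion principle: visual relative motion drives the prediction, and GNSS fixes (absent during outages) correct it. This is a deliberately simplified stand-in for the paper's extended Kalman filter; all variances are illustrative assumptions.

```python
def kf_fuse(gnss, vis_delta, r_gnss=4.0, r_vis=0.04, q=0.01):
    """Minimal 1-D Kalman filter: vis_delta is the per-epoch displacement
    from visual odometry, gnss is a position fix or None (outage).
    Variances (r_gnss, r_vis, q) are assumed values for illustration."""
    x, p = 0.0, 100.0                  # state and its variance
    track = []
    for z, d in zip(gnss, vis_delta):
        x, p = x + d, p + r_vis + q    # predict using visual odometry
        if z is not None:              # GNSS update when a fix exists
            k = p / (p + r_gnss)
            x, p = x + k * (z - x), (1.0 - k) * p
        track.append(x)
    return track

# a 5-epoch pass with a two-epoch GNSS outage bridged by visual deltas
gnss = [0.0, 1.0, None, None, 4.0]
vis_delta = [0.0, 1.0, 1.0, 1.0, 1.0]
track = kf_fuse(gnss, vis_delta)
```

    The paper's filter differs mainly in state dimension (position plus attitude) and in how the visual term is formed: with depth data the scale of the visual displacement is observed directly, while image-only motion needs the GNSS trajectory to fix scale.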

  20. Development of the geoCamera, a System for Mapping Ice from a Ship

    NASA Astrophysics Data System (ADS)

    Arsenault, R.; Clemente-Colon, P.

    2012-12-01

    The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path, with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real-time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial off-the-shelf components when additional hardware is necessary, automating the process to virtually eliminate added workload for ship's technicians, and making the software components modular and flexible enough to allow seamless integration with a ship's particular IT system.
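
    The core georeferencing step, projecting an image pixel onto the sea-ice surface, can be sketched with pinhole geometry. Lens distortion and the roll/heading components of attitude are omitted here, and all numbers are illustrative assumptions rather than the geoCamera's calibration.

```python
import numpy as np

def pixel_to_ground(row, col, h_m, f_px, cx, cy, pitch_rad):
    """Project a pixel onto a flat ice surface for a camera h_m metres
    above it, pitched pitch_rad below the horizon. Simplified pinhole
    model: distortion, roll and heading are ignored."""
    # ray in camera frame: x right, y down, z forward (unit focal plane)
    d = np.array([(col - cx) / f_px, (row - cy) / f_px, 1.0])
    cp, sp = np.cos(pitch_rad), np.sin(pitch_rad)
    down = sp * d[2] + cp * d[1]       # world "down" component of the ray
    fwd = cp * d[2] - sp * d[1]        # world "forward" component
    if down <= 0:
        raise ValueError("ray does not intersect the surface")
    t = h_m / down                     # scale at which the ray hits ice
    return t * d[0], t * fwd           # across-track, along-track metres

# principal ray of a camera 10 m up, pitched 45 degrees down
x_m, y_m = pixel_to_ground(240.0, 320.0, 10.0, 1000.0, 320.0, 240.0, np.pi / 4)
```

    Mounting height enters directly as h_m, which is why it governs the achievable map width and resolution noted above.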

  1. Single-view 3D reconstruction of correlated gamma-neutron sources

    DOE PAGES

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara A.

    2017-01-05

    We describe a new method of 3D image reconstruction of neutron sources that emit correlated gammas (e.g. Cf-252, Am-Be). This category includes the vast majority of neutron sources important in nuclear threat search, safeguards and non-proliferation. Rather than requiring multiple views of the source, this technique relies on the source’s intrinsic property of coincident gamma and neutron emission. As a result, only a single-view measurement of the source is required to perform the 3D reconstruction. In principle, any scatter camera sensitive to gammas and neutrons with adequate timing and interaction location resolution can perform this reconstruction. Using a neutron double-scatter technique, we can calculate a conical surface of possible source locations. By including the time to a correlated gamma, we further constrain the source location in three dimensions by solving for the source-to-detector distance along the surface of said cone. As a proof of concept, we applied these reconstruction techniques to measurements taken with the Mobile Imager of Neutrons for Emergency Responders (MINER). Two Cf-252 sources measured at 50 and 60 cm from the center of the detector were resolved in their varying depth, with an average radial distance relative resolution of 26%. To demonstrate the technique’s potential with an optimized system, we simulated the measurement in MCNPX-PoliMi assuming a timing resolution of 200 ps (from 2 ns in the current system) and a source interaction location resolution of 5 mm (from 3 cm). These simulated improvements in scatter camera performance resulted in the radial distance relative resolution decreasing to an average of 11%.

  3. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method based on the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves the accuracy and robustness of our system compared with classical SLAM systems.

  4. Conceptual design for an AIUC multi-purpose spectrograph camera using DMD technology

    NASA Astrophysics Data System (ADS)

    Rukdee, S.; Bauer, F.; Drass, H.; Vanzi, L.; Jordan, A.; Barrientos, F.

    2017-02-01

    Current and upcoming massive astronomical surveys are expected to discover a torrent of objects, which need ground-based follow-up observations to characterize their nature. For transient objects in particular, rapid and efficient early spectroscopic identification is needed, and a small-field Integral Field Unit (IFU) would mitigate traditional slit losses and acquisition time. To this end, we present the design of a Digital Micromirror Device (DMD) multi-purpose spectrograph camera capable of running in several modes: traditional longslit, small-field patrol IFU, multi-object, and full-field IFU mode via Hadamard spectra reconstruction. The AIUC Optical multi-purpose CAMera (AIUCOCAM) is a low-resolution spectrograph camera of R ≈ 1,600 covering the spectral range 0.45-0.85 μm. We employ a VPH grating as the disperser, which is removable to allow an imaging mode. This spectrograph is envisioned for use on a 1-2 m class telescope in Chile to take advantage of good site conditions. We present design decisions and challenges for a cost-effective robotized spectrograph. The resulting instrument is remarkably versatile, capable of addressing a wide range of scientific topics.

  5. Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo

    NASA Astrophysics Data System (ADS)

    Daily, David; Kiser, Jillian; McQueen, Sarah

    2016-11-01

    Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.
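
    Per tracked feature point, the 3D reconstruction step in such a multi-view rig reduces to triangulation. Below is a minimal linear (DLT) two-camera sketch with illustrative projection matrices, not the aquarium rig's actual 30-camera calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one tracked feature seen by two
    calibrated cameras with 3x4 projection matrices P1 and P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)        # null vector = homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# two hypothetical normalised cameras: one at the origin, one shifted 1 m
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
```

    With 30 synchronized cameras the same least-squares system simply gains two rows per additional view, which is what makes dense automatic tracking of hundreds of points feasible.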

  6. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  7. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  8. Hanford Environmental Dose Reconstruction Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  9. Identification of superficial defects in reconstructed 3D objects using phase-shifting fringe projection

    NASA Astrophysics Data System (ADS)

    Madrigal, Carlos A.; Restrepo, Alejandro; Branch, John W.

    2016-09-01

    3D reconstruction of small objects is used in applications of surface analysis, forensic analysis and tissue reconstruction in medicine. In this paper, we propose a strategy for the 3D reconstruction of small objects and the identification of some superficial defects. We applied a technique of projection of structured light patterns, specifically sinusoidal fringes, together with a phase-unwrapping algorithm. A CMOS camera was used to capture images and a DLP digital light projector for synchronous projection of the sinusoidal pattern onto the objects. We implemented a calibration process based on a 2D flat pattern, from which the intrinsic and extrinsic parameters of the camera and the DLP were determined. Experimental tests were performed on samples of artificial teeth, coal particles, welding defects and surfaces tested with Vickers indentation. Areas of less than 5 cm were studied. The objects were reconstructed in 3D with densities of about one million points per sample. In addition, the steps of 3D description, identification of primitives, training and classification were implemented to recognize defects such as holes, cracks, rough textures and bumps. We found that pattern recognition strategies are useful for quality supervision of surfaces when enough points are available to evaluate the defective region, because the identification of defects in small objects is a demanding visual inspection task.
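    The phase-shifting fringe projection described above conventionally uses the four-step formula to recover wrapped phase from shifted fringe images. A minimal textbook sketch (not the authors' exact pipeline; the fringe amplitude and background values are illustrative):

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images shifted by pi/2 each:
        phi = atan2(I4 - I2, I1 - I3)."""
        return np.arctan2(i4 - i2, i1 - i3)

    # Synthetic check: sample a known phase ramp with four shifted fringe patterns.
    x = np.linspace(0.1, 4 * np.pi, 256)
    a, b = 0.5, 0.4                                  # background and modulation
    frames = [a + b * np.cos(x + k * np.pi / 2) for k in range(4)]
    phi_wrapped = four_step_phase(*frames)           # equals x wrapped into (-pi, pi]
    ```

    The background `a` cancels in both differences, which is why the formula is robust to uneven illumination; a separate unwrapping step then removes the 2π ambiguity.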

  10. Procedure Enabling Simulation and In-Depth Analysis of Optical Effects in Camera-Based Time-of-Flight Sensors

    NASA Astrophysics Data System (ADS)

    Baumgart, M.; Druml, N.; Consani, M.

    2018-05-01

    This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help in understanding experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach with the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and accounts for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
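    The optical-path-length-to-depth conversion at the heart of such a simulation can be sketched for a continuous-wave ToF sensor; the modulation frequency below is a hypothetical value, not one from the paper:

    ```python
    import math

    C = 299_792_458.0      # speed of light in m/s
    F_MOD = 20e6           # assumed modulation frequency (hypothetical value)

    def opl_to_phase(opl_m):
        """Phase accumulated by the amplitude modulation over an optical path length."""
        return (2.0 * math.pi * F_MOD * opl_m / C) % (2.0 * math.pi)

    def phase_to_depth(phase_rad):
        """Depth from measured phase; unambiguous up to C / (2 * F_MOD), ~7.5 m here."""
        return phase_rad * C / (4.0 * math.pi * F_MOD)

    depth = phase_to_depth(opl_to_phase(10.0))   # 10 m round-trip path -> 5 m depth
    ```

    Using path length rather than geometric distance is what lets multi-bounce and translucent-object rays contribute their (longer) delays correctly.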

  11. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm.

  12. Real-Time 3d Reconstruction from Images Taken from AN Uav

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real-time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For these characteristics, the designed method is suitable for video-surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
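    Two-view reconstruction of this kind ultimately rests on triangulation of a rectified stereo pair; a minimal sketch with illustrative numbers (not the paper's actual geometry):

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Two-view triangulation for a rectified stereo pair: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # A feature 25 px apart in images taken 0.5 m apart, with f = 1000 px:
    z = depth_from_disparity(1000.0, 0.5, 25.0)   # 20.0 m
    ```

    The formula also explains why the acquisition geometry matters: accuracy degrades as the baseline shrinks relative to the scene depth.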

  13. Amalgamation of East Eurasia Since Late Paleozoic: Constraints from the Apparent Polar Wander Paths of the Major China Blocks

    NASA Astrophysics Data System (ADS)

    Wu, L.; Kravchinsky, V. A.; Potter, D. K.

    2014-12-01

    It has been a longstanding challenge over the last few decades to quantitatively reconstruct the paleogeographic evolution of East Eurasia because of its great tectonic complexity. As the core region, the major China cratons, including the North China Block, South China Block and Tarim Block, hold the key clues for understanding the amalgamation history, tectonic activities and biological affinity among the component blocks and terranes of East Eurasia. Compared with the major Gondwana and Laurentia plates, however, the apparent polar wander paths of China are not well constrained, owing to the outdated paleomagnetic database and a relatively loose pole selection process. With the recruitment of new high-fidelity poles published in the last decade, the rejection of low-quality data and the strict implementation of Van der Voo's grading scheme, we build an updated paleomagnetic database for the three blocks, from which three types of apparent polar wander paths (APWP) are computed. Version 1 running mean paths are constructed during the pole selection and compared with those from previous publications. Version 2 running mean and spline paths with different sliding time windows are computed from the thoroughly examined poles to find the optimal paths with a steady trend and reasonable speeds for polar drift and plate rotation. The spline paths are recommended for plate reconstructions, however, considering the poor data coverage during certain periods. Our new China APWPs, together with the latest European reference path and the geological, geochronological and biological evidence from the studied Asian plates, allow us to reevaluate the paleogeographic and tectonic history of East Eurasia.
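    A running-mean APWP of the kind described is, at its core, a unit-vector average of paleopoles inside a sliding age window. A simplified sketch (plain vector means rather than full Fisher statistics; ages and poles are made up):

    ```python
    import numpy as np

    def to_vec(lat, lon):
        """Pole (degrees) to a unit vector on the sphere."""
        lat, lon = np.radians(lat), np.radians(lon)
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

    def to_pole(v):
        """Unit vector back to (lat, lon) in degrees."""
        v = v / np.linalg.norm(v)
        return (np.degrees(np.arcsin(v[2])),
                np.degrees(np.arctan2(v[1], v[0])))

    def running_mean_path(ages, lats, lons, window, step):
        """Mean pole (unit-vector average) for each age window along the path."""
        ages, lats, lons = map(np.asarray, (ages, lats, lons))
        path = []
        for centre in np.arange(ages.min(), ages.max() + step, step):
            mask = np.abs(ages - centre) <= window / 2
            if not mask.any():
                continue
            vecs = [to_vec(la, lo) for la, lo in zip(lats[mask], lons[mask])]
            path.append((float(centre), *to_pole(np.sum(vecs, axis=0))))
        return path

    # Three identical poles must average to themselves at every window centre.
    path = running_mean_path([100, 110, 120], [45, 45, 45], [30, 30, 30],
                             window=50, step=10)
    ```

    Averaging in vector space (rather than averaging latitudes and longitudes directly) is what keeps the mean on the sphere and well-behaved near the poles.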

  14. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of, or in combination with, the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma rays) than the conventional x-ray streak camera, which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  15. Microtomography with photon counting detectors: improving the quality of tomographic reconstruction by voxel-space oversampling

    NASA Astrophysics Data System (ADS)

    Dudak, J.; Zemlicka, J.; Karch, J.; Hermanova, Z.; Kvacek, J.; Krejci, F.

    2017-01-01

    Photon counting detectors of the Timepix family are known for their unique properties enabling X-ray imaging with an extremely high contrast-to-noise ratio. Their applicability has recently been further improved since a dedicated technique for assembling large-area Timepix detector arrays was introduced. Although the sensitive area of Timepix detectors has been significantly increased, the pixel pitch remains unchanged (55 microns). This value is much larger than that of the widely used X-ray imaging cameras utilizing scintillation crystals and CCD-based read-out. On the other hand, photon counting detectors provide a steeper point-spread function (PSF). Therefore, for a given effective pixel size of an acquired radiograph, Timepix detectors provide higher spatial resolution than X-ray cameras with scintillation-based devices, unless the image is affected by penumbral blur. In this paper we take advantage of the steep PSF of photon counting detectors and test the possibility of improving the quality of computed tomography reconstruction using finer sampling of the reconstructed voxel space. The achieved results are presented in comparison with data acquired under the same conditions using a commercially available state-of-the-art CCD X-ray camera.

  16. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  17. Camera array based light field microscopy

    PubMed Central

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-01-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring the different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
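    The quoted throughput figure is easy to sanity-check. Assuming 8-bit raw sub-images of 1024 × 768 pixels (786,432 px, which is the ≈0.79 megapixels quoted) from 25 viewpoints at 30 fps, and reading "MB" as MiB, the arithmetic reproduces 562.5 exactly; the sensor dimensions and bit depth are our assumptions, not stated in the abstract:

    ```python
    views = 5 * 5
    width, height = 1024, 768      # assumed sub-image size: 786,432 px ~ 0.79 MP
    bytes_per_pixel = 1            # assumed 8-bit raw readout
    fps = 30

    throughput = views * width * height * bytes_per_pixel * fps   # bytes per second
    throughput_mib = throughput / 2**20                           # 562.5
    ```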

  18. Reconstructing the 1935 Columbia River Gorge: A Topographic and Orthophoto Experiment

    NASA Astrophysics Data System (ADS)

    Fonstad, M. A.; Major, J. H.; O'Connor, J. E.; Dietrich, J. T.

    2017-12-01

    The last decade has seen a revolution in the mapping of rivers and near-river environments. Much of this has been associated with a new type of photogrammetry: structure-from-motion (SfM) and multi-view stereo techniques. Through SfM, 3D surfaces are reconstructed from unstructured image groups with poorly calibrated cameras whose locations need not be known. Modern SfM imaging is greatly improved by careful flight planning, well-planned ground control or high-precision direct georeferencing, and well-understood camera optics. The ease of SfM, however, raises the question: how well does it work on archival photos taken without foreknowledge of SfM techniques? In 1935, the Army Corps of Engineers took over 800 vertical aerial photos of a 160-km-long stretch of the Columbia River Gorge and adjacent areas in Oregon and Washington. These photos pre-date completion of three hydroelectric dams and reservoirs in this reach, and thus provide rich information on the historic pre-dam riverine, geologic, and cultural environments. These photos have little to no metadata associated with them, such as camera calibration reports, so traditional photogrammetry techniques are exceedingly difficult to apply. Instead, we apply SfM to these archival photos, and test the resulting digital elevation model (DEM) against lidar data for features inferred to be unchanged over the past 80 years. Few, if any, of the quality controls recommended for SfM are available for these 1935 photos; they are scanned paper positives with little overlap, taken with an unknown optical system in high-altitude flight paths. Nevertheless, in almost all areas, the SfM analysis produced a high-quality orthophoto of the gorge with low horizontal errors, most better than a few meters. The resulting DEM looks highly realistic, and in many areas has a vertical error of a few meters. However, the vertical errors are spatially inconsistent, with some wildly large, likely because of the many poorly constrained links in

  19. Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury

    NASA Astrophysics Data System (ADS)

    Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele

    2016-07-01

    The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIOSYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide the global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution allows mass and volume to be minimized while performing push-frame imaging acquisition. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backwards with respect to the nadir direction, are covered with a panchromatic filter, supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m and to perform global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration forced us to develop a new distortion map model. Additional considerations are connected to the detector, a Si-PIN hybrid CMOS, which is characterized by a high fixed-pattern noise. This had a great impact on the pre-calibration phases, compelling the use of an uncommon approach to the definition of the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.
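    STC's off-axis configuration required a custom distortion map, but the general shape of such a calibration can be illustrated with the standard Brown radial model, which is a common stand-in (not STC's actual model); the coefficients below are hypothetical:

    ```python
    import numpy as np

    def distort(xy, k1, k2=0.0):
        """Forward Brown radial model in normalized image coordinates:
        x_d = x * (1 + k1*r^2 + k2*r^4)."""
        xy = np.asarray(xy, dtype=float)
        r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
        return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

    def undistort(xy_d, k1, k2=0.0, iters=30):
        """Invert the model by fixed-point iteration (converges for mild distortion)."""
        xy_d = np.asarray(xy_d, dtype=float)
        xy = xy_d.copy()
        for _ in range(iters):
            r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
            xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
        return xy

    pts = np.array([[0.3, 0.2], [-0.1, 0.4]])
    back = undistort(distort(pts, k1=-0.1), k1=-0.1)
    ```

    In a real calibration the coefficients are fitted from measured spot centroids, which is exactly where the detector's fixed-pattern noise complicates matters.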

  20. Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera

    NASA Astrophysics Data System (ADS)

    Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor

    2015-09-01

    Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.

  1. Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera.

    PubMed

    Perez, Carlos Cruz; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor

    2015-09-01

    Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.

  2. Continuous-wave terahertz digital holography by use of a pyroelectric array camera.

    PubMed

    Ding, Sheng-Hui; Li, Qi; Li, Yun-Da; Wang, Qi

    2011-06-01

    Terahertz (THz) digital holography is realized based on a 2.52 THz far-IR gas laser and a commercial 124 × 124 pyroelectric array camera. Off-axis THz holograms are obtained by recording interference patterns between light passing through the sample and the reference wave. A numerical reconstruction process is performed to obtain the field distribution at the object surface. Different targets were imaged to test the system's imaging capability. Compared with THz focal plane images, the quality of the reconstructed images is substantially improved. The results show that the system's imaging resolution can reach at least 0.4 mm. The system also has potential for real-time imaging applications. This study confirms that digital holography is a promising technique for real-time, high-resolution THz imaging, with extensive application prospects. © 2011 Optical Society of America
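    The numerical back-propagation step in digital holography is commonly done with the angular spectrum method; a self-contained sketch (a generic implementation, not the authors' code; grid size and propagation distance are illustrative, with the wavelength matching 2.52 THz):

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a sampled complex field over distance z (angular spectrum method)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        fxx, fyy = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2   # squared axial frequency
        kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
        h = np.where(arg > 0, np.exp(1j * kz * z), 0.0)     # drop evanescent waves
        return np.fft.ifft2(np.fft.fft2(field) * h)

    # Round-trip check: a smooth Gaussian spot at ~119 um wavelength (2.52 THz).
    wl, dx, n = 119e-6, 200e-6, 64
    coords = (np.arange(n) - n / 2) * dx
    xx, yy = np.meshgrid(coords, coords)
    u0 = np.exp(-(xx ** 2 + yy ** 2) / (2 * (1.5e-3) ** 2)).astype(complex)
    u_back = angular_spectrum_propagate(
        angular_spectrum_propagate(u0, wl, dx, 0.05), wl, dx, -0.05)
    ```

    Propagating forward and then backward by the same distance recovers the original field for band-limited inputs, which is the basis of refocusing a recorded hologram onto the object plane.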

  3. Hard paths, soft paths or no paths? Cross-cultural perceptions of water solutions

    NASA Astrophysics Data System (ADS)

    Wutich, A.; White, A. C.; White, D. D.; Larson, K. L.; Brewis, A.; Roberts, C.

    2014-01-01

    In this study, we examine how development status and water scarcity shape people's perceptions of "hard path" and "soft path" water solutions. Based on ethnographic research conducted in four semi-rural/peri-urban sites (in Bolivia, Fiji, New Zealand, and the US), we use content analysis to conduct statistical and thematic comparisons of interview data. Our results indicate clear differences associated with development status and, to a lesser extent, water scarcity. People in the two less developed sites were more likely to suggest hard path solutions, less likely to suggest soft path solutions, and more likely to see no path to solutions than people in the more developed sites. Thematically, people in the two less developed sites envisioned solutions that involve small-scale water infrastructure and decentralized, community-based solutions, while people in the more developed sites envisioned solutions that involve large-scale infrastructure and centralized, regulatory water solutions. People in the two water-scarce sites were less likely to suggest soft path solutions and more likely to see no path to solutions (but no more likely to suggest hard path solutions) than people in the water-rich sites. Thematically, people in the two water-rich sites seemed to perceive a wider array of unrealized potential soft path solutions than those in the water-scarce sites. On balance, our findings are encouraging in that they indicate that people are receptive to soft path solutions in a range of sites, even those with limited financial or water resources. Our research points to the need for more studies that investigate the social feasibility of soft path water solutions, particularly in sites with significant financial and natural resource constraints.

  4. Hard paths, soft paths or no paths? Cross-cultural perceptions of water solutions

    NASA Astrophysics Data System (ADS)

    Wutich, A.; White, A. C.; Roberts, C. M.; White, D. D.; Larson, K. L.; Brewis, A.

    2013-06-01

    In this study, we examine how development status and water scarcity shape people's perceptions of "hard path" and "soft path" water solutions. Based on ethnographic research conducted in four semi-rural/peri-urban sites (in Bolivia, Fiji, New Zealand, and the US), we use content analysis to conduct statistical and thematic comparisons of interview data. Our results indicate clear differences based on development status and, to a lesser extent, water scarcity. People in less developed sites were more likely to suggest hard path solutions, less likely to suggest soft path solutions, and more likely to see no path to solutions than people in more developed sites. Thematically, people in less developed sites envisioned solutions that involve small-scale water infrastructure and decentralized, community based solutions, while people in more developed sites envisioned solutions that involve large-scale infrastructure and centralized, regulatory water solutions. People in water-scarce sites were less likely to suggest soft path solutions and more likely to see no path to solutions (but no more likely to suggest hard path solutions) than people in water-rich sites. Thematically, people in water-rich sites seemed to perceive a wider array of unrealized potential soft path solutions than those in water-scarce sites. On balance, our findings are encouraging in that they indicate that people are receptive to soft path solutions in a range of sites, even those with limited financial or water resources. Our research points to the need for more studies that investigate the social feasibility of soft path water solutions, particularly in sites with significant financial and natural resource constraints.

  5. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  6. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct the sound source signal in 3D space in the presence of airflow, as an alternative to numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.
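    The essence of the flow correction is that sound no longer travels at the still-air speed along the ray. A deliberately simplified sketch using a uniform flow (the paper handles the harder shear-flow case; the speeds and angle here are illustrative):

    ```python
    import math

    def delay_with_flow(distance_m, c=343.0, u=0.0, theta_deg=0.0):
        """Propagation delay along a ray when a uniform flow of speed u makes an
        angle theta with the propagation direction (downstream -> shorter delay)."""
        effective_speed = c + u * math.cos(math.radians(theta_deg))
        return distance_m / effective_speed

    t_still = delay_with_flow(1.0)             # no flow
    t_down = delay_with_flow(1.0, u=34.3)      # Mach 0.1 downstream component
    ```

    Feeding uncorrected (still-air) delays into a time-reversal mirror defocuses the reconstruction, which is why the correction is applied before the AP-TR step.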

  7. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  8. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    NASA Astrophysics Data System (ADS)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

    This paper outlines a low-cost, user-friendly photogrammetric technique that uses non-metric cameras to obtain digital sequence images of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a number of global control points at the excavation site, to reconstruct high-precision, measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratios, high inclination, unstable altitudes, and significant ground elevation changes affecting image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes, but yields lower accuracy for 3-D models of small scenes at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for the accurate location of cultural relics, archaeological excavations, investigation, and site protection planning. The proposed method has broad application value.

  9. PSF reconstruction for Compton-based prompt gamma imaging

    NASA Astrophysics Data System (ADS)

    Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming

    2018-02-01

    Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
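    The LM-OSEM-SV-PSF algorithm builds on the expectation-maximization family of reconstructions. A minimal sketch of the simpler binned MLEM ancestor (not the list-mode, shift-variant version used in the paper; the toy system matrix is made up):

    ```python
    import numpy as np

    def mlem(A, y, n_iter=200):
        """Plain binned MLEM update: x <- x / (A^T 1) * A^T (y / (A x))."""
        A = np.asarray(A, dtype=float)
        y = np.asarray(y, dtype=float)
        x = np.ones(A.shape[1])                   # nonnegative start
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image
        for _ in range(n_iter):
            proj = np.maximum(A @ x, 1e-12)       # forward projection, guard zeros
            x = x / sens * (A.T @ (y / proj))     # multiplicative update
        return x

    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy system matrix
    x = mlem(A, A @ np.array([2.0, 3.0]))
    ```

    OSEM accelerates this by cycling over data subsets, and the paper's shift-variant PSF enters by folding position-dependent blur into the system model `A`.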

  10. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    PubMed

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
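    The recursion that recovers phase from adjacent phase differences can be illustrated in one dimension (the paper's Laplacian-based algorithm is the noise-robust 2-D least-squares generalization of this; the sample values are made up):

    ```python
    import numpy as np

    def phase_from_differences(dphi):
        """Recover phase (up to an unknown constant) from adjacent differences by
        running integration; the first sample is pinned to zero."""
        return np.concatenate(([0.0], np.cumsum(dphi)))

    phi_true = np.array([0.0, 0.1, 0.35, 0.3, 0.55])
    phi_rec = phase_from_differences(np.diff(phi_true))
    ```

    With noisy differences a simple running sum accumulates error, which is why a least-squares (Laplacian) formulation is preferred in practice.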

  11. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of differing brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

  12. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
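The channel-separation step can be sketched as a per-pixel unmixing with a known 2x2 crosstalk matrix: each recorded channel is modeled as a mix of the two true optical-path images, so inverting the matrix separates them. The matrix entries below are invented, and the paper's actual correction method may differ.

```python
import numpy as np

# Hypothetical crosstalk model: recorded = M @ true, per pixel.
M = np.array([[0.92, 0.08],     # recorded blue = 0.92*B + 0.08*R (assumed)
              [0.06, 0.94]])    # recorded red  = 0.06*B + 0.94*R (assumed)

h, w = 4, 5
true_B = np.random.rand(h, w)
true_R = np.random.rand(h, w)
stacked = np.stack([true_B.ravel(), true_R.ravel()])   # shape (2, h*w)
recorded = M @ stacked                                  # simulate crosstalk

# Separation: invert the mixing matrix once, apply to every pixel.
separated = np.linalg.inv(M) @ recorded
sep_B = separated[0].reshape(h, w)
sep_R = separated[1].reshape(h, w)
print(np.allclose(sep_B, true_B), np.allclose(sep_R, true_R))
```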

  13. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High-resolution, high-accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or landslip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because operating a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole pointing at the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras offer high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 metres long, with a 12-megapixel GoPro camera. The average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second, adjusting the walking speed to obtain an appropriate overlap with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion studies. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which, together with the rapid image collection, makes for very fast fieldwork. Since the image resolution is 1/4 cm, improved accuracy can be achieved, if needed, by using a total station for the control point survey, although the fieldwork time increases.
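The quoted ground sampling distance follows from the usual pinhole relation GSD = pixel pitch × height / focal length. The sensor and lens parameters below are assumed typical GoPro values, not figures from the paper, but they reproduce the ~2.3 mm figure.

```python
# Ground sampling distance at the image centre for a nadir-pointing camera.
pixel_pitch_m = 1.55e-6    # assumed: ~1/2.3" sensor, ~4000 px across ~6.2 mm
focal_length_m = 2.77e-3   # assumed GoPro focal length
height_m = 4.0             # camera at the top of the 4 m pole

gsd_m = pixel_pitch_m * height_m / focal_length_m
print(round(gsd_m * 1000, 2), "mm")   # 2.24 mm, close to the quoted 2.3 mm
```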

  14. Improved depth estimation with the light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available from a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with its light-field cameras, such as the Lytro Illum. This Lytro depth estimate, which contains largely correct depth information, can be used for higher-quality estimation. In this paper, we present a simple and principled algorithm that computes dense depth estimates by combining defocus, correspondence, and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to obtain defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance of the EPIs. The Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth of field, and surface reconstruction, as well as light-field displays.
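Combining several per-pixel depth cues can be sketched as a confidence-weighted average of the candidate depth maps. The cue names, toy values, and uniform weights below are illustrative assumptions, not the paper's combination scheme.

```python
import numpy as np

def fuse_depths(depths, confidences):
    """Confidence-weighted per-pixel fusion of (H, W) depth maps."""
    d = np.stack(depths)
    c = np.stack(confidences)
    return (c * d).sum(axis=0) / np.maximum(c.sum(axis=0), 1e-12)

h, w = 3, 3
defocus = np.full((h, w), 2.0)    # depth from defocus cue (toy values, metres)
corresp = np.full((h, w), 2.2)    # depth from correspondence cue
lytro   = np.full((h, w), 2.1)    # vendor-provided depth map
weights = [np.full((h, w), 0.5), np.full((h, w), 0.3), np.full((h, w), 0.2)]

fused = fuse_depths([defocus, corresp, lytro], weights)
# Weighted mean: 0.5*2.0 + 0.3*2.2 + 0.2*2.1 = 2.08
print(round(float(fused[0, 0]), 2))
```

In practice the per-pixel confidences would come from the sharpness of the defocus response and the depth of the correspondence variance minimum, so reliable cues dominate locally.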

  15. Iterative Reconstruction of Volumetric Particle Distribution for 3D Velocimetry

    NASA Astrophysics Data System (ADS)

    Wieneke, Bernhard; Neal, Douglas

    2011-11-01

    A number of different volumetric flow measurement techniques exist for following the motion of illuminated particles. For experiments with lower seeding densities, 3D-PTV uses recorded images from typically 3-4 cameras and tracks the individual particles in space and time. For flows with a higher seeding density, tomographic PIV uses a tomographic reconstruction algorithm (e.g. MART) to reconstruct voxel intensities of the recorded volume, followed by cross-correlation of subvolumes to provide the instantaneous 3D vector fields on a regular grid. A new hybrid algorithm is presented which iteratively reconstructs the 3D particle distribution directly, using particles with certain imaging properties instead of voxels as base functions. It is shown with synthetic data that this method is capable of reconstructing densely seeded flows up to 0.05 particles per pixel (ppp) with the same or higher accuracy than 3D-PTV and tomographic PIV. Finally, the new method is validated using experimental data on a turbulent jet.
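The MART algorithm mentioned above updates each voxel multiplicatively from the ratio of recorded to reprojected pixel intensity, raised to a power set by the voxel's weight in that ray. A toy two-voxel, two-ray version (weights and relaxation factor invented) looks like this:

```python
import numpy as np

# Minimal MART (multiplicative algebraic reconstruction technique):
# W[i, j] is the weight of voxel j in ray i, y the recorded intensities.
W = np.array([[1.0, 0.5],
              [0.3, 1.0]])
x_true = np.array([2.0, 3.0])
y = W @ x_true                  # consistent, noiseless "camera" data

x = np.ones(2)                  # positive initial guess (MART needs x > 0)
mu = 0.5                        # relaxation factor
for _ in range(200):
    for i in range(W.shape[0]):                 # sweep over rays
        ratio = y[i] / max(W[i] @ x, 1e-12)
        x *= ratio ** (mu * W[i])               # multiplicative update

print(x)   # converges toward [2., 3.]
```

The multiplicative form keeps voxel intensities non-negative, which is the property that lets MART reconstruct sparse particle fields from few views.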

  16. Reconstruction method for fringe projection profilometry based on light beams.

    PubMed

    Li, Xuexing; Zhang, Zhijiang; Yang, Chen

    2016-12-01

    A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and setting reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method that requires no projector parameters and only two reference planes is proposed. A series of light beams, determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams, determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach; the measurement accuracy can reach about 0.0454 mm.
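The geometric core of such a method, intersecting a projector light beam (fixed by its crossings of the two reference planes) with the reflected camera ray, reduces to the classic closest-point-between-two-lines computation. All coordinates below are invented for illustration.

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of closest approach of lines p1 + t*d1 and p2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Normal equations for minimizing |p1 + t*d1 - (p2 + s*d2)|^2.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Light beam through its intersection points with two reference planes.
beam_p = np.array([10.0, 5.0, 0.0])
beam_d = np.array([0.1, 0.05, 1.0])
target = beam_p + 30.0 * beam_d              # true 3D point on the beam
# Camera ray from an assumed camera centre toward the same surface point.
cam_c = np.array([200.0, 0.0, 0.0])
cam_d = target - cam_c

p = closest_point_between_lines(beam_p, beam_d, cam_c, cam_d)
print(p)   # ≈ [13., 6.5, 30.], the true intersection
```

With measurement noise the two lines are skew, which is why the midpoint of closest approach, rather than an exact intersection, is the standard choice.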

  17. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral- density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.

  18. An Extended Paleozoic Apparent Polar Wander Path for Baltica: new Permo-Carboniferous Paleopoles From the Donbas Region (Ukraine)

    NASA Astrophysics Data System (ADS)

    Hamers, M. F.; Meijers, M. J.; van Hinsbergen, D. J.; van der Meer, D. G.; Langereis, C. G.; Stephenson, R. A.

    2007-12-01

    An improved Paleozoic apparent polar wander (APW) path for Baltica is presented here on the basis of six new paleopoles determined from samples collected in the Donbas region of the Dniepr-Donets basin in south-eastern Ukraine. Constructing APW paths makes it possible to improve paleogeographic reconstructions that reach further back in time than 200 Ma, where the use of oceanic isochrons and hotspot tracks has limited applicability. The absence of penetrative regional deformation and the subparallel bedding attitudes across the basin suggest that our sites did not suffer from local rotations, and their results are interpreted as representative of Baltica. The data presented here improve the paleogeographic reconstruction of Baltica within the collage of the supercontinent Pangea. The six new paleopoles cover a time span from the earliest Carboniferous (~356 Ma) to the early Permian (~295 Ma). In our reconstruction, Baltica was located at a constant latitude of ~5°N during a major part of the Carboniferous, while at ~310 Ma it started to move gradually northward, reaching a paleolatitude of ~13°N at 295 Ma. From ~355 Ma to 295 Ma Baltica experienced a net ~20° clockwise rotation. Our new data differ from the APW path of Torsvik et al. (submitted) in the time span from ~320-300 Ma, wherein they propose a northward movement from more southerly latitudes. From 300 Ma onwards, our path fits the reference path of Torsvik et al. A Permian remagnetization of our sites is not likely, considering the rotational differences in the various time spans and the rock-magnetic analyses that have been performed. We also discuss the use of the TK03 model (Tauxe and Kent (2004), Geoph. Mon. 145, pp 101-116), which allows for the correction of inclination error caused by compaction during burial; this correction is insignificant for most sites, suggesting that the NRM was acquired after compaction.
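Paleolatitudes such as those quoted above follow from the geocentric axial dipole relation tan(I) = 2 tan(lat) between magnetic inclination I and site latitude. A minimal sketch (the 25° inclination is an illustrative input, not a value from the paper):

```python
import numpy as np

def paleolatitude(inclination_deg):
    """Paleolatitude (deg) from inclination via tan(I) = 2*tan(lat)."""
    return np.degrees(np.arctan(0.5 * np.tan(np.radians(inclination_deg))))

# An inclination of ~25 degrees corresponds to ~13 degrees latitude,
# comparable to the value reported for Baltica at 295 Ma.
print(round(paleolatitude(25.0), 1))   # 13.1
```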

  19. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a growing presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several such applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  20. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
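Transmission-map defogging of this kind generally rests on the atmospheric scattering model I = J·t + A·(1 − t), with the transmission t = exp(−β·d) derived from scene depth (here, from the stereo disparity). A minimal sketch with invented values; the paper's iterative refinement and color-line atmospheric-light estimation are not reproduced:

```python
import numpy as np

def defog(I, t, A, t_min=0.1):
    """Invert the scattering model; clamp t to avoid amplifying noise."""
    t = np.maximum(t, t_min)
    return (I - A) / t[..., None] + A

h, w = 2, 2
J_true = np.full((h, w, 3), 0.4)     # haze-free radiance (toy values)
depth = np.full((h, w), 50.0)        # metres, e.g. from the disparity map
beta = 0.02                          # assumed scattering coefficient
A = np.array([0.9, 0.9, 0.9])        # estimated atmospheric light

t = np.exp(-beta * depth)            # transmission map from depth
I_foggy = J_true * t[..., None] + A * (1 - t[..., None])   # simulate fog
J_rec = defog(I_foggy, t, A)
print(np.allclose(J_rec, J_true))    # True
```

The clamp `t_min` matters in practice: where the transmission is near zero, the division would amplify sensor noise and color distortion.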

  1. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  2. Miniaturized camera system for an endoscopic capsule for examination of the colonic mucosa

    NASA Astrophysics Data System (ADS)

    Wippermann, Frank; Müller, Martin; Wäny, Martin; Voltz, Stephan

    2014-09-01

    Today's standard procedure for the examination of the colon uses a digital endoscope located at the tip of a tube encasing wires for camera readout, fibers for illumination, and mechanical structures for steering and navigation. On the other hand, there are swallowable capsules incorporating a miniaturized camera which are more cost-effective, disposable, and less unpleasant for the patient during examination, but cannot be navigated along the path through the colon. We report on the development of a miniaturized endoscopic camera as part of a completely wireless capsule which can be safely and accurately navigated and controlled from the outside using an electromagnet. The endoscope is based on a global-shutter CMOS imager with 640x640 pixels and a pixel size of 3.6 μm featuring through-silicon vias. Hence, the required electronic connectivity is made at its back side using a ball grid array, enabling minimal lateral dimensions. The layout of the f/5 objective with 100° diagonal field of view aims for low production cost and employs polymeric lenses produced by injection molding. Because the capsule must survive at least one autoclaving cycle, high-temperature-resistant polymers were selected. Optical and mechanical design considerations are given along with experimental data obtained from realized demonstrators.

  3. Development of proton CT imaging system using plastic scintillator and CCD camera

    NASA Astrophysics Data System (ADS)

    Tanaka, Sodai; Nishio, Teiji; Matsushita, Keiichiro; Tsuneda, Masato; Kabuki, Shigeto; Uesaka, Mitsuru

    2016-06-01

    A proton computed tomography (pCT) imaging system was constructed to evaluate the error of the x-ray CT (xCT)-to-WEL (water-equivalent length) conversion in treatment planning for proton therapy. In this system, the scintillation light integrated along the beam direction is recorded by a CCD camera, which enables fast and easy data acquisition. The light intensity is converted to the range of the proton beam using a light-to-range conversion table made beforehand, and a pCT image is reconstructed. A demonstration experiment of the pCT system was performed using a 70 MeV proton beam provided by the AVF930 cyclotron at the National Institute of Radiological Sciences. Three-dimensional pCT images were reconstructed from the experimental data. A thin structure of approximately 1 mm was clearly observed, with the spatial resolution of the pCT images at the same level as that of the xCT images. pCT images of various substances were reconstructed to evaluate their pixel values. The image quality was investigated with regard to deterioration caused by effects including multiple Coulomb scattering.

  4. Teaching quantum physics by the sum over paths approach and GeoGebra simulations

    NASA Astrophysics Data System (ADS)

    Malgieri, M.; Onorato, P.; De Ambrosis, A.

    2014-09-01

    We present a research-based teaching sequence in introductory quantum physics using the Feynman sum over paths approach. Our reconstruction avoids the historical pathway and starts by reconsidering optics from the standpoint of the quantum nature of light, analysing both traditional and modern experiments. The core of our educational path lies in the treatment of conceptual and epistemological themes peculiar to quantum theory, based on evidence from quantum optics, such as the single-photon Mach-Zehnder and Zhou-Wang-Mandel experiments. The sequence is supported by a collection of interactive simulations, realized in the open-source GeoGebra environment, which we used to assist students in learning the basics of the method and to help them explore the proposed experimental situations as modeled in the sum over paths perspective. We tested our approach in the context of a post-graduate training course for pre-service physics teachers; according to the data we collected, student teachers displayed a greatly improved understanding of conceptual issues and acquired significant ability in using the sum over paths method for problem solving.
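The sum over paths method the sequence teaches can be demonstrated numerically: each path from source to screen contributes a unit phasor whose angle is set by its length in wavelengths, and the squared magnitude of the phasor sum gives the detection probability. The two-slit geometry below is illustrative.

```python
import numpy as np

lam = 500e-9                 # wavelength (m)
d = 20e-6                    # slit separation (m)
L = 1.0                      # slit-to-screen distance (m)
slits_y = np.array([-d / 2, d / 2])

def intensity(y_screen):
    """Sum a phasor exp(i*2*pi*L_path/lambda) over the two paths."""
    paths = np.sqrt(L**2 + (y_screen - slits_y) ** 2)   # path lengths
    amp = np.exp(2j * np.pi * paths / lam).sum()
    return abs(amp) ** 2

ys = np.linspace(-0.1, 0.1, 2001)
I = np.array([intensity(y) for y in ys])
print(round(float(I.max()), 2))   # 4.0: constructive interference of 2 phasors
```

Adding more paths per slit (or more slits) needs no new machinery, which is exactly the pedagogical appeal of the phasor picture.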

  5. Bending the Curve: Sensitivity to Bending of Curved Paths and Application in Room-Scale VR.

    PubMed

    Langbehn, Eike; Lubos, Paul; Bruder, Gerd; Steinicke, Frank

    2017-04-01

    Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments analyzed the human sensitivity to RDW manipulations by focusing on the worst-case scenario, in which users walk perfectly straight ahead in the VE, whereas they are redirected on a circular path in the real world. The results showed that a physical radius of at least 22 meters is required for undetectable RDW. However, users do not always walk exactly straight in a VE. So far, it has not been investigated how much a physical path can be bent in situations in which users walk a virtual curved path instead of a straight one. Such curved walking paths can be often observed, for example, when users walk on virtual trails, through bent corridors, or when circling around obstacles. In such situations the question is not, whether or not the physical path can be bent, but how much the bending of the physical path may vary from the bending of the virtual path. In this article, we analyze this question and present redirection by means of bending gains that describe the discrepancy between the bending of curved paths in the real and virtual environment. Furthermore, we report the psychophysical experiments in which we analyzed the human sensitivity to these gains. The results reveal encouragingly wider detection thresholds than for straightforward walking. Based on our findings, we discuss the potential of curved walking and present a first approach to leverage bent paths in a way that can provide undetectable RDW manipulations even in room-scale VR.

  6. Recent SO2 camera and OP-FTIR field measurements in Mexico and Guatemala

    NASA Astrophysics Data System (ADS)

    La Spina, Alessandro; Salerno, Giuseppe; Burton, Michael

    2013-04-01

    Between 22 and 30 November 2012 a field campaign was carried out in Mexico and Guatemala with the objective of establishing the volcanic gas composition and flux fingerprints of Popocatepetl, Santiaguito, Fuego and Pacaya by exploiting simultaneous UV-camera and FTIR measurements. Gases were measured remotely using instruments sensitive to ultraviolet and infrared radiation (UV spectrometer, SO2 camera and OP-FTIR). Data collection depended on the requirements of the methodology, weather conditions and the eruptive stage of the volcanoes. OP-FTIR measurements were carried out using the MIDAC interferometer with 0.5 cm-1 resolution. Spectra were collected in solar occultation mode, in which the Sun acts as an infrared source and the volcanic plume is interposed between the Sun and the spectrometer. At Santiaguito, spectra were also collected in passive mode using the lava flow as a radiation source. The SO2 camera used for this study was a dual-camera system consisting of two QS Imaging 640s cameras, each outfitted with a quartz 25 mm lens and one of two band-pass filters centred at 310 nm and 330 nm. The imaging system was managed by custom-made software developed in LabView. The UV-camera system was coupled with a USB2000+ spectrometer connected to a QP1000-2-SR 1000-micron optical fiber with a 74-UV collimating lens. For calibration of plume imagery, images of five quartz cells containing known concentration path-lengths of SO2 were taken at the end of each sampling session. Between 22 and 23 November 2012 UV-camera and FTIR observations were carried out at Popocatepetl. During the time of our observations, the volcano was characterised by pulsed degassing from the summit crater, forming a whitish plume that dispersed rapidly in the atmosphere according to wind direction and speed. Data were collected from the Observatorio Atmosférico Altzomoni (Universidad Nacional Autónoma de México) at 4000 metres a.s.l. and at a distance of ~12 km from the volcano.

  7. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  8. Progress towards a semiconductor Compton camera for prompt gamma imaging during proton beam therapy for range and dose verification

    NASA Astrophysics Data System (ADS)

    Gutierrez, A.; Baker, C.; Boston, H.; Chung, S.; Judson, D. S.; Kacperek, A.; Le Crom, B.; Moss, R.; Royle, G.; Speller, R.; Boston, A. J.

    2018-01-01

    The main objective of this work is to test a new semiconductor Compton camera for prompt gamma imaging. Our device is composed of three active layers: a Si(Li) detector as a scatterer and two high purity Germanium detectors as absorbers of high-energy gamma rays. We performed Monte Carlo simulations using the Geant4 toolkit to characterise the expected gamma field during proton beam therapy and have made experimental measurements of the gamma spectrum with a 60 MeV passive scattering beam irradiating a phantom. In this proceeding, we describe the status of the Compton camera and present the first preliminary measurements with radioactive sources and their corresponding reconstructed images.
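Compton-camera reconstruction of the kind described relies on the Compton formula to turn the energies deposited in the scatterer and absorber layers into a cone opening angle per event. A minimal sketch with illustrative energies (not measurements from this work):

```python
import numpy as np

ME_C2 = 0.511  # electron rest energy, MeV

def compton_angle(e_initial, e_deposited):
    """Scattering angle (rad): cos(theta) = 1 - me*c^2*(1/E' - 1/E)."""
    e_scattered = e_initial - e_deposited
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_initial)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Example: a 4.44 MeV prompt gamma depositing 1.0 MeV in the Si(Li) layer.
theta = compton_angle(4.44, 1.0)
print(round(float(np.degrees(theta)), 1))   # ≈ 14.9 degrees
```

Each event then constrains the source to the surface of a cone with this half-angle around the scatter-to-absorber axis; the reconstructed image is where many such cones overlap.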

  9. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.

  10. The assessment of postural control and the influence of a secondary task in people with anterior cruciate ligament reconstructed knees using a Nintendo Wii Balance Board.

    PubMed

    Howells, Brooke E; Clark, Ross A; Ardern, Clare L; Bryant, Adam L; Feller, Julian A; Whitehead, Timothy S; Webster, Kate E

    2013-09-01

    Postural control impairments may persist following anterior cruciate ligament (ACL) reconstruction. The effect of a secondary task on postural control has, however, not been determined. The purpose of this case-control study was to compare postural control in patients following ACL reconstruction with healthy individuals, with and without a secondary task. 45 patients (30 men and 15 women) participated at least 6 months following primary ACL reconstruction surgery. Participants were individually matched by age, gender and sports activity to healthy controls. Postural control was measured using a Nintendo Wii Balance Board and customised software during static single-leg stance and with the addition of a secondary task. The secondary task required participants to match the movement of an oscillating marker by adducting and abducting their arm. The outcome measures were centre of pressure (CoP) path length in the medial-lateral and anterior-posterior directions and CoP total path length. When compared with the control group, the anterior-posterior path length significantly increased in the ACL reconstruction patients' operated (12.3%, p=0.02) and non-operated limbs (12.8%, p=0.02) for the single-task condition, and the non-operated limb (11.5%, p=0.006) for the secondary-task condition. The addition of a secondary task significantly increased CoP path lengths in all measures (p<0.001), although the magnitude of the increase was similar in both the ACL reconstruction and control groups. ACL reconstruction patients showed a reduced ability in both limbs to control the movement of the body in the anterior-posterior direction. The secondary task affected postural control by comparable amounts in patients after ACL reconstruction and healthy controls. Devices for the objective measurement of postural control, such as the one used in this study, may help clinicians more accurately identify patients with deficits who may benefit from targeted neuromuscular training programs.
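The CoP path-length measures used in the study can be computed directly from the sampled CoP trace: the per-axis path length sums absolute step sizes along one axis, while the total path length sums the Euclidean step lengths. A minimal sketch with toy coordinates:

```python
import numpy as np

def path_lengths(cop_xy):
    """cop_xy: (N, 2) array of (medial-lateral, anterior-posterior) samples.
    Returns (ML path length, AP path length, total path length)."""
    steps = np.diff(cop_xy, axis=0)
    ml = np.abs(steps[:, 0]).sum()
    ap = np.abs(steps[:, 1]).sum()
    total = np.linalg.norm(steps, axis=1).sum()
    return ml, ap, total

cop = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])   # toy trace (cm)
lengths = path_lengths(cop)
print(lengths)   # (3.0, 4.0, 7.0)
```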

  11. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm is inappropriate to perform directly during the real-time process. To cope with this issue, we present a novel high-speed, real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
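The pre-computed pixel-mapping idea can be sketched as follows: for every pixel of the corrected image, store once which pixel of the distorted image to read, so the real-time loop is a single array gather. The radial distortion model, its coefficient, and the nearest-neighbour lookup are illustrative simplifications of whatever mapping the authors actually store.

```python
import numpy as np

h, w = 240, 320
k1 = -0.2                                   # assumed radial coefficient
cx, cy = w / 2.0, h / 2.0
ys, xs = np.mgrid[0:h, 0:w].astype(float)
xn, yn = (xs - cx) / w, (ys - cy) / w       # normalized coordinates
r2 = xn**2 + yn**2

# Offline step: corrected-pixel -> distorted-pixel lookup tables.
map_x = np.clip(np.rint(cx + xn * (1 + k1 * r2) * w), 0, w - 1).astype(int)
map_y = np.clip(np.rint(cy + yn * (1 + k1 * r2) * w), 0, h - 1).astype(int)

# Real-time step: correcting a frame is one gather, no arithmetic.
distorted = np.random.rand(h, w)
corrected = distorted[map_y, map_x]
print(corrected.shape)
```

Production systems typically use bilinear rather than nearest-neighbour sampling, but the structure — all trigonometry and polynomial evaluation moved offline — is what buys the real-time frame rate.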

  12. Analyzing octopus movements using three-dimensional reconstruction.

    PubMed

    Yekutieli, Yoram; Mitelman, Rea; Hochner, Binyamin; Flash, Tamar

    2007-09-01

    Octopus arms, as well as other muscular hydrostats, are characterized by a very large number of degrees of freedom and a rich motion repertoire. Over the years, several attempts have been made to elucidate the interplay between the biomechanics of these organs and their control systems. Recent developments in electrophysiological recordings from both the arms and brains of behaving octopuses mark significant progress in this direction. The next stage is relating these recordings to the octopus arm movements, which requires an accurate and reliable method of movement description and analysis. Here we describe a semiautomatic computerized system for 3D reconstruction of an octopus arm during motion. It consists of two digital video cameras and a PC running custom-made software. The system overcomes the difficulty of extracting the motion of smooth, nonrigid objects in poor viewing conditions. Part of the difficulty stems from light refraction when recording underwater motion. Here we use both experiments and simulations to analyze the refraction problem and show that accurate reconstruction is possible. We have used this system successfully to reconstruct different types of octopus arm movements, such as reaching and bend-initiation movements. Our system is noninvasive and does not require attaching any artificial markers to the octopus arm. It may therefore be of more general use in reconstructing other nonrigid, elongated objects in motion.

  13. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used for active-learning purposes: they help students understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one hand, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position plus time) reconstruction of the dynamic scene is obtained by composing several 3D models derived from dense image matching. The final textured 4D model allows a completed experiment to be revisited interactively at any time. On the other hand, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence from the camera facing the simulation facility. While the 4D model provides a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to the physical processes and to other observations. All the hardware and software adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.
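    Displacement tracking of the kind DIC performs can be sketched with integer-pixel normalized cross-correlation; the patch size and search radius below are illustrative choices, and real DIC implementations add subpixel interpolation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def track_patch(ref, cur, top, left, size=8, search=5):
    """Find the integer-pixel displacement of a square patch by exhaustive
    NCC search over a small window; returns (dy, dx)."""
    tpl = ref[top:top + size, left:left + size]
    best, best_dv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                continue
            score = ncc(tpl, cur[y:y + size, x:x + size])
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv
```

    Dividing the tracked displacement by the frame interval gives the surface point velocity the abstract mentions.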

  14. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method using a subsequence of the video. This video can be captured by a rigidly moving camera oriented towards the static target object, or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and building on the framework of classical template-based methods, we solve an energy minimization problem for the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy terms combine a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that permits elastic deformation. We present an easy and controllable way to generate the semi-dense template for complex objects, and we use an effective iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared with results using other templates as input, reconstructions based on our template are more accurate and detailed in certain regions. The experimental results also show that the linear solver we use is more efficient than a traditional conjugate-gradient-based solver.
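    A rough sketch of such a combined objective follows. It substitutes a simple edge-length-preservation term for the full as-rigid-as-possible cost and omits the photometric term, so it is a simplification under stated assumptions, not the paper's exact energy.

```python
import numpy as np

def deformation_energy(verts, prev_verts, rest_verts, edges,
                       w_temp=1.0, w_rigid=1.0):
    """Simplified stand-in for a template-tracking objective:
    - temporal smoothness: penalise motion relative to the previous frame;
    - rigidity (edge-length preservation): penalise stretch relative to the
      rest-pose template (a common simplification of the ARAP cost).
    `verts`, `prev_verts`, `rest_verts`: (n, 3) arrays; `edges`: (m, 2) ints."""
    e_temp = np.sum((verts - prev_verts) ** 2)
    i, j = edges[:, 0], edges[:, 1]
    cur_len = np.linalg.norm(verts[i] - verts[j], axis=1)
    rest_len = np.linalg.norm(rest_verts[i] - rest_verts[j], axis=1)
    e_rigid = np.sum((cur_len - rest_len) ** 2)
    return w_temp * e_temp + w_rigid * e_rigid
```

    A rigid translation of the whole mesh incurs only the temporal term, while stretching an edge incurs the rigidity term; minimising the sum per frame yields the time-varying mesh.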

  15. Caveat emptor: limitations of the automated reconstruction of metabolic pathways in Plasmodium.

    PubMed

    Ginsburg, Hagai

    2009-01-01

    The functional reconstruction of metabolic pathways from an annotated genome is a tedious and demanding enterprise. Automating this endeavor with bioinformatics algorithms could cope with the ever-increasing number of sequenced genomes and accelerate the process. Here, the manual reconstruction of metabolic pathways in the functional genomic database of Plasmodium falciparum, Malaria Parasite Metabolic Pathways, is described and compared with pathways generated automatically as they appear in PlasmoCyc, metaSHARK and the Kyoto Encyclopedia of Genes and Genomes. A critical evaluation of this comparison discloses that automatic reconstruction generates manifold paths that require expert manual verification, accepting some and rejecting most others on the basis of manually curated gene annotation.

  16. Biologically inspired EM image alignment and neural reconstruction.

    PubMed

    Knowles-Barley, Seymour; Butcher, Nancy J; Meinertzhagen, Ian A; Armstrong, J Douglas

    2011-08-15

    Three-dimensional reconstruction of consecutive serial-section transmission electron microscopy (ssTEM) images of neural tissue currently requires many hours of manual tracing and annotation. Several computational techniques have already been applied to ssTEM images to facilitate 3D reconstruction and ease this burden. Here, we present an alternative computational approach for ssTEM image analysis. We have used biologically inspired receptive fields as a basis for a ridge detection algorithm to identify cell membranes, synaptic contacts and mitochondria. Detected line segments are used to improve alignment between consecutive images, and we have joined small segments of membrane into cell surfaces using a dynamic programming algorithm similar to the Needleman-Wunsch and Smith-Waterman DNA sequence alignment procedures. A shortest-path-based approach has been used to close edges and achieve image segmentation. Partial reconstructions were automatically generated and used as a basis for semi-automatic reconstruction of neural tissue. The accuracy of partial reconstructions was evaluated: 96% of membrane could be identified at the cost of 13% false positive detections. An open-source reference implementation is available in the Supplementary information. Contact: seymour.kb@ed.ac.uk; douglas.armstrong@ed.ac.uk. Supplementary data are available at Bioinformatics online.
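    The Needleman-Wunsch recurrence referenced above scores a global alignment by dynamic programming; a standard textbook sketch with illustrative unit scores (not the paper's membrane-joining cost function):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the Needleman-Wunsch recurrence:
    dp[i][j] = max(diagonal + match/mismatch, up + gap, left + gap)."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # aligning a prefix of `a` against gaps
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # aligning a prefix of `b` against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]
```

    The same fill-the-table pattern applies when the "symbols" are membrane segments and the score measures geometric compatibility instead of base identity.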

  17. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  18. Reconstructing plate motion paths where plate tectonics doesn't strictly apply

    NASA Astrophysics Data System (ADS)

    Handy, M. R.; Ustaszewski, K.

    2012-04-01

    The classical approach to reconstructing plate motion invokes the assumption that plates are rigid and therefore that their motions can be described as Eulerian rotations on a spherical Earth. This essentially two-dimensional, map view of plate motion is generally valid for large-scale systems, but is not practicable for small-scale tectonic systems in which plates, or significant parts thereof, deform on time scales approaching the duration of their motion. Such "unplate-like" (non-rigid) behaviour is common in systems with a weak lithosphere, for example, in Mediterranean-type settings where (micro-)plates undergo distributed deformation several tens to hundreds of km away from their boundaries. The motion vector of such anomalous plates can be quantified by combining and comparing information from two independent sources: (1) Balanced cross sections that are arrayed across deformed zones (orogens, basins) and provide estimates of crustal shortening and/or extension. Plate motion is then derived by retrodeforming the balanced sections in a stepwise fashion from external to internal parts of mountain belts, then applying these estimates as successive retrotranslations of points on stable parts of the upper plate with respect to a chosen reference frame on the lower plate. This approach is contingent on using structural markers with tight age constraints, for example, depth-sensitive metamorphic mineral parageneses and syn-orogenic sediments with known paleogeographic provenance; (2) Geophysical images of 3D subcrustal structure, especially of the MOHO and the lithospheric mantle in the vicinity of the deformed zones. In the latter case, travel-time seismic tomography of velocity anomalies can be used to identify subducted lithospheric slabs that extend downwards from the zones of crustal shortening to the mantle transitional zone and beyond. Synthesizing information from these two sources yields plate motion paths whose validity can be tested by the degree of

  19. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  20. Characterization of a compact 6-band multifunctional camera based on patterned spectral filters in the focal plane

    NASA Astrophysics Data System (ADS)

    Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.

    2014-06-01

    In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object varies with viewing angle, or with time, the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is small, so the remainder of the image sensor can be used for conventional imaging, with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.

  1. 3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion.

    PubMed

    Zhang, Yu; Ye, Mao; Manocha, Dinesh; Yang, Ruigang

    2017-07-06

    We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or simple parametric surfaces. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.

  2. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  3. Process simulation in digital camera system

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal, and the color processing and rendering. We consider the image acquisition system to be linear, shift-invariant and axial; light propagation is orthogonal to the system. We use a spectral image processing algorithm to simulate the radiometric properties of a digital camera, taking into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the point spread functions of the optical components: we use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy, blurred image by blending differently exposed images to reduce photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then come the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
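    One stage of such a pipeline, the XYZ-to-RGB conversion with gamma correction, can be sketched as follows. The matrix is the standard D65 XYZ-to-linear-sRGB matrix; using a plain power-law gamma instead of the piecewise sRGB transfer curve is a simplification.

```python
import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz, gamma=2.2):
    """Convert XYZ tristimulus values to gamma-encoded sRGB in [0, 1].
    A simple power law replaces the piecewise sRGB encoding curve."""
    rgb_lin = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)
    return rgb_lin ** (1.0 / gamma)
```

    As a sanity check, the D65 white point (X, Y, Z) ≈ (0.9505, 1.0, 1.089) maps to RGB ≈ (1, 1, 1), i.e. neutral white.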

  4. Hubble Space Telescope, Faint Object Camera

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This drawing illustrates Hubble Space Telescope's (HST's), Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data that is free of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.

  5. Image reconstruction of muon tomographic data using a density-based clustering method

    NASA Astrophysics Data System (ADS)

    Perry, Kimberly B.

    Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
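    OPTICS itself produces a reachability ordering rather than flat labels; a minimal DBSCAN-style sketch illustrates the underlying density-based grouping of scattering points (an illustration of the technique's family, not the thesis's implementation).

```python
import numpy as np

def density_cluster(points, eps, min_pts):
    """Minimal DBSCAN-style clustering: grow clusters from core points
    (points with >= min_pts neighbours within eps); label -1 marks noise.
    Dense clusters of large scattering angles would flag high-Z material."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(nbrs[i]) < min_pts:
            continue  # already clustered, or not a core point
        labels[i] = cid
        stack = [i]
        while stack:  # flood-fill through density-connected neighbours
            p = stack.pop()
            for q in nbrs[p]:
                if labels[q] == -1:
                    labels[q] = cid
                    if len(nbrs[q]) >= min_pts:
                        stack.append(q)
        cid += 1
    return labels
```

    Two well-separated dense blobs come out as two clusters; isolated points remain labelled -1.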

  6. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    SPARTAN Near-IR Camera System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager at SOAR. Spartan has a focal plane consisting of four

  7. Born iterative reconstruction using perturbed-phase field estimates.

    PubMed

    Astheimer, Jeffrey P; Waag, Robert C

    2008-10-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.

  8. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20°C, the CLASP cameras exceeded the low-noise performance requirements (<= 25 e- read noise and <= 10 e-/sec/pixel dark current), in addition to maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.

  9. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Peyer, Kathrin E.; Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778

  10. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
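    The convex-hulling-with-subdivisions idea can be sketched by slicing a segment's point cloud along its long axis, convex-hulling each slice in 2D, and summing area times thickness; multiplying the volume by a tissue density then gives the segment mass. This is a minimal illustration under those assumptions, not the authors' code.

```python
import numpy as np

def _cross(o, a, b):
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_area(pts2d):
    """Area of the 2D convex hull of a point set (monotone chain + shoelace)."""
    pts = sorted(set(map(tuple, pts2d)))
    if len(pts) < 3:
        return 0.0
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                         - hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

def segment_volume(cloud, n_slices=10):
    """Approximate segment volume: slice the (n, 3) cloud along z, hull each
    slice, sum area * thickness. More slices -> finer accuracy."""
    z = cloud[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    return sum(hull_area(cloud[(z >= edges[k]) & (z <= edges[k + 1])][:, :2])
               * (edges[k + 1] - edges[k]) for k in range(n_slices))
```

    On a synthetic unit-box cloud the slicing estimate recovers the exact volume; on a tapering limb segment, increasing `n_slices` tightens the approximation.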

  11. Radial polar histogram: obstacle avoidance and path planning for robotic cognition and motion control

    NASA Astrophysics Data System (ADS)

    Wang, Po-Jen; Keyawa, Nicholas R.; Euler, Craig

    2012-01-01

    In order to achieve highly accurate motion control and path planning for a mobile robot, an obstacle avoidance algorithm that provides a desired instantaneous turning radius and velocity was developed. This obstacle avoidance algorithm, which has been implemented in California State University Northridge's Intelligent Ground Vehicle (IGV), is known as the Radial Polar Histogram (RPH). The RPH algorithm utilizes raw data in the form of a polar histogram read from a Laser Range Finder (LRF) and a camera. A desired open block is determined from the raw data using a navigational heading and an elliptical approximation. The leftmost and rightmost radii are determined from the calculated edges of the open block and provide the range of possible radial paths the IGV can travel through. In addition, the calculated obstacle edge positions allow the IGV to recognize complex obstacle arrangements and to slow down accordingly. A radial path optimization function calculates the best radial path between the leftmost and rightmost radii and is sent to motion control for speed determination. Overall, the RPH algorithm allows the IGV to travel autonomously at average speeds of 3 mph while avoiding all obstacles, with a processing time of approximately 10 ms.
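    The open-block selection can be sketched as picking the widest contiguous run of clear polar-histogram bins and steering toward its centre; the bin width, clearance threshold and centre-heading rule below are illustrative assumptions, not the RPH paper's exact criteria.

```python
def best_opening(ranges, clearance, bin_width_deg=5.0):
    """Pick the widest contiguous run of polar-histogram bins whose measured
    range exceeds the required clearance; return the centre heading of that
    opening in degrees (bin i spans [i, i+1) * bin_width_deg), or None if
    every bin is blocked.  `ranges[i]` is the closest obstacle in bin i."""
    best_len, best_start, run_len = 0, None, 0
    for i, r in enumerate(ranges):
        if r > clearance:
            run_len += 1
            if run_len > best_len:
                best_len, best_start = run_len, i - run_len + 1
        else:
            run_len = 0
    if best_start is None:
        return None  # no opening wide enough: slow down or stop
    centre_bin = best_start + (best_len - 1) / 2.0
    return centre_bin * bin_width_deg + bin_width_deg / 2.0
```

    A full RPH implementation would additionally convert the chosen heading into an instantaneous turning radius and scale velocity by the opening's width and obstacle proximity.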

  12. Occlusions in Camera Networks and Vision: The Bridge between Topological Recovery and Metric Reconstruction

    DTIC Science & Technology

    2009-05-18

    The work serves as a didactic tool for understanding the information required for the approach to coordinate-free tracking and navigation problems. Paths in the physical layout and in the CN-Complex can be compared using the algebraic topological tools covered in chapter 2; chapter 3 presents the construction of a simplicial representation called the CN-Complex.

  13. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    expressed in a compressive approach. The following computational algorithms are applied to reconstruct images beyond 2D static information. Super-resolution signal processing is then used to enhance and improve the image spatial resolution. The whole camera system provides deeply detailed content for infrared spectrum sensing.

  14. Ted Irving and the Arc of APW Paths

    NASA Astrophysics Data System (ADS)

    Kent, D. V.

    2014-12-01

    Ted Irving's last two published papers neatly encapsulate his seminal contributions to the delineation of ever-important apparent polar wander (APW) paths. His final (210th) paper [Creer & Irving, 2012 Earth Sciences History] describes in detail how Ken Creer and he, when still graduate students at Cambridge, started to generate and assemble paleomagnetic data for the first APW path, then only for the UK; the paper was published 60 years ago and happened to be Ted's first [Creer, Irving & Runcorn, 1954 JGE]. Only 10 years later, there was already a lengthy reference list of paleomagnetic results available from most continents, compiled in the pole lists he published in GJRAS from 1960 to 1965 and included in an appendix of his landmark book "Paleomagnetism" [Irving, 1964 Wiley] in support of wide-ranging discussions of continental drift and related topics in chapters like 'Paleolatitudes and paleomeridians.' A subsequent innovation was calculating running means of poles indexed to a numerical geologic time scale [Irving, 1977 Nature], which, together with independent tectonic reconstructions such as that already available for Gondwana, allowed construction of more detailed composite APW paths. His 1977 paper also coined Pangea B for an earlier, albeit contentious, configuration of the supercontinent that refuses to go away. Gliding over much work on APW tracks and hairpins in the Precambrian, we come to Ted's penultimate (209th) paper [Kent & Irving, 2010 JGR], in which individual poles from short-lived large igneous provinces were grouped, and most sedimentary poles, many rather venerable, were excluded as likely to be biased by variable degrees of inclination error. The leaner composite APW path helped to resurrect the Baja BC scenario of Cordilleran terrane motions, virtually stopped in the 1980s by APW path techniques that relied on a few key but alas often badly skewed poles.
The new composite APW path also revealed several major features, such as a huge polar shift of 30° in 15 Myr in the

  15. Easily Accessible Camera Mount

    NASA Technical Reports Server (NTRS)

    Chalson, H. E.

    1986-01-01

    Modified mount enables fast alinement of movie cameras in explosionproof housings. Screw on side and readily reached through side door of housing. Mount includes right-angle drive mechanism containing two miter gears that turn threaded shaft. Shaft drives movable dovetail clamping jaw that engages fixed dovetail plate on camera. Mechanism alines camera in housing and secures it. Reduces installation time by 80 percent.

  16. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  17. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  18. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ∼4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
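The quoted per-pixel angular resolutions follow, to a good approximation, from dividing the field of view by the detector width. A quick sanity check of that relation (a simple average-scale approximation, not taken from the paper):

```python
import math

def angular_resolution_mrad(fov_deg: float, n_pixels: int) -> float:
    """Average per-pixel angular scale for a square-FOV camera, in mrad/pixel."""
    return math.radians(fov_deg) / n_pixels * 1000.0

# Hazcams: 124 deg FOV across a 1024-pixel detector
print(round(angular_resolution_mrad(124, 1024), 2))  # ~2.11, consistent with the quoted 2.1 mrad/pixel
```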

  19. Polarimetry at millimeter wavelengths with the NIKA camera: calibration and performance

    NASA Astrophysics Data System (ADS)

    Ritacco, A.; Ponthieu, N.; Catalano, A.; Adam, R.; Ade, P.; André, P.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Coiffard, G.; Comis, B.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Leclercq, S.; Macías-Pérez, J. F.; Mauskopf, P.; Maury, A.; Mayet, F.; Monfardini, A.; Pajot, F.; Pascale, E.; Perotto, L.; Pisano, G.; Rebolo-Iglesias, M.; Revéret, V.; Rodriguez, L.; Romero, C.; Ruppin, F.; Savini, G.; Schuster, K.; Sievers, A.; Thum, C.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2017-03-01

    Magnetic fields, which play a major role in a large number of astrophysical processes, can be traced via observations of dust polarization. In particular, Planck low-resolution observations of dust polarization have demonstrated that Galactic filamentary structures, where star formation takes place, are associated with well-organized magnetic fields. A better understanding of this process requires detailed observations of galactic dust polarization on scales of 0.01 to 0.1 pc. Such high-resolution polarization observations can be carried out at the IRAM 30 m telescope using the recently installed NIKA2 camera, which features two frequency bands at 260 and 150 GHz (respectively 1.15 and 2.05 mm), the 260 GHz band being polarization sensitive. NIKA2, so far in its commissioning phase, has its focal plane filled with 3300 detectors to cover a field of view (FoV) of 6.5 arcmin diameter. The NIKA camera, which consisted of two arrays of 132 and 224 Lumped Element Kinetic Inductance Detectors (LEKIDs) with a FWHM (full width at half maximum) of 12 and 18.2 arcsec at 1.15 and 2.05 mm respectively, was operated at the IRAM 30 m telescope from 2012 to 2015 as a test bench for NIKA2. NIKA was equipped with a room-temperature polarization system (a half-wave plate (HWP) and a grid polarizer facing the NIKA cryostat window). The fast and continuous rotation of the HWP permits the quasi-simultaneous reconstruction of the three Stokes parameters, I, Q, and U, at 150 and 260 GHz. This paper presents the first polarization measurements with KIDs and reports the polarization performance of the NIKA camera and the pertinence of the chosen polarization setup in the perspective of NIKA2. We describe the polarized data reduction pipeline, developed specifically for this project, and how the continuous rotation of the HWP shifts the polarized signal away from any low-frequency noise. We also present the dedicated algorithm developed to correct systematic leakage effects.
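The rotating-HWP scheme modulates the polarized signal at four times the plate angle, which is what moves it away from low-frequency noise. A minimal, idealized demodulation sketch (the signal model is the textbook HWP-plus-polarizer relation; the actual NIKA pipeline is far more involved):

```python
import math

def demodulate_stokes(thetas, signal):
    """Recover Stokes I, Q, U from a rotating-HWP modulated intensity:
    m(theta) = 0.5 * (I + Q*cos(4*theta) + U*sin(4*theta)),
    assuming thetas uniformly sample whole HWP rotations."""
    n = len(signal)
    I = 2.0 * sum(signal) / n
    Q = 4.0 * sum(m * math.cos(4 * t) for m, t in zip(signal, thetas)) / n
    U = 4.0 * sum(m * math.sin(4 * t) for m, t in zip(signal, thetas)) / n
    return I, Q, U

# Synthetic check with I = 1.0, Q = 0.1, U = -0.05
thetas = [2 * math.pi * k / 360 for k in range(360)]
signal = [0.5 * (1.0 + 0.1 * math.cos(4 * t) - 0.05 * math.sin(4 * t)) for t in thetas]
print([round(s, 3) for s in demodulate_stokes(thetas, signal)])  # → [1.0, 0.1, -0.05]
```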

  20. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light-mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
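The core idea, recording with a known keystone and inverting the mixing mathematically, can be illustrated with a 1-D toy: each sensor pixel records a known mixture of two adjacent scene values, and the scene is recovered by back-substitution. This is an illustrative sketch only (a single known sub-pixel shift and a known boundary value), not the paper's actual restoring equations:

```python
def record_with_keystone(x, s):
    """Sensor pixel j integrates a known mix of scene pixels j and j+1,
    shifted by a sub-pixel keystone s (0 <= s < 1)."""
    return [(1 - s) * x[j] + s * x[j + 1] for j in range(len(x) - 1)]

def restore(y, s, x_last=0.0):
    """Invert the known mixing by back-substitution (assumes the scene
    value beyond the last sensor pixel is known, here 0)."""
    x = [0.0] * (len(y) + 1)
    x[-1] = x_last
    for j in range(len(y) - 1, -1, -1):
        x[j] = (y[j] - s * x[j + 1]) / (1 - s)
    return x

scene = [3.0, 5.0, 2.0, 7.0, 4.0, 0.0]
recorded = record_with_keystone(scene, 0.3)
print(restore(recorded, 0.3))  # recovers the scene values (up to float rounding)
```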

  1. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  2. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  3. Hanford Environmental Dose Reconstruction Project. Monthly report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  4. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  5. Robotic Online Path Planning on Point Cloud.

    PubMed

    Liu, Ming

    2016-05-01

    This paper deals with the path-planning problem for mobile wheeled or tracked robots driving in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally has high computational complexity. Instead, we utilize the output of the 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics processing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show by experiments that the geodesic in the 3-D tensor space leads to reasonable path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuver while considering the minimum travel distance.
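Once a local metric assigns a cost to each point-to-point hop, the geodesic search reduces to a shortest path over a weighted graph. A minimal sketch with Dijkstra's algorithm (the graph, edge costs, and "saliency" weights here are toy stand-ins, not the paper's tensor-voting metric):

```python
import heapq, math

def dijkstra(edges, start, goal):
    """Shortest (geodesic-like) path over a weighted point-cloud graph.
    edges[i] lists (j, cost) pairs; a cost could be Euclidean length
    scaled by a local metric weight such as a saliency component."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[goal]

# Toy 4-node graph: a costly route (0-1-3) vs a low-cost surface route (0-2-3)
edges = {0: [(1, 1.0), (2, 0.4)], 1: [(3, 1.0)], 2: [(3, 0.4)]}
print(dijkstra(edges, 0, 3))  # → ([0, 2, 3], 0.8)
```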

  6. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  7. Observations of the Perseids 2013 using SPOSH cameras

    NASA Astrophysics Data System (ADS)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle, having a mass of approximately 1 g or larger, with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer observing the activity of the Perseids meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip having a pixel resolution of 1024x1024. The custom-made fish-eye lens offers a 120°x120° field of view (168° over the diagonal), making the monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitudes on the Greek Peloponnese peninsula. The baseline of ∼50km

  8. ANGIOCARE: an automated system for fast three-dimensional coronary reconstruction by integrating angiographic and intracoronary ultrasound data.

    PubMed

    Bourantas, Christos V; Kalatzis, Fanis G; Papafaklis, Michail I; Fotiadis, Dimitrios I; Tweddel, Ann C; Kourtis, Iraklis C; Katsouras, Christos S; Michalis, Lampros K

    2008-08-01

    The development of an automated, user-friendly system (ANGIOCARE) for rapid three-dimensional (3D) coronary reconstruction, integrating angiographic and intracoronary ultrasound (ICUS) data. Biplane angiographic and ICUS sequence images are imported into the system, where a prevalidated method is used for coronary reconstruction. This incorporates extraction of the catheter path from two end-diastolic X-ray images and detection of regions of interest (lumen, outer vessel wall) in the ICUS sequence by an automated border detection algorithm. The detected borders are placed perpendicular to the catheter path and established algorithms are used to estimate their absolute orientation. The resulting 3D object is imported into an advanced visualization module with which the operator can interact, examine plaque distribution (depicted as a color-coded map) and assess plaque burden by virtual endoscopy. Data from 19 patients (27 vessels) undergoing biplane angiography and ICUS were examined. The reconstructed vessels were 21.3-80.2 mm long. The mean difference was 0.9 +/- 2.9% between the plaque volumes measured using linear 3D ICUS analysis and the volumes estimated by taking into account the curvature of the vessel. The time required to reconstruct a luminal narrowing of 25 mm was approximately 10 min. The ANGIOCARE system provides rapid coronary reconstruction, allowing the operator to accurately estimate the length of the lesion and determine plaque distribution and volume. (c) 2008 Wiley-Liss, Inc.
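Placing detected borders "perpendicular to the catheter path" amounts to building an orthonormal cross-section frame at each path point and mapping the 2-D contour into it. A sketch using a simple parallel-transport-style frame propagation (the frame scheme and toy path are illustrative assumptions, not the paper's orientation algorithm):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def frames_along_path(path):
    """Orthonormal (tangent, normal, binormal) frames perpendicular to a
    3D polyline, propagated by projecting the previous normal onto each
    new cross-section (a simple parallel-transport-style scheme)."""
    tangents = [normalize(sub(path[i+1], path[i])) for i in range(len(path) - 1)]
    tangents.append(tangents[-1])
    n = normalize(cross(tangents[0], (0.0, 0.0, 1.0)) if abs(tangents[0][2]) < 0.9
                  else cross(tangents[0], (1.0, 0.0, 0.0)))
    frames = []
    for t in tangents:
        # remove the component of the carried normal along t, renormalize
        n = normalize(sub(n, tuple(dot(n, t) * c for c in t)))
        frames.append((t, n, cross(t, n)))
    return frames

def place_contour(center, frame, contour2d):
    """Map 2D (u, v) contour points into the plane perpendicular to the path."""
    _, n, b = frame
    return [tuple(c + u * ni + v * bi for c, ni, bi in zip(center, n, b))
            for u, v in contour2d]

path = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.5, 2.0)]
frames = frames_along_path(path)
ring = [(math.cos(a), math.sin(a)) for a in (0.0, math.pi/2, math.pi, 3*math.pi/2)]
print(place_contour(path[0], frames[0], ring))
```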

  9. SU-E-T-574: Feasibility of Using the Calypso System for HDR Interstitial Catheter Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J S; Ma, C

    2014-06-01

    Purpose: It is always a challenge to reconstruct interstitial catheters for high dose rate (HDR) brachytherapy on patient CT or MR images. This work aims to investigate the feasibility of using the Calypso system (Varian Medical, CA) for HDR catheter reconstruction, utilizing its accuracy in tracking the electromagnetic transponder location. Methods: The experiment was done with a phantom that has an HDR interstitial catheter embedded inside. A CT scan with a slice thickness of 1.25 mm was taken of this phantom with two Calypso beacon transponders in the catheter. The two transponders were connected with a wire. The Calypso system was used to record the beacon transponders' location in real time when they were gently pulled out with the wire. The initial locations of the beacon transponders were used for registration with the CT image and the detected transponder locations were used for the catheter path reconstruction. The reconstructed catheter path was validated on the CT image. Results: The HDR interstitial catheter was successfully reconstructed based on the transponders' coordinates recorded by the Calypso system in real time as the transponders were pulled through the catheter. After registration with the CT image, the shape and location of the reconstructed catheter were evaluated against the CT image, and the result shows an accuracy of 2 mm anywhere in the Calypso detectable region, which is within a 10 cm X 10 cm X 10 cm cubic box for the current system. Conclusion: It is feasible to use the Calypso system for HDR interstitial catheter reconstruction. The obstacle to its clinical usage is the size of the beacon transponder, whose diameter is bigger than most of the interstitial catheters used in the clinic. Developing smaller transponders and supporting software and hardware for this application is necessary before it can be adopted for clinical use.

  10. PathNER: a tool for systematic identification of biological pathway mentions in the literature

    PubMed Central

    2013-01-01

    Background Biological pathways are central to many biomedical studies and are frequently discussed in the literature. Several curated databases have been established to collate the knowledge of molecular processes constituting pathways. Yet, there has been little focus on enabling systematic detection of pathway mentions in the literature. Results We developed a tool, named PathNER (Pathway Named Entity Recognition), for the systematic identification of pathway mentions in the literature. PathNER is based on soft dictionary matching and rules, with the dictionary generated from public pathway databases. The rules utilise general pathway-specific keywords, syntactic information and gene/protein mentions. Detection results from both components are merged. On a gold-standard corpus, PathNER achieved an F1-score of 84%. To illustrate its potential, we applied PathNER on a collection of articles related to Alzheimer's disease to identify associated pathways, highlighting cases that can complement an existing manually curated knowledgebase. Conclusions In contrast to existing text-mining efforts that target the automatic reconstruction of pathway details from molecular interactions mentioned in the literature, PathNER focuses on identifying specific named pathway mentions. These mentions can be used to support large-scale curation and pathway-related systems biology applications, as demonstrated in the example of Alzheimer's disease. PathNER is implemented in Java and made freely available online at http://sourceforge.net/projects/pathner/. PMID:24555844
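The dictionary-plus-rules design can be sketched in a few lines: exact hits against a pathway dictionary are merged with rule-based hits keyed on pathway-specific keywords. The dictionary entries and the regex below are toy stand-ins; PathNER itself uses soft dictionary matching over curated pathway databases plus richer syntactic and gene/protein features:

```python
import re

# Toy dictionary and keyword rule, in the spirit of pathway-mention detection
PATHWAY_DICT = {"wnt signaling pathway", "mapk cascade"}
RULE = re.compile(r"\b\w+ (?:signaling )?pathway\b", re.IGNORECASE)

def find_pathway_mentions(text):
    mentions = set()
    low = text.lower()
    for entry in PATHWAY_DICT:          # exact dictionary hits
        if entry in low:
            mentions.add(entry)
    for m in RULE.finditer(text):       # rule-based hits on the keyword "pathway"
        mentions.add(m.group(0).lower())
    return sorted(mentions)

text = "Amyloid processing engages the Wnt signaling pathway and the notch pathway."
print(find_pathway_mentions(text))  # → ['notch pathway', 'wnt signaling pathway']
```

Merging both result sets, as above, lets the dictionary supply canonical names while the rule catches mentions absent from any database.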

  11. Focal plane wavefront sensor achromatization: The multireference self-coherent camera

    NASA Astrophysics Data System (ADS)

    Delorme, J. R.; Galicher, R.; Baudoz, P.; Rousset, G.; Mazoyer, J.; Dupuis, O.

    2016-04-01

    Context. High contrast imaging and spectroscopy provide unique constraints for exoplanet formation models as well as for planetary atmosphere models. But this can be challenging because of the small planet-to-star angular separation (<1 arcsec) and high flux ratio (>10^5). Recently, optimized instruments like VLT/SPHERE and Gemini/GPI were installed on 8m-class telescopes. These will probe young gaseous exoplanets at large separations (≳1 au) but, because of uncalibrated phase and amplitude aberrations that induce speckles in the coronagraphic images, they are not able to detect older and fainter planets. Aims: There are always aberrations that are slowly evolving in time. They create quasi-static speckles that cannot be calibrated a posteriori with sufficient accuracy. An active correction of these speckles is thus needed to reach very high contrast levels (>10^6-10^7). This requires a focal plane wavefront sensor. Our team proposed the self-coherent camera, the performance of which was demonstrated in the laboratory. As with all focal plane wavefront sensors, it is sensitive to chromatism, and we propose an upgrade that mitigates the chromatism effects. Methods: First, we recall the principle of the self-coherent camera and we explain its limitations in polychromatic light. Then, we present and numerically study two upgrades to mitigate chromatism effects: the optical path difference method and the multireference self-coherent camera. Finally, we present laboratory tests of the latter solution. Results: We demonstrate in the laboratory that the multireference self-coherent camera can be used as a focal plane wavefront sensor in polychromatic light using an 80 nm bandwidth at 640 nm (bandwidth of 12.5%). We reach a performance that is close to the chromatic limitations of our bench: 1σ contrast of 4.5 × 10^-8 between 5 and 17 λ_0/D. Conclusions: The performance of the MRSCC is promising for future high-contrast imaging instruments that aim to actively minimize the

  12. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, due to blockage of an artery in the brain by thrombosis or arterial embolism. Hence, development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is more than necessary. Near infrared (NIR) optical tomographic techniques have great potential to be utilized as a non-invasive imaging tool (due to their low cost and portability) to image embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. As compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for pre-clinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are then able to obtain good reconstructed images by two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). In this study, we demonstrate the volumetric tomographic reconstruction results taken from tissue phantoms; the technique has great potential to determine and monitor the effect of anti-stroke therapies.

  13. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately equal to 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
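A QE measurement against a calibrated photodiode boils down to comparing detected electrons per second with incident photons per second. The arithmetic can be sketched as follows; the power and signal values are hypothetical illustrations, not CLASP measurements:

```python
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def photon_rate(power_w: float, wavelength_m: float) -> float:
    """Incident photons per second for a given optical power."""
    return power_w / (H * C / wavelength_m)

def quantum_efficiency(signal_e_per_s: float, power_w: float, wavelength_m: float) -> float:
    """QE = detected electrons/s divided by incident photons/s."""
    return signal_e_per_s / photon_rate(power_w, wavelength_m)

# e.g. 1 pW of 121.6 nm (Lyman-alpha) light producing 1.84e5 detected e-/s
print(round(quantum_efficiency(1.84e5, 1e-12, 121.6e-9), 2))  # ~0.30
```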

  14. Clinical applications with the HIDAC positron camera

    NASA Astrophysics Data System (ADS)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

    , and more detailed data on a larger number of clinical and experimental PET scans will be necessary for definitive evaluation. Nevertheless, the HIDAC positron camera may be used for clinical PET imaging in well-defined patient cases, particularly in situations where both high spatial resolution is desired in the reconstructed image of the examined pathological condition and at the same time "static" PET imaging may be adequate, as is the case in thyroid-, ENT- and liver tomographic imaging using the HIDAC positron camera.

  15. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    -dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.

  16. Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras.

    PubMed

    Brauers, Johannes; Aach, Til

    2011-02-01

    High-fidelity color image acquisition with a multispectral camera utilizes optical filters to separate the visible electromagnetic spectrum into several passbands. This is often realized with a computer-controlled filter wheel, where each position is equipped with an optical bandpass filter. For each filter wheel position, a grayscale image is acquired and the passbands are finally combined into a multispectral image. However, the different optical properties and non-coplanar alignment of the filters cause image aberrations since the optical path is slightly different for each filter wheel position. As in a normal camera system, the lens causes additional wavelength-dependent image distortions called chromatic aberrations. When transforming the multispectral image with these aberrations into an RGB image, color fringes appear, and the image exhibits a pincushion or barrel distortion. In this paper, we address both the distortions caused by the lens and by the filters. Based on a physical model of the bandpass filters, we show that the aberrations caused by the filters can be modeled by displaced image planes. The lens distortions are modeled by an extended pinhole camera model, which results in a remaining mean calibration error of only 0.07 pixels. Using an absolute calibration target, we then geometrically calibrate each passband and compensate for both lens and filter distortions simultaneously. We show that both types of aberrations can be compensated and present detailed results on the remaining calibration errors.
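The radial part of such a lens-distortion model is commonly written as a polynomial in the squared radius (the Brown model). A minimal sketch of applying the model and scoring it by mean reprojection error; the coefficients k1, k2 and the sample points are illustrative assumptions, not the paper's calibration values:

```python
import math

def distort(xn, yn, k1, k2):
    """Apply simple radial (Brown) distortion to normalized image
    coordinates; an extended pinhole model like the paper's would add
    tangential terms and per-filter image-plane displacements."""
    r2 = xn * xn + yn * yn
    f = 1 + k1 * r2 + k2 * r2 * r2
    return xn * f, yn * f

def reprojection_error(points, observed, k1, k2):
    """Mean distance between model-predicted and observed positions."""
    errs = [math.hypot(*(d - o for d, o in zip(distort(x, y, k1, k2), obs)))
            for (x, y), obs in zip(points, observed)]
    return sum(errs) / len(errs)

pts = [(0.1, 0.0), (0.3, 0.2), (-0.4, 0.1)]
obs = [distort(x, y, -0.2, 0.05) for x, y in pts]
print(reprojection_error(pts, obs, -0.2, 0.05))  # 0.0 for the true parameters
```

In a real calibration, the coefficients would be fitted by minimizing this error over many target observations.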

  17. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    PubMed

    Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul

    2015-01-01

    Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of the few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
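The adjustment step, dividing a raw count by a camera-derived attendance fraction while propagating the fraction's uncertainty, can be sketched with a simple resampling scheme. The attendance distribution and counts below are invented illustrations, and this Monte Carlo is a stand-in for, not a reproduction of, the paper's estimator:

```python
import random

def adjust_count(raw_count, attendance_draws):
    """Re-construct an abundance estimate from a raw count and draws of
    the attendance fraction; returns a median and a 95% interval."""
    estimates = sorted(raw_count / a for a in attendance_draws)
    mid = estimates[len(estimates) // 2]
    lo = estimates[int(0.025 * len(estimates))]
    hi = estimates[int(0.975 * len(estimates))]
    return mid, (lo, hi)

random.seed(0)
# hypothetical attendance: ~80% +/- 5% of breeders present at count time
draws = [random.gauss(0.80, 0.05) for _ in range(10000)]
mid, (lo, hi) = adjust_count(5000, draws)
print(round(mid), round(lo), round(hi))  # median estimate exceeds the raw count of 5000
```

This reproduces the qualitative finding quoted above: adjusted estimates are both higher and more uncertain than the raw counts.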

  18. A photogrammetry-based system for 3D surface reconstruction of prosthetics and orthotics.

    PubMed

    Li, Guang-kun; Gao, Fan; Wang, Zhi-gang

    2011-01-01

    The objective of this study is to develop an innovative close range digital photogrammetry (CRDP) system using commercial digital SLR cameras to measure and reconstruct the 3D surface of prosthetics and orthotics. This paper describes the instrumentation, techniques and preliminary results of the proposed system. The technique works by taking pictures of the object from multiple view angles. The series of pictures is post-processed via feature point extraction, point matching and 3D surface reconstruction. In comparison with traditional methods such as laser scanning, the major advantages of our instrument include lower cost, compact and easy-to-use hardware, satisfactory measurement accuracy, and significantly less measurement time. Besides its potential applications in prosthetics and orthotics surface measurement, the simple setup and its ease of use will make it suitable for various 3D surface reconstructions.

  19. Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.

    2002-04-01

    An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual 3D model generation to fully automated reverse-engineering systems. The progress in CCD sensors and computers provides the background for integration of photogrammetry, as an accurate 3D data source, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurements and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurements is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X realizes the complete technology of 3D reconstruction for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given along with the results of applying them to various tasks of 3D reconstruction. The paper describes the techniques used for non-contact measurements and the methods providing metric characteristics of the reconstructed 3D model. Also, the results of system application for 3D reconstruction of complex industrial objects are presented.
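Intersecting convergent images to obtain 3D coordinates rests on triangulation: each observation defines a ray from a camera center, and the measured point is near where the rays cross. A minimal midpoint-triangulation sketch (the two-camera geometry below is an invented example):

```python
import math

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint triangulation: the 3D point closest to two camera rays
    (origin o, direction d), a standard building block when convergent
    images are intersected to measure spatial coordinates."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = [a - b for a, b in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # nonzero for non-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * di for o, di in zip(o1, d1)]
    p2 = [o + t2 * di for o, di in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras 1 m apart, both looking at the point (0.5, 0, 2)
target = (0.5, 0.0, 2.0)
o1, o2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
norm = lambda v: [c / math.sqrt(sum(x * x for x in v)) for c in v]
d1 = norm([t - o for t, o in zip(target, o1)])
d2 = norm([t - o for t, o in zip(target, o2)])
print(closest_point_between_rays(o1, d1, o2, d2))  # ~[0.5, 0.0, 2.0]
```

With noisy real measurements the rays do not intersect exactly, and the midpoint (or a bundle adjustment over many rays) gives the estimate.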

  20. Born iterative reconstruction using perturbed-phase field estimates

    PubMed Central

    Astheimer, Jeffrey P.; Waag, Robert C.

    2008-01-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements. PMID:19062873

  1. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  2. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  3. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including multifocus plenoptic cameras, plenoptic cameras with arbitrarily arranged microlenses, and plenoptic cameras with microlenses of different sizes. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method requires less manual intervention than previously published methods.
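    The center-finding step can be illustrated with a much simplified stand-in for the paper's morphology-based search: threshold the white image and take the centroid of each bright connected component. The code below is a hedged sketch on a synthetic image, not the published algorithm:

```python
import numpy as np
from collections import deque

def blob_centroids(img, thresh):
    """Label bright connected components (4-connectivity) and return the
    centroid (row, col) of each blob.  A simplified stand-in for the
    paper's morphology-based microlens-center search."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    H, W = mask.shape
    for r in range(H):
        for c in range(W):
            if mask[r, c] and not seen[r, c]:
                # Flood-fill one component, accumulating pixel coords.
                q, pix = deque([(r, c)]), []
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Synthetic "white image": two 3x3 bright squares on a dark background.
img = np.zeros((12, 12))
img[2:5, 2:5] = 1.0
img[7:10, 6:9] = 1.0
centers = blob_centroids(img, 0.5)  # -> [(3.0, 3.0), (8.0, 7.0)]
```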

  4. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  5. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras now offer several megapixels. Video cameras provide a higher frame rate, but at a lower resolution than still cameras, so high resolution and a high frame rate remain incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach instead places sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the camera's utility.

  6. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The increasingly complex test environment mandates a completely integrated image and data collection system. Instrumentation film cameras have been used on test ranges to capture images for decades; their high frame rates, coupled with exceptionally high resolution, make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario consists of multiple cameras on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas, calculating the TSPI of the object by triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. Feedback from test technicians at range facilities throughout the world guided the design of this control system's features. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of better than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
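    The one-inch figure can be sanity-checked with a line of arithmetic, assuming roughly 340 m/s for Mach 1 at sea level (an assumed value; the exact speed of sound depends on conditions):

```python
# How tight must multi-camera phase lock be to keep the time-distance
# delta of a Mach-1 object under one inch?
speed = 340.0            # m/s, approximate speed of sound at sea level
inch = 0.0254            # m
max_skew = inch / speed  # maximum allowable inter-camera timing skew, s
# About 7.5e-5 s, i.e. roughly 75 microseconds of allowable skew.
```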

  7. Trigger and Reconstruction Algorithms for the Japanese Experiment Module- Extreme Universe Space Observatory (JEM-EUSO)

    NASA Technical Reports Server (NTRS)

    Adams, J. H., Jr.; Andreev, Valeri; Christl, M. J.; Cline, David B.; Crawford, Hank; Judd, E. G.; Pennypacker, Carl; Watts, J. W.

    2007-01-01

    The JEM-EUSO collaboration intends to study high energy cosmic ray showers using a large downward-looking telescope mounted on the Japanese Experiment Module of the International Space Station. The telescope focal plane is instrumented with approximately 300k pixels operating as a digital camera, taking snapshots at an approximately 1 MHz rate. We report an investigation of the trigger and reconstruction efficiency of various algorithms based on time and spatial analysis of the pixel images. Our goal is to develop trigger and reconstruction algorithms that will allow the instrument to detect energies low enough to connect smoothly to ground-based observations.

  8. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    PubMed

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated from the object's original three-dimensional information, with the lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully, and the experimental results confirm that it is an effective method of displaying real three-dimensional content on a mobile device.

  9. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  10. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close observation of space targets. To address the problem that a traditional binocular vision system cannot work normally after a disturbance, an online self-referencing calibration method for a binocular stereo measurement camera is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, imaged on the same focal plane as the target, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the parameters of the imaging device are disturbed, the image of the standard reference changes accordingly in the imaging plane even though the physical position of the reference object does not change, so the camera's external parameters can be re-calibrated from the visual relationship of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the system's anti-jamming ability.

  11. pathChirp: Efficient Available Bandwidth Estimation for Network Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cottrell, Les

    2003-04-30

    This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of "self-induced congestion," pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp requires neither synchronized nor highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
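    The defining property of a chirp, exponentially spaced probes so that a single train sweeps a range of probing rates, can be sketched as follows; the function and parameter names are illustrative and not pathChirp's actual interface:

```python
def chirp_send_times(n_packets, first_gap, gamma):
    """Send times for one chirp: consecutive inter-packet gaps shrink by
    a factor gamma, so the instantaneous probing rate (packet size over
    gap) grows exponentially within a single train."""
    times, t, gap = [0.0], 0.0, first_gap
    for _ in range(n_packets - 1):
        t += gap
        times.append(t)
        gap /= gamma
    return times

times = chirp_send_times(5, first_gap=8e-3, gamma=2.0)
gaps = [b - a for a, b in zip(times, times[1:])]
# gaps halve each step: 8 ms, 4 ms, 2 ms, 1 ms
```

    At the receiver, the local time at which inter-arrival gaps start exceeding the send gaps marks the onset of self-induced congestion, from which the available bandwidth is inferred.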

  12. Metric for evaluation of filter efficiency in spectral cameras.

    PubMed

    Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani

    2016-11-10

    Although metric functions that characterize the performance of a colorimetric imaging device have been investigated, a metric for the performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single-function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one for a perfect set of filters and zero for the worst possible set. Results showed that MMOG exhibits a trend more similar to the mean square of the spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
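    Vora-style measures compare the subspace spanned by the filter sensitivities with a target subspace via orthogonal projectors. The sketch below shows one common projector-overlap form, normalized to [0, 1]; it illustrates the idea and is not the exact MMOG definition:

```python
import numpy as np

def projector(M):
    """Orthogonal projector onto the column space of M."""
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

def subspace_overlap(A, B):
    """Projector-overlap score in [0, 1]: 1 when the columns of A span
    the column space of B, 0 when the spaces are orthogonal.  One common
    form of a Vora-style 'goodness' measure."""
    r = np.linalg.matrix_rank(B)
    return float(np.trace(projector(A) @ projector(B)) / r)

B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])           # target subspace
same = subspace_overlap(B, B)                                # -> 1.0
orth = subspace_overlap(np.array([[0.0], [0.0], [1.0]]), B)  # -> 0.0
```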

  13. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, existing 3-D sparse imaging methods require long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes a collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. First, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. The slices are then reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest-descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results demonstrate the effectiveness of the proposed algorithm. Compared with an existing 3-D sparse imaging method, our method performs better in both reconstruction quality and reconstruction time.
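    The basic SL0 iteration, with a tanh-based surrogate standing in for the Gaussian, can be sketched as a projected-gradient loop. This is a hedged illustration only: the paper's Newton-direction acceleration and all SAR-specific processing are omitted:

```python
import numpy as np

def sl0_tanh(A, y, sigma_min=1e-3, sigma_decay=0.6, mu=2.0, inner=5):
    """Minimal SL0-style sparse recovery: smooth the l0 norm, descend,
    and project back onto {x : Ax = y}.  The Gaussian smoothing of the
    original SL0 is replaced by a tanh-based surrogate, loosely in the
    spirit of the modified algorithm described above."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2-norm initial solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # Descent direction of the tanh-smoothed l0 surrogate:
            # small entries are shrunk, large entries barely move.
            u = np.minimum(x**2 / (2.0 * sigma**2), 50.0)  # clip: avoid cosh overflow
            x = x - mu * x / np.cosh(u)**2
            x = x - A_pinv @ (A @ x - y)  # project onto Ax = y
        sigma *= sigma_decay
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17, 31]] = [1.5, -2.0, 1.0]
x_hat = sl0_tanh(A, A @ x_true)
```

    In this easy regime (3 nonzeros, 20 measurements) the loop typically recovers the support of the sparse vector while the projection keeps the data constraint satisfied exactly.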

  14. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  15. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    PubMed

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously validated PDRP networks (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, the differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of the reconstruction algorithm. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in PDRP expression among the algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of the PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. The PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. PET with the HIDAC camera?

    NASA Astrophysics Data System (ADS)

    Townsend, D. W.

    1988-06-01

    In 1982 the first prototype high density avalanche chamber (HIDAC) positron camera became operational in the Division of Nuclear Medicine of Geneva University Hospital. The camera consisted of dual 20 cm × 20 cm HIDAC detectors mounted on a rotating gantry. In 1984, these detectors were replaced by 30 cm × 30 cm detectors with improved performance and reliability. Since then, the larger detectors have undergone clinical evaluation. This article discusses certain aspects of the evaluation program and the conclusions that can be drawn from the results. The potential of the HIDAC camera for quantitative positron emission tomography (PET) is critically examined, and its performance compared with a state-of-the-art, commercial ring camera. Guidelines for the design of a future HIDAC camera are suggested.

  17. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, GPS and an inertial navigation system, as well as an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have long been used for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. Geo-referencing the images is

  18. Uncooled radiometric camera performance

    NASA Astrophysics Data System (ADS)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions for applications currently using traditional photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended-range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high-resolution video imaging creates some unique challenges when using uncooled detectors. A temperature-controlled, field-of-view-limiting aperture (cold shield) is not typically included in the small-volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.

  19. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
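    The classical Lambertian photometric-stereo principle underlying such methods can be shown in a few lines: with three or more known illumination directions, a per-pixel least-squares solve yields the albedo-scaled surface normal. This sketch illustrates only that principle; the paper's SAfS additionally handles realistic lunar reflectance and the illumination-geometry error analysis:

```python
import numpy as np

# Classical Lambertian photometric stereo: intensity I = albedo * (L @ n)
# per pixel, with L the matrix of unit illumination directions.  Solving
# the per-pixel least-squares system recovers the albedo-scaled normal.

L = np.array([              # three known unit illumination directions
    [0.0, 0.0, 1.0],
    [0.8, 0.0, 0.6],
    [0.0, 0.8, 0.6],
])

albedo_true = 0.7
n_true = np.array([0.3, -0.2, 0.933])
n_true /= np.linalg.norm(n_true)

I = albedo_true * L @ n_true                # rendered intensities (noise-free)

g, *_ = np.linalg.lstsq(L, I, rcond=None)   # albedo-scaled normal
albedo_est = np.linalg.norm(g)
n_est = g / albedo_est
```

    With noise-free intensities and an invertible light matrix, the albedo and normal are recovered exactly; surface height then follows by integrating the normal field.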

  20. Thin and thick cloud top height retrieval algorithm with the Infrared Camera and LIDAR of the JEM-EUSO Space Mission

    NASA Astrophysics Data System (ADS)

    Sáez-Cano, G.; Morales de los Ríos, J. A.; del Peral, L.; Neronov, A.; Wada, S.; Rodríguez Frías, M. D.

    2015-03-01

    The origin of cosmic rays has remained a mystery for more than a century. JEM-EUSO is a pioneering space-based telescope that will be located at the International Space Station (ISS); its aim is to detect Ultra High Energy Cosmic Rays (UHECR) and Extremely High Energy Cosmic Rays (EHECR) by observing the atmosphere. Unlike ground-based telescopes, JEM-EUSO will observe from above, so for proper UHECR reconstruction under cloudy conditions a key element of JEM-EUSO is its Atmospheric Monitoring System (AMS). The AMS consists of a space-qualified bi-spectral Infrared Camera, which will provide the cloud coverage and cloud top height in the JEM-EUSO Field of View (FoV), and a LIDAR, which will measure the atmospheric optical depth in the direction in which it is fired. In this paper we explain the effects of clouds on the determination of the UHECR arrival direction. Moreover, since cloud top height retrieval is crucial for analyzing UHECR and EHECR events under cloudy conditions, the retrieval algorithm that fulfills the technical requirements of the JEM-EUSO Infrared Camera for reconstructing the cloud top height is reported here.

  1. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators can also be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras for inspecting the surfaces of planetary bodies require both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude, with transmission currently demonstrated over a line of sight of approximately 100 m. The penetrators can also be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for gathering context information from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or carry solar panels for longer or intermittent missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking in a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm3. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator.
A low-volume array of such penetrator cameras could be deployed from an

  2. Reconstructing Global-scale Ionospheric Outflow With a Satellite Constellation

    NASA Astrophysics Data System (ADS)

    Liemohn, M. W.; Welling, D. T.; Jahn, J. M.; Valek, P. W.; Elliott, H. A.; Ilie, R.; Khazanov, G. V.; Glocer, A.; Ganushkina, N. Y.; Zou, S.

    2017-12-01

    The question of how many satellites it would take to accurately map the spatial distribution of ionospheric outflow is addressed in this study. Given an outflow spatial map, the image is reconstructed from a limited number of virtual satellite passes extracted from the original values. An assessment is conducted of the goodness of fit as a function of the number of satellites in the reconstruction, the placement of the satellite trajectories relative to the polar cap and auroral oval, season and universal time (i.e., dipole tilt relative to the Sun), geomagnetic activity level, and interpolation technique. It is found that the accuracy of the reconstructions increases sharply from one to a few satellites, but then improves only marginally with additional spacecraft beyond four. Increased dwell time of the satellite trajectories in the auroral zone improves the reconstruction; therefore a high-but-not-exactly-polar orbit is most effective for this task. Local time coverage is also an important factor, shifting the auroral zone to different locations relative to the virtual satellite orbit paths. The expansion and contraction of the polar cap and auroral zone with geomagnetic activity influences the coverage of the key outflow regions, with different optimal orbit configurations for each level of activity. Finally, it is found that reconstructing each magnetic latitude band individually produces a better fit to the original image than a 2-D image reconstruction method (e.g., triangulation). A high-latitude, high-altitude constellation mission concept is presented that achieves acceptably accurate outflow reconstructions.
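    The per-latitude-band reconstruction idea can be illustrated with a toy example: within one band, outflow sampled at the local times of the satellite crossings is filled in by 1-D periodic interpolation. All names and the synthetic field below are illustrative, not the study's data:

```python
import numpy as np

def reconstruct_band(mlt_samples, flux_samples, mlt_grid):
    """Fill in one magnetic-latitude band by 1-D periodic interpolation
    in magnetic local time (hours, 0-24).  An illustrative stand-in for
    the per-band reconstruction the study found superior to 2-D methods."""
    order = np.argsort(mlt_samples)
    mlt = np.asarray(mlt_samples, dtype=float)[order]
    flux = np.asarray(flux_samples, dtype=float)[order]
    # Pad one period on each side so interpolation wraps at 0/24 h.
    mlt_p = np.concatenate([mlt - 24.0, mlt, mlt + 24.0])
    flux_p = np.concatenate([flux, flux, flux])
    return np.interp(mlt_grid, mlt_p, flux_p)

# Synthetic band: outflow peaks near noon MLT, sampled at 8 crossings.
truth = lambda mlt: 1.0 + 0.5 * np.cos(2 * np.pi * (mlt - 12.0) / 24.0)
crossings = np.array([1.0, 4.0, 7.0, 10.0, 13.0, 16.0, 19.0, 22.0])
grid = np.linspace(0.0, 24.0, 49)
recon = reconstruct_band(crossings, truth(crossings), grid)
```

    With eight crossings per band the piecewise-linear reconstruction stays within a few percent of the smooth synthetic field, mirroring the study's finding that a handful of well-placed passes already captures most of the structure.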

  3. Implications of path tolerance and path characteristics on critical vehicle manoeuvres

    NASA Astrophysics Data System (ADS)

    Lundahl, K.; Frisk, E.; Nielsen, L.

    2017-12-01

    Path planning and path following are core components in safe autonomous driving. Typically, a path planner provides a path with some tolerance on how tightly the path should be followed. Based on that, and on other path characteristics such as the sharpness of curves, a speed profile needs to be assigned so that the vehicle can stay within the given tolerance without going unnecessarily slowly. Here, such trajectory planning is based on optimal control formulations where critical cases arise as on-the-limit solutions. The study focuses on heavy commercial vehicles, for which rollover is a major concern due to the relatively high centre of gravity. Several results are obtained on required model complexity depending on path characteristics, for example, quantification of the path tolerance required for a simple model to be sufficient, quantification of when yaw inertia needs to be considered in more detail, and how the curvature rate of change interplays with available friction. Overall, in situations where the vehicle is subject to a wide range of driving conditions, from good transport roads to trickier avoidance manoeuvres, the requirements on path following will vary. The provided results form a basis for real-time path following under these conditions.
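    A minimal sketch of assigning a speed profile from path characteristics (not the paper's optimal-control formulation): cap speed by a lateral-acceleration limit derived from curvature, then enforce a longitudinal acceleration limit with forward/backward passes. All limit values are illustrative.

    ```python
    import numpy as np

    def speed_profile(curvature, ds, a_lat_max=3.0, a_lon_max=2.0, v_max=25.0):
        # Lateral limit: v^2 * kappa <= a_lat_max
        v = np.minimum(v_max, np.sqrt(a_lat_max / np.maximum(np.abs(curvature), 1e-9)))
        for i in range(1, len(v)):               # forward pass: acceleration limit
            v[i] = min(v[i], np.sqrt(v[i - 1] ** 2 + 2 * a_lon_max * ds))
        for i in range(len(v) - 2, -1, -1):      # backward pass: braking limit
            v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2 * a_lon_max * ds))
        return v

    # Example path: straight section, tightening curve, straight section.
    kappa = np.concatenate([np.zeros(50), np.linspace(0, 0.05, 50), np.zeros(50)])
    v = speed_profile(kappa, ds=1.0)
    print(f"minimum safe speed: {v.min():.1f} m/s")
    ```

    A rollover constraint for a high centre of gravity would enter the same way, as a tighter effective `a_lat_max`.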

  4. Localization and spectral isolation of special nuclear material using stochastic image reconstruction

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Clarke, S. D.; Pozzi, S. A.

    2017-01-01

    In this work we present a technique for isolating the gamma-ray and neutron energy spectra from multiple radioactive sources localized in an image. Image reconstruction algorithms for radiation scatter cameras typically focus on improving image quality. However, with scatter cameras being developed for non-proliferation applications, there is a need for not only source localization but also source identification. This work outlines a modified stochastic origin ensembles algorithm that provides localized spectra for all pixels in the image. We demonstrated the technique by performing three experiments with a dual-particle imager that measured various gamma-ray and neutron sources simultaneously. We showed that we could isolate the peaks from 22Na and 137Cs and that the energy resolution is maintained in the isolated spectra. To evaluate the spectral isolation of neutrons, a 252Cf source and a PuBe source were measured simultaneously and the reconstruction showed that the isolated PuBe spectrum had a higher average energy and a greater fraction of neutrons at higher energies than the 252Cf. Finally, spectrum isolation was used for an experiment with weapons grade plutonium, 252Cf, and AmBe. The resulting neutron and gamma-ray spectra showed the expected characteristics that could then be used to identify the sources.

  5. 3D terrain reconstruction using Chang’E-3 PCAM images

    NASA Astrophysics Data System (ADS)

    Chen, Wangli; Zeng, Xingguo; Zhang, Hongbo

    2017-10-01

    In order to improve understanding of the topography of the Chang’E-3 landing site, 3D terrain models are reconstructed using PCAM images. PCAM (panoramic camera) is a stereo camera system with a 27 cm baseline on board the Yutu rover. It obtained panoramic images at four detection sites and can achieve a resolution of 1.48 mm/pixel at 10 m, so the PCAM images reveal fine details of the detection region. In the method, SIFT is employed for feature description and feature matching. In addition to the collinearity equations, the measured baseline of the stereo system is used in the bundle adjustment to solve for the orientation parameters of all images. Pair-wise depth map computation is then applied for dense surface reconstruction, and finally a DTM of the detection region is generated. The DTM covers an area with a radius of about 20 m, centred at the location of the camera. By design, each individual wheel of the Yutu rover leaves three tracks on the lunar surface, with a width of 15 cm between the first and third track, and these tracks are clear and distinguishable in the images. We therefore chose the second detection site, where the wheel tracks are most clearly recognizable, to evaluate the accuracy of the DTM. We measured the width of the wheel tracks every 1.5 m from the centre of the detection region and obtained 13 measurements, avoiding areas where the wheel tracks are ambiguous. The results show that the mean wheel track width is 0.155 m with a standard deviation of 0.007 m. Generally, the closer to the centre, the more accurate the measurement of the wheel width is. This is because image deformation grows with distance from the camera, which degrades DTM quality in far areas. In our work, images of the four detection sites are adjusted independently, meaning that there is no tie point between different sites.
So deviations between the locations of the same object measured
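    The depth-accuracy behaviour noted above (better near the camera) follows from the standard stereo relation Z = f·B/d. A small sketch using the 27 cm baseline from the abstract; the focal length in pixels is an assumed value, not from the paper:

    ```python
    # Stereo depth from Z = f * B / d with PCAM's reported 27 cm baseline.
    f_px = 2100.0   # focal length in pixels (illustrative assumption)
    B = 0.27        # baseline in metres (from the abstract)

    def depth_from_disparity(d_px):
        return f_px * B / d_px

    # A fixed one-pixel disparity error causes a depth error that grows
    # roughly quadratically with range, which is why far areas degrade.
    for Z in (5.0, 10.0, 20.0):
        d = f_px * B / Z
        dZ = f_px * B / (d - 1.0) - Z
        print(f"range {Z:4.1f} m -> disparity {d:6.1f} px, 1-px depth error {dZ:.3f} m")
    ```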

  6. Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2010-02-01

    We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.

  7. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response from five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data fitting techniques were applied to these measured response curves to obtain nonlinear expressions which estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA camera response model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. The model was subsequently incorporated into the COBRA sensor performance evaluation and research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
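    As a hedged illustration of this kind of data fitting (the actual COBRA expressions are not reproduced here), one could fit a power-law response DN = k·E^a·G^b·t^c to measured data by linear least squares in log space; the model form and all numbers below are assumptions for illustration.

    ```python
    import numpy as np

    # Synthetic "laboratory" data: digitized output DN vs irradiance E,
    # intensifier gain G, and exposure t, with small multiplicative noise.
    rng = np.random.default_rng(1)
    n = 200
    E = rng.uniform(0.1, 1.0, n)
    G = rng.uniform(1.0, 10.0, n)
    t = rng.uniform(1e-3, 1e-2, n)
    DN = 500.0 * E ** 0.9 * G ** 1.1 * t ** 0.8 * rng.lognormal(0.0, 0.01, n)

    # log(DN) = log(k) + a*log(E) + b*log(G) + c*log(t): a linear problem.
    A = np.column_stack([np.ones(n), np.log(E), np.log(G), np.log(t)])
    coef, *_ = np.linalg.lstsq(A, np.log(DN), rcond=None)
    log_k, a, b, c = coef
    print(f"k={np.exp(log_k):.1f}, a={a:.2f}, b={b:.2f}, c={c:.2f}")
    ```

    A model linear in log space is also trivially invertible for irradiance, one of the properties the abstract highlights.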

  8. Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST

    NASA Image and Video Library

    1993-12-06

    S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  9. Fisheye camera around view monitoring system

    NASA Astrophysics Data System (ADS)

    Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen

    2018-04-01

    The 360-degree around view monitoring system is a key technology of advanced driver assistance systems; it is used to help the driver cover blind areas and has high application value. In this paper, we study the transformation relationships among multiple coordinate systems to generate a panoramic image in a unified car coordinate system. Firstly, the panoramic image is divided into four regions. Using the parameters obtained by calibration, the pixels of the four fisheye images corresponding to the four sub-regions are mapped onto the constructed panoramic image. On the basis of the 2D around view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around view schemes in the unified coordinate system; the 3D scheme overcomes shortcomings of the traditional 2D scheme, such as a small visual field and prominent deformation of ground objects. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image.

  10. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  11. Real-time imaging of methane gas leaks using a single-pixel camera.

    PubMed

    Gibson, Graham M; Sun, Baoqing; Edgar, Matthew P; Phillips, David B; Hempler, Nils; Maker, Gareth T; Malcolm, Graeme P A; Padgett, Miles J

    2017-02-20

    We demonstrate a camera which can image methane gas at video rates, using only a single-pixel detector and structured illumination. The light source is an infrared laser diode operating at 1.651 μm, tuned to an absorption line of methane gas. The light is structured using an addressable micromirror array to pattern the laser output with a sequence of Hadamard masks. The resulting backscattered light is recorded using a single-pixel InGaAs detector, which provides a measure of the correlation between the projected patterns and the gas distribution in the scene. Knowledge of this correlation and of the patterns allows an image of the gas in the scene to be reconstructed. For the application of locating gas leaks, the frame rate of the camera is of primary importance; in this case it is inversely proportional to the square of the linear resolution. Here we demonstrate gas imaging at ~25 fps while using 256 mask patterns (corresponding to an image resolution of 16×16). To aid the task of locating the source of the gas emission, we overlay an upsampled and smoothed version of the low-resolution gas image onto a high-resolution color image of the scene, recorded using a standard CMOS camera. With an illumination of only 5 mW across the field of view, we demonstrate imaging of a methane gas leak of ~0.2 litres/minute from a distance of ~1 metre.
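    The Hadamard reconstruction at the heart of such a single-pixel camera can be sketched as follows: project ±1 masks (physically realized as complementary binary mask pairs), record one detector value per mask, and invert with the transpose, since a Sylvester-construction Hadamard matrix satisfies HᵀH = nI. The scene here is a synthetic stand-in for a gas plume.

    ```python
    import numpy as np

    def sylvester_hadamard(n):
        # Build an n x n Hadamard matrix by the Sylvester doubling construction
        # (n must be a power of two).
        H = np.array([[1.0]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    n = 256                                        # 16x16 image -> 256 masks
    scene = np.zeros((16, 16))
    scene[5:11, 4:9] = 1.0                         # synthetic "plume"
    x = scene.ravel()

    H = sylvester_hadamard(n)
    y = H @ x                                      # one detector reading per mask
    x_rec = (H.T @ y) / n                          # exact inverse: H^T H = n I
    print("max reconstruction error:", np.abs(x_rec - x).max())
    ```

    With noiseless measurements the recovery is exact up to floating-point error; in practice fewer-than-n measurements and noise motivate the low 16×16 resolution quoted above.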

  12. Comparison of 3d Reconstruction Services and Terrestrial Laser Scanning for Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Rasztovits, S.; Dorninger, P.

    2013-07-01

    Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series from different digital cameras. Two different web services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economical comparison of the model generation is given, considering both interactive and processing time costs.

  13. High-precision real-time 3D shape measurement based on a quad-camera system

    NASA Astrophysics Data System (ADS)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density in the projected patterns which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, to guarantee the reliability of phase unwrapping, available techniques usually come at the cost of an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
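    The standard three-step PSP relation underlying such a system (the wrapped-phase recovery step, not the paper's quad-camera unwrapping pipeline) can be verified in a few lines:

    ```python
    import numpy as np

    # Three fringe images with phase shifts of -2*pi/3, 0, +2*pi/3.
    x = np.linspace(0, 4 * np.pi, 1000)      # true (unwrapped) fringe phase
    A, B = 0.5, 0.4                          # background and modulation
    I1 = A + B * np.cos(x - 2 * np.pi / 3)
    I2 = A + B * np.cos(x)
    I3 = A + B * np.cos(x + 2 * np.pi / 3)

    # Standard three-step relation: phi = atan2(sqrt(3)*(I1-I3), 2*I2-I1-I3),
    # which cancels both A and B and returns the phase wrapped to (-pi, pi].
    phi = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)

    err = np.angle(np.exp(1j * (phi - x)))   # compare modulo 2*pi
    print("max wrapped-phase error:", np.abs(err).max())
    ```

    The ambiguity the paper eliminates is exactly the unknown integer multiple of 2π separating `phi` from `x`, which here is resolved only because the ground truth is known.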

  14. Range Finding with a Plenoptic Camera

    DTIC Science & Technology

    2014-03-27

    (No abstract available; the retrieved record text is table-of-contents residue listing sections on experimental results and simulated camera analyses: varying lens diameter, varying detector size, a matching framework, and simulated camera performance with SIFT.)

  15. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras conducted at Kinki University for more than ten years, currently pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been carried out, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the need to develop a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; its sensor is the same as that developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  16. 3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images

    NASA Astrophysics Data System (ADS)

    Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Thom, S. A. McG.

    Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to two-dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential-matrix-based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach, and the correspondence between points from different images was calculated. The results of 3D reconstruction show the centreline of the retinal vessels and their 3D curvature clearly. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.
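    The core two-view step, recovering a 3D point from its projections in two images, can be sketched with linear (DLT) triangulation; the camera matrices below are synthetic examples, not the retinal system's self-calibration:

    ```python
    import numpy as np

    def triangulate(P1, P2, u1, u2):
        # Linear (DLT) triangulation: each image point gives two rows of a
        # homogeneous system A X = 0; the solution is the last right-singular vector.
        A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                       u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    # Two synthetic cameras separated by a 10 cm baseline along x.
    K = np.diag([800.0, 800.0, 1.0])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    X_true = np.array([0.02, -0.01, 0.5])          # point 0.5 m from camera 1
    h1 = P1 @ np.append(X_true, 1.0); u1 = h1[:2] / h1[2]
    h2 = P2 @ np.append(X_true, 1.0); u2 = h2[:2] / h2[2]
    print("recovered point:", triangulate(P1, P2, u1, u2))
    ```

    With noise-free correspondences the point is recovered exactly; real vessel centreline points would carry segmentation and calibration error.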

  17. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images and has great potential for the acquisition of spatial information for large-scale mapping, especially for rapid response and precise modelling in disaster emergencies.
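    The image-topology idea of pruning candidate matching pairs with flight-control data can be sketched as follows; the positions and the overlap radius are illustrative values, not the paper's:

    ```python
    import numpy as np
    from itertools import combinations

    # Synthetic camera centres from flight-control (GPS) logs, in metres.
    rng = np.random.default_rng(2)
    positions = rng.uniform(0, 1000, size=(40, 2))

    def candidate_pairs(positions, radius=150.0):
        # Keep only image pairs whose camera centres are close enough to
        # plausibly overlap, instead of exhaustively matching all pairs.
        pairs = []
        for i, j in combinations(range(len(positions)), 2):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                pairs.append((i, j))
        return pairs

    pairs = candidate_pairs(positions)
    total = len(positions) * (len(positions) - 1) // 2
    print(f"matching {len(pairs)} of {total} possible pairs")
    ```

    Feature matching then runs only on `pairs`, which is where the reported speed-up comes from.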

  18. Synthetic-Aperture Coherent Imaging From A Circular Path

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1995-01-01

    Imaging algorithms based on exact point-target responses. Developed for use in reconstructing image of target from data gathered by radar, sonar, or other transmitting/receiving coherent-signal sensory apparatus following circular observation path around target. Potential applications include: Wide-beam synthetic-aperture radar (SAR) from aboard spacecraft in circular orbit around target planet; SAR from aboard airplane flying circular course at constant elevation around central ground point, toward which spotlight radar beam pointed; Ultrasonic reflection tomography in medical setting, using one transducer moving in circle around patient or else multiple transducers at fixed positions on circle around patient; and Sonar imaging of sea floor to high resolution, without need for large sensory apparatus.

  19. Implementation of a close range photogrammetric system for 3D reconstruction of a scoliotic torso

    NASA Astrophysics Data System (ADS)

    Detchev, Ivan Denislavov

    Scoliosis is a deformity of the human spine most commonly encountered with children. After being detected, periodic examinations via x-rays are traditionally used to measure its progression. However, due to the increased risk of cancer, a non-invasive and radiation-free scoliosis detection and progression monitoring methodology is needed. Quantifying the scoliotic deformity through the torso surface is a valid alternative, because of its high correlation with the internal spine curvature. This work proposes a low-cost multi-camera photogrammetric system for semi-automated 3D reconstruction of a torso surface with sub-millimetre level accuracy. The thesis describes the system design and calibration for optimal accuracy. It also covers the methodology behind the reconstruction and registration procedures. The experimental results include the complete reconstruction of a scoliotic torso mannequin. The final accuracy is evaluated through the goodness of fit between the reconstructed surface and a more accurate set of points measured by a coordinate measuring machine.

  20. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, which also improves the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, so it should have a good market.

  1. Digital micromirror device-based common-path quantitative phase imaging.

    PubMed

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Yaqoob, Zahid; So, Peter T C

    2017-04-01

    We propose a novel common-path quantitative phase imaging (QPI) method based on a digital micromirror device (DMD). The DMD is placed in a plane conjugate to the objective back-aperture plane for the purpose of generating two plane waves that illuminate the sample. A pinhole is used in the detection arm to filter one of the beams after the sample to create a reference beam. Additionally, a transmission-type liquid crystal device, placed at the objective back-aperture plane, eliminates the specular reflection noise arising from all the "off"-state DMD micromirrors, which is common in all DMD-based illuminations. We have demonstrated high-sensitivity QPI, with measured spatial and temporal noise of 4.92 nm and 2.16 nm, respectively. Experiments with calibrated polystyrene beads illustrate the desired phase measurement accuracy. In addition, we have measured the dynamic height maps of red blood cell membrane fluctuations, showing the efficacy of the proposed system for live cell imaging. Most importantly, the DMD grants the system convenience in varying the interference fringe period on the camera to easily satisfy the pixel sampling conditions. This feature also alleviates the pinhole alignment complexity. We envision that the proposed DMD-based common-path QPI system will allow for system miniaturization and automation for broader adoption.

  2. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  3. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box, and uses our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  4. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    PubMed

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement based on structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projection point and five non-collinear points randomly selected from the target are adopted to construct a projection invariant. Closed-form solutions for the 3D laser points are obtained from the homogeneous linear equations generated from the projection invariants. An optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Nonlinearly optimized solutions for the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are then obtained by minimizing this function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of image quantity, lens distortion and noise are investigated in the experiments, which demonstrate that the reconstruction approach is effective for accurate measurement.

  5. Human tracking over camera networks: a review

    NASA Astrophysics Data System (ADS)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress in human tracking techniques over camera networks.

  6. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control via HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.

  7. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Engel, James R.; Vaillancourt, Robert; Todd, Lori; Mottus, Kathleen

    2008-04-01

    OPTRA and University of North Carolina are developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach will be considered as a candidate referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize progress to date and overall system performance projections based on the instrument, spectroscopy, and tomographic reconstruction accuracy. We then present a preliminary optical design of the I-OP-FTIR.

  8. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707

  9. Pulled Motzkin paths

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.

    2010-08-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients, and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed path models to their natural trinomial counterparts.
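
    A quick numerical check of the two limiting behaviors quoted above (an illustrative sketch; the function names are ours, not from the paper):

```python
import math

def R_endpoint(f):
    # Elastic response for a pull at the endpoint: f(1 + 2cosh f) / (2 sinh f)
    return f * (1 + 2 * math.cosh(f)) / (2 * math.sinh(f))

def R_midpoint(f):
    # Elastic response for a pull at the midpoint: f(1 + 2cosh(f/2)) / sinh(f/2)
    return f * (1 + 2 * math.cosh(f / 2)) / math.sinh(f / 2)

# For large pulling forces, R_endpoint(f) -> f and R_midpoint(f) -> 2f
for f in (10.0, 20.0, 40.0):
    print(f, R_endpoint(f) / f, R_midpoint(f) / (2 * f))
```

    Both ratios approach 1 as f grows, matching the asymptotics stated in the abstract.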

  10. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow key ecological features and moments to be extracted from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras for monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.

  11. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  12. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed, with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images, even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compression as low as 10% was also demonstrated.
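
    The measurement scheme described above can be sketched in a few lines of numpy: each scrambled Hadamard pattern yields one scalar detector reading, and at full sampling the field is recovered with a fast Walsh-Hadamard transform. This is an illustrative sketch under our own naming, not the authors' code, and the compressed-sensing path for sub-sampled data is omitted:

```python
import numpy as np

def fwht(x):
    # Fast Walsh-Hadamard transform: computes H @ x for the Sylvester-ordered
    # Hadamard matrix H (length must be a power of two)
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

rng = np.random.default_rng(0)
n = 64                                # e.g. an 8x8 acoustic field, flattened
field = rng.normal(size=n)            # pressure values we want to map

# Scrambled Hadamard sensing: display the Hadamard rows in random order;
# each pattern produces a single photodetector reading (one scalar)
H = np.stack([fwht(np.eye(n)[k]) for k in range(n)])
perm = rng.permutation(n)
readings = H[perm] @ field

# Fully sampled recovery: undo the scrambling, then invert with H/n (H @ H = n*I)
unscrambled = np.empty(n)
unscrambled[perm] = readings
recovered = fwht(unscrambled) / n
print(np.allclose(recovered, field))  # True
```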

  13. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  14. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays the 3D image generated from a real object in a 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model from the anamorphic optic system's given angular step. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.

  15. THE DARK ENERGY CAMERA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
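
    The quoted 570 megapixel figure follows directly from the CCD counts given in the abstract:

```python
imaging = 62 * 2048 * 4096   # 62 science CCDs of 2k x 4k pixels
guiding = 12 * 2048 * 2048   # 12 guide/focus CCDs of 2k x 2k pixels
total_pixels = imaging + guiding
print(total_pixels / 1e6)    # ≈ 570.4 megapixels, matching the quoted figure
```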

  16. The Dark Energy Camera

    DOE PAGES

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  17. Coordinating High-Resolution Traffic Cameras : Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  18. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  19. Digital X-ray camera for quality evaluation three-dimensional topographic reconstruction of single crystals of biological macromolecules

    NASA Technical Reports Server (NTRS)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  20. ARNICA, the Arcetri Near-Infrared Camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Bilotti, V.; Bonaccini, D.; del Vecchio, C.; Gennari, S.; Hunt, L. K.; Marcucci, G.; Stanga, R.

    1996-04-01

    ARNICA (ARcetri Near-Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 microns that the Arcetri Observatory has designed and built for the Infrared Telescope TIRGO located at Gornergrat, Switzerland. We describe the mechanical and optical design of the camera, and report on the astronomical performance of ARNICA as measured during the commissioning runs at the TIRGO (December, 1992 to December 1993), and an observing run at the William Herschel Telescope, Canary Islands (December, 1993). System performance is defined in terms of efficiency of the camera+telescope system and camera sensitivity for extended and point-like sources. (SECTION: Astronomical Instrumentation)

  1. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. The camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirror and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is possible.

  2. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485

  3. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.

  4. IMAX camera in payload bay

    NASA Image and Video Library

    1995-12-20

    STS074-361-035 (12-20 Nov 1995) --- This medium close-up view centers on the IMAX Cargo Bay Camera (ICBC) and its associated IMAX Camera Container Equipment (ICCE) at its position in the cargo bay of the Earth-orbiting Space Shuttle Atlantis. With its own "space suit," or protective covering, to protect it from the rigors of space, this version of the IMAX was able to record scenes not accessible with the in-cabin cameras. For docking and undocking activities involving Russia's Mir Space Station and the Space Shuttle Atlantis, the camera joined a variety of in-cabin camera hardware in recording the historical events. IMAX's secondary objectives were to film Earth views. The IMAX project is a collaboration between NASA, the Smithsonian Institution's National Air and Space Museum (NASM), IMAX Systems Corporation, and the Lockheed Corporation to document significant space activities and promote NASA's educational goals using the IMAX film medium.

  5. Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures

    NASA Astrophysics Data System (ADS)

    Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi

    2017-04-01

    Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
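
    Of the four correlation functions named above, the two-point probability function is the simplest to sketch. Below is a minimal periodic FFT-based estimator on a synthetic binary pore map (an illustration of the statistic, not the authors' reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(1)
pores = rng.random((64, 64)) < 0.3       # synthetic binary pore map, porosity ~0.3

def two_point_probability(phase):
    # S2(r): probability that two points separated by lag r both fall in `phase`,
    # estimated here with a periodic FFT autocorrelation
    f = np.fft.fft2(phase.astype(float))
    return np.fft.ifft2(f * np.conj(f)).real / phase.size

s2 = two_point_probability(pores)
# At zero lag, S2 equals the volume fraction (porosity) of the phase
print(s2[0, 0], pores.mean())
```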

  6. Path selection in the growth of rivers

    DOE PAGES

    Cohen, Yossi; Devauchelle, Olivier; Seybold, Hansjörg F.; ...

    2015-11-02

    River networks exhibit a complex ramified structure that has inspired decades of studies. But, an understanding of the propagation of a single stream remains elusive. In this paper, we invoke a criterion for path selection from fracture mechanics and apply it to the growth of streams in a diffusion field. We show that, as it cuts through the landscape, a stream maintains a symmetric groundwater flow around its tip. The local flow conditions therefore determine the growth of the drainage network. We use this principle to reconstruct the history of a network and to find a growth law associated with it. Finally, our results show that the deterministic growth of a single channel based on its local environment can be used to characterize the structure of river networks.

  7. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    NASA Astrophysics Data System (ADS)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add new features and improve existing features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state-of-the-art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun).
Performance results using three different prototype cameras

  8. SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, S; Uesaka, M; Nishio, T

    2016-06-15

    Purpose: In the treatment planning of proton therapy, the water equivalent length (WEL), which is the parameter for calculating the dose and range of protons, is derived from the X-ray CT (xCT) image via an xCT-WEL conversion. However, an error of about a few percent in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for an evaluation of this error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires a two-dimensional image of the integrated value of the scintillation light along the beam direction. The pCT image is reconstructed by the FBP method using a correction between the light intensity and the residual range of the proton beam. An experiment to demonstrate this system was performed with a 70-MeV proton beam provided by the NIRS cyclotron. The pCT images of several objects reconstructed from the experimental data were evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A fine structure of approximately 1 mm was clearly observed. The position resolution of the pCT image was almost the same as that of the xCT image, and the error of the pCT pixel value was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed a pCT imaging system using a thick scintillator and a CCD camera, and evaluated it in an experiment using a 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.

  9. Path integration: effect of curved path complexity and sensory system on blindfolded walking.

    PubMed

    Koutakis, Panagiotis; Mukherjee, Mukul; Vallabhajosula, Srikant; Blanke, Daniel J; Stergiou, Nicholas

    2013-02-01

    Path integration refers to the ability to integrate continuous information of the direction and distance traveled by the system relative to the origin. Previous studies have investigated path integration through blindfolded walking along simple paths such as straight line and triangles. However, limited knowledge exists regarding the role of path complexity in path integration. Moreover, little is known about how information from different sensory input systems (like vision and proprioception) contributes to accurate path integration. The purpose of the current study was to investigate how sensory information and curved path complexity affect path integration. Forty blindfolded participants had to accurately reproduce a curved path and return to the origin. They were divided into four groups that differed in the curved path, circle (simple) or figure-eight (complex), and received either visual (previously seen) or proprioceptive (previously guided) information about the path before they reproduced it. The dependent variables used were average trajectory error, walking speed, and distance traveled. The results indicated that (a) both groups that walked on a circular path and both groups that received visual information produced greater accuracy in reproducing the path. Moreover, the performance of the group that received proprioceptive information and later walked on a figure-eight path was less accurate than their corresponding circular group. The groups that had the visual information also walked faster compared to the group that had proprioceptive information. Results of the current study highlight the roles of different sensory inputs while performing blindfolded walking for path integration. Copyright © 2012 Elsevier B.V. All rights reserved.
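
    The abstract's "average trajectory error" can be illustrated with a simplified stand-in metric (our assumption, since the paper's exact definition is not given here): the mean distance from each point of the reproduced path to its nearest point on the target path.

```python
import numpy as np

def average_trajectory_error(reproduced, target):
    # Mean distance from each reproduced sample to its nearest point on the target path
    d = np.linalg.norm(reproduced[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1).mean()

t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])   # target: unit-radius circular path
walked = 1.1 * circle                              # reproduced path, walked 10% too wide
err = average_trajectory_error(walked, circle)
print(err)                                         # ≈ 0.1
```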

  10. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rates) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that would enable much faster shooting speeds than currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
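
    The in-pixel memory depth and frame rate quoted above imply a very short continuous record window, easily checked:

```python
frames_in_memory = 144        # per-pixel CCD memory depth (from the abstract)
frame_rate = 2_000_000        # 2M fps
record_window_s = frames_in_memory / frame_rate
print(record_window_s * 1e6)  # about 72 microseconds of continuous capture
```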

  11. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    PubMed

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation, and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
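
    The surface-matching step amounts to a rigid (rotation plus translation) point-set alignment. A minimal Kabsch-style sketch on synthetic data is shown below; this is our illustration of the general technique, not the janus3D implementation, and the variable names are hypothetical:

```python
import numpy as np

def rigid_align(src, dst):
    # Kabsch algorithm: least-squares rotation R and translation t with dst ~ src @ R.T + t
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    if np.linalg.det(Vt.T @ U.T) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
    R = Vt.T @ U.T
    return R, dc - R @ sc

rng = np.random.default_rng(2)
landmarks = rng.normal(size=(10, 3))    # hypothetical photo-derived facial landmarks
theta = 0.5                             # known test rotation about the z-axis
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
mri_points = landmarks @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_align(landmarks, mri_points)
residual = np.linalg.norm(landmarks @ R.T + t - mri_points)
print(residual)                         # near zero (machine precision)
```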

  12. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera

    PubMed Central

    Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791

  13. Science, conservation, and camera traps

    USGS Publications Warehouse

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  14. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters used to evaluate their performance, and describes some of the key features of different camera formats. The chapter also explains the basic operation of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for dimmer samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.

  15. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, circuit description and changes to such circuits as a result of integration and test, maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  16. Zero-Slack, Noncritical Paths

    ERIC Educational Resources Information Center

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique method of project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…

  17. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  18. Age estimation of the Deccan Traps from the North American apparent polar wander path

    NASA Technical Reports Server (NTRS)

    Stoddard, Paul R.; Jurdy, Donna M.

    1988-01-01

    It has recently been proposed that flood basalt events, such as the eruption of the Deccan Traps, have been responsible for mass extinctions. To test this hypothesis, accurate estimations of the ages and duration of these events are needed. In the case of the Deccan Traps, however, neither age nor duration of emplacement is well constrained; measured ages range from 40 to more than 80 Myr, and estimates of duration range from less than 1 to 67 Myr. To make an independent age determination, paleomagnetic and sea-floor-spreading data are used, and the associated errors are estimated. The Deccan paleomagnetic pole is compared with the reference apparent polar wander path of North America by rotating the positions of the paleomagnetic pole for the Deccan Traps to the reference path for a range of assumed ages. Uncertainties in the apparent polar wander path, Deccan paleopole position, and errors resulting from the plate reconstruction are estimated. It is suggested that 83-70 Myr is the most likely time of extrusion of these volcanic rocks.

  19. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.

  20. [Use of indocyanine green angiography in oncological and reconstructive breast surgery].

    PubMed

    Struk, S; Honart, J-F; Qassemyar, Q; Leymarie, N; Sarfati, B; Alkhashnam, H; Mazouni, C; Rimareix, F; Kolb, F

    2018-02-01

    Indocyanine green (ICG) is a soluble dye that is eliminated by the liver and excreted in bile. When illuminated by near-infrared light, ICG emits fluorescence in the near-infrared spectrum, which can be captured by a hand-held near-infrared camera. In case of intravenous injection, ICG may be used as a marker of skin perfusion. In case of interstitial injection, it may be useful for lymphatic network mapping. In oncological and reconstructive breast surgery, ICG is used for sentinel lymph node identification, to predict mastectomy skin flap necrosis, to assess the perfusion of free flaps in autologous reconstruction, and for the diagnosis and treatment of upper limb secondary lymphedema. Intraoperative indocyanine green fluorescence might also be used to guide the excision of nonpalpable breast cancer. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  1. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.

  2. Influence of reconstruction algorithms on image quality in SPECT myocardial perfusion imaging.

    PubMed

    Davidsson, Anette; Olsson, Eva; Engvall, Jan; Gustafsson, Agnetha

    2017-11-01

    We investigated whether image and diagnostic quality in SPECT MPI could be maintained despite a reduced acquisition time by adding Depth-Dependent Resolution Recovery (DDRR) to image reconstruction. Images were compared with filtered back projection (FBP) and iterative reconstruction using Ordered Subsets Expectation Maximization with (IRAC) and without (IRNC) attenuation correction (AC). Stress and rest imaging for 15 min was performed on 21 subjects with a dual-head gamma camera (Infinia Hawkeye; GE Healthcare), ECG-gating with 8 frames/cardiac cycle and a low-dose CT scan. A 9 min acquisition was generated using five instead of eight gated frames and was reconstructed with DDRR, with (IRACRR) and without AC (IRNCRR), as well as with FBP. Three experienced nuclear medicine specialists visually assessed anonymized images according to eight criteria on a four-point scale, three related to image quality and five to diagnostic confidence. Statistical analysis was performed using Visual Grading Regression (VGR). Observer confidence in statements on image quality was highest for the images that were reconstructed using DDRR (P<0.01 compared to FBP). Iterative reconstruction without DDRR was not superior to FBP. Interobserver variability was significant for statements on image quality (P<0.05) but lower in the diagnostic statements on ischemia and scar. The confidence in assessing ischemia and scar was not different between the reconstruction techniques (P = n.s.). SPECT MPI collected in 9 min, reconstructed with DDRR and AC, produced better image quality than the standard procedure. The observers expressed the highest diagnostic confidence in the DDRR reconstruction. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
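    For context on the iterative reconstruction family compared in this record, the core MLEM update can be sketched in a few lines; OSEM applies the same update to ordered subsets of the projection data. The system matrix and measurements below are made up for illustration:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-Likelihood Expectation Maximization for emission tomography.
    A: (n_bins, n_voxels) system matrix; y: measured counts per bin.
    Each iteration forward-projects, compares to data, and back-projects."""
    x = np.ones(A.shape[1])                 # flat initial activity estimate
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy example: 2 voxels seen by 3 detector bins, noiseless data
A = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
x_true = np.array([4.0, 2.0])
y = A @ x_true
x_hat = mlem(A, y)
```

    With noiseless data and a well-conditioned system matrix the iterates converge to the true activity; attenuation correction and resolution recovery enter by changing how A models the physics.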

  3. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2010-04-01

    OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real-time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomography expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build of a prototype I-OP-FTIR instrument and detail its characterization and testing. System characterization includes radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.

  4. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
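    The reprojection error used here to compare the two calibration methods can be sketched for a simple pinhole model with known intrinsics (the paper's actual evaluation pipeline is not specified in the record; the numbers below are made up):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3D points X (N, 3) into pixel coordinates
    using intrinsics K and extrinsics (R, t)."""
    Xc = X @ R.T + t                   # world frame -> camera frame
    uvw = Xc @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

def reprojection_rmse(K, R, t, X, observed):
    """RMS pixel distance between projected and observed image points."""
    d = project(K, R, t, X) - observed
    return np.sqrt((d ** 2).sum(axis=1).mean())

# Toy check: points projected with the true pose reproject with zero error
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.1, -0.2, 2.0], [0.3, 0.1, 3.0], [-0.2, 0.2, 4.0]])
obs = project(K, R, t, X)
err = reprojection_rmse(K, R, t, X, obs)
```

    In the calibration comparison, (R, t) would come from the automated or manual extrinsic estimate, and a lower RMSE indicates a better fit to the observed image points.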

  5. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors relative to the nadir one; all camera images enter the AT process as individually pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown here (Oblique Imager) is based on five Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens, while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are robustly mounted inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms that had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 exposures of all five cameras and registered GPS/IMU data. This specific mission was flown at two different altitudes with additional cross lines at each flying height. The five images from each exposure position do not overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for the camera calibration. 
In a first step with the help of

  6. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market, and UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently, a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from the test flights.

  7. Tomographic Small-Animal Imaging Using a High-Resolution Semiconductor Camera

    PubMed Central

    Kastis, GA; Wu, MC; Balzer, SJ; Wilson, DW; Furenlid, LR; Stevenson, G; Barber, HB; Barrett, HH; Woolfenden, JM; Kelly, P; Appleby, M

    2015-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64 x 64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm x 5.3 cm x 3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm x 2.5 cm x 0.15 cm slab of CdZnTe connected to a 64 x 64 multiplexer readout via indium-bump bonding. The collimator is 7 mm thick, with a 0.38 mm pitch that matches the detector pixel pitch. We obtained a series of projections by rotating the object in front of the camera. The axis of rotation was vertical and about 1.5 cm away from the collimator face. Mouse holders were made out of acrylic plastic tubing to facilitate rotation and the administration of gas anesthetic. Acquisition times were varied from 60 sec to 90 sec per image for a total of 60 projections at an equal spacing of 6 degrees between projections. We present tomographic images of a line phantom and mouse bone scan and assess the properties of the system. The reconstructed images demonstrate spatial resolution on the order of 1–2 mm. PMID:26568676

  8. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  9. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
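    As a back-of-envelope check on the quoted repeat period, a time code built on a 32-bit counter of seconds (an assumption; the internals of Geo-TimeCode are not described in the record) would roll over after slightly more than 136 years:

```python
# Rollover period of a 32-bit seconds counter. The choice of a 32-bit
# field counting seconds is an assumption made for illustration only.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # Julian year in seconds
rollover_years = 2 ** 32 / SECONDS_PER_YEAR    # roughly 136.1 years
```

    This matches the "slightly more than 136 years" figure, versus a conventional 24-hour SMPTE-style timecode that repeats daily.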

  10. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
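    The Markov-chain step described above, linking per-row curb candidates into an optimal path by dynamic programming, can be sketched generically as a Viterbi-style recursion (the paper's exact cost model is not specified in the record; the scores below are made up):

```python
import numpy as np

def optimal_curb_path(scores, smooth=1.0):
    """Viterbi-style dynamic program: scores[r, c] says how well column c
    of image row r matches a curb pattern; adjacent rows pay
    smooth * |c - c'| for horizontal jumps. Returns one column per row."""
    n_rows, n_cols = scores.shape
    cols = np.arange(n_cols)
    jump = smooth * np.abs(cols[:, None] - cols[None, :])  # transition cost
    best = scores[0].copy()                 # best score ending at each column
    back = np.zeros((n_rows, n_cols), dtype=int)
    for r in range(1, n_rows):
        total = best[None, :] - jump        # total[cur, prev]
        back[r] = total.argmax(axis=1)      # best predecessor per column
        best = scores[r] + total.max(axis=1)
    path = [int(best.argmax())]             # backtrack from the best endpoint
    for r in range(n_rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]

# Toy example: a curb drifting one column to the right per row
scores = np.full((4, 6), -1.0)
for r, c in enumerate([1, 2, 3, 4]):
    scores[r, c] = 5.0
path = optimal_curb_path(scores, smooth=0.5)
```

    The smoothness weight plays the role of the Markov transition model: it favors continuous curb boundaries over rows while still allowing gradual drift.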

  11. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364

  12. Camera/Photometer Results

    NASA Astrophysics Data System (ADS)

    Clifton, K. S.; Owens, J. K.

    1983-04-01

    Efforts continue regarding the analysis of particulate contamination recorded by the Camera/Photometers on STS-2. These systems were constructed by Epsilon Laboratories, Inc. and consisted of two 16-mm photographic cameras, using Kodak Double X film, Type 7222, to make stereoscopic observations of contaminant particles and background. Each was housed within a pressurized canister and operated automatically throughout the mission, making simultaneous exposures on a continuous basis every 150 sec. The cameras were equipped with 18-mm f/0.9 lenses and subtended overlapping 20° fields-of-view. An integrating photometer was used to inhibit the exposure sequences during periods of excessive illumination and to terminate the exposures at preset light levels. During the exposures, a camera shutter operated in a chopping mode in order to isolate the movement of particles for velocity determinations. Calculations based on the preflight film calibration indicate that particles as small as 25 μm can be detected from ideal observing conditions. Current emphasis is placed on the digitization of the photographic data frames and the determination of particle distances, sizes, and velocities. It has been concluded that background brightness measurements cannot be established with any reliability on the STS-2 mission, due to the preponderance of Earth-directed attitudes and the incidence of light reflected from nearby surfaces.

  13. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  14. Dig Hazard Assessment Using a Stereo Pair of Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Trebi-Ollennu, Ashitey

    2012-01-01

    This software evaluates the terrain within reach of a lander's robotic arm for dig hazards using a stereo pair of cameras that are part of the lander's sensor system. A relative level of risk is calculated for a set of dig sectors. There are two versions of this software; one is designed to run onboard a lander as part of the flight software, and the other runs on a PC under Linux as a ground tool that produces the same results generated on the lander, given stereo images acquired by the lander and downlinked to Earth. Onboard dig hazard assessment is accomplished by executing a workspace panorama command sequence. This sequence acquires a set of stereo pairs of images of the terrain the arm can reach, generates a set of candidate dig sectors, and assesses the dig hazard of each candidate dig sector. The 3D perimeter points of candidate dig sectors are generated using configurable parameters. A 3D reconstruction of the terrain in front of the lander is generated using a set of stereo images acquired from the mast cameras. The 3D reconstruction is used to evaluate the dig goodness of each candidate dig sector based on a set of eight metrics. The eight metrics are: 1. The maximum change in elevation in each sector, 2. The elevation standard deviation in each sector, 3. The forward tilt of each sector with respect to the payload frame, 4. The side tilt of each sector with respect to the payload frame, 5. The maximum size of missing data regions in each sector, 6. The percentage of a sector that has missing data, 7. The roughness of each sector, and 8. Monochrome intensity standard deviation of each sector. Each of the eight metrics forms a goodness image layer where the goodness value of each sector ranges from 0 to 1. Goodness values of 0 and 1 correspond to high and low risk, respectively. For each dig sector, the eight goodness values are merged by selecting the lowest one. Including the merged goodness image layer, there are nine goodness image layers for each
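    The merge described above, selecting the lowest of the eight per-sector goodness values, reduces to an element-wise minimum across layers. A minimal sketch with made-up numbers for three metrics and three sectors:

```python
import numpy as np

# Each metric yields one goodness value in [0, 1] per dig sector
# (1 = low risk, 0 = high risk). The values below are illustrative only;
# the flight software uses eight such layers.
layers = np.array([
    [0.9, 0.8, 0.4],   # e.g. maximum change in elevation, per sector
    [0.7, 0.9, 0.9],   # e.g. elevation standard deviation
    [0.8, 0.3, 0.9],   # e.g. forward tilt w.r.t. the payload frame
])
merged = layers.min(axis=0)   # most pessimistic goodness per sector
```

    Taking the minimum means a sector is only as safe as its worst metric, which is the conservative choice for hazard assessment.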

  15. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance through using many small, low-cost cameras with overlapping fields of view. This means significant increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/ vehicular monitoring systems are also potential applications.

  16. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras can be cheaply installed overlooking an area of interest. While computerized single-camera tracking is well developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target; this is called the consistent labelling problem. Khan and Shah (2003) introduced a method that uses field-of-view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also suggested that a plane-induced homography could be used for tracking, but did not describe that method in detail, and their homography-based system would not work if targets enter the scene from only one side of a camera. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding box would produce inaccurate results; the new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field-of-view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified, and the algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works even when not all four field-of-view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
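    The plane-induced homography at the heart of such a tracker can be estimated from the "dropped" foot-point pairs with the standard direct linear transform (DLT). This is a generic sketch, not the paper's implementation; it omits the paper's degeneracy rules and the coordinate normalization that matters for real image coordinates:

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Estimate H with dst ~ H @ [x, y, 1] from >= 4 non-degenerate point
        pairs using the DLT (smallest right singular vector of the design matrix)."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def transfer(H, pt):
        """Map a ground-plane foot location from one camera's view to the other's."""
        q = H @ np.array([pt[0], pt[1], 1.0])
        return q[:2] / q[2]
    ```

    Once `H` is found, consistent labelling reduces to transferring each target's feet location into the other view and matching it to the nearest tracked target there.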

  17. Solid state replacement of rotating mirror cameras

    NASA Astrophysics Data System (ADS)

    Frank, Alan M.; Bartolick, Joseph M.

    2007-01-01

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution, and dynamic range. Rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture developed by Prof. Goji Etoh, dubbed the 'In-situ Storage Image Sensor' or 'ISIS', has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, though there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  18. Optical design of portable nonmydriatic fundus camera

    NASA Astrophysics Data System (ADS)

    Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang

    2016-03-01

    A fundus camera is widely used in the screening and diagnosis of retinal disease; it is a simple and widely deployed piece of medical equipment. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which leaves patients with vertigo and blurred vision; nonmydriatic operation is therefore the trend in fundus camera design. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm light for imaging, and 808 nm light for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to suppress stray light from the center of the cornea. The camera is focused by repositioning the CCD along the optical axis, covering a diopter range of −20 m⁻¹ to +20 m⁻¹.
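    As a rough check on such a focusing scheme, Newtonian through-focus gives the sensor travel needed to compensate a refractive error of D dioptres as approximately Δz = f²·D, where f is the effective focal length of the imaging optics. The focal length below is an assumed placeholder, not a value from the paper:

    ```python
    # Back-of-envelope through-focus sketch (not from the paper): sensor shift
    # needed to compensate a patient refractive error of D dioptres, given
    # imaging optics of effective focal length f (assumed placeholder value).
    f = 0.030                          # assumed 30 mm effective focal length, in metres
    for D in (-20.0, 20.0):            # the stated -20 m^-1 to +20 m^-1 diopter range
        delta_z_mm = f ** 2 * D * 1000.0
        print(f"D = {D:+.0f} m^-1  ->  CCD shift = {delta_z_mm:+.1f} mm")
    ```

    The quadratic dependence on f is why a short effective focal length keeps the required CCD travel mechanically small.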

  19. How the Learning Path and the Very Structure of a Multifloored Environment Influence Human Spatial Memory

    PubMed Central

    Dollé, Laurent; Droulez, Jacques; Bennequin, Daniel; Berthoz, Alain; Thibault, Guillaume

    2015-01-01

    Few studies have explored how humans memorize landmarks in complex multifloored buildings. They have observed that participants memorize an environment either by floors or by vertical columns, influenced by the learning path. However, the influence of the building’s actual structure is not yet known. In order to investigate this influence, we conducted an experiment using an object-in-place protocol in a cylindrical building to contrast with previous experiments which used rectilinear environments. Two groups of 15 participants were taken on a tour with a first person perspective through a virtual cylindrical three-floored building. They followed either a route discovering floors one at a time, or a route discovering columns (by simulated lifts across floors). They then underwent a series of trials, in which they viewed a camera movement reproducing either a segment of the learning path (familiar trials), or performing a shortcut relative to the learning trajectory (novel trials). We observed that regardless of the learning path, participants better memorized the building by floors, and only participants who had discovered the building by columns also memorized it by columns. This expands on previous results obtained in a rectilinear building, where the learning path favoured the memory of its horizontal and vertical layout. Taken together, these results suggest that both learning mode and an environment’s structure influence the spatial memory of complex multifloored buildings. PMID:26770288

  20. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple parallel calibration planes (usually four to seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping between points in 3D space and points in image (camera) space, in both directions, that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate their parameters. Two of the matrices map 3D points in object space to 2D points on the CCD camera image plane; the other two map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to obtain any user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera for 2D measurements or stereo cameras for 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and implemented in MATLAB® (registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high-speed (5 microsecond exposure time
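    The object-to-image half of the four-matrix mapping can be illustrated with a linear stand-in: a single 3x4 projective matrix fitted by DLT from known 3D-2D calibration pairs and applied by matrix multiplication alone. The paper's matrices additionally carry higher-order terms for lens distortion and use multiple calibration planes, which this sketch omits:

    ```python
    import numpy as np

    def fit_projection(pts3d, pts2d):
        """Fit a 3x4 projective matrix P with image ~ P @ [X, Y, Z, 1] by DLT
        from calibration correspondences (needs >= 6 non-coplanar points)."""
        A = []
        for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 4)            # smallest right singular vector

    def project(P, pt3d):
        """3D object point -> 2D image point using matrix multiplication only."""
        q = P @ np.array([*pt3d, 1.0])
        return q[:2] / q[2]
    ```

    Extending the design matrix with products such as X², XY, or radial terms is the same least-squares machinery the paper exploits to add parameters until a desired accuracy is reached.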