Science.gov

Sample records for camera path reconstruction

  1. Nonholonomic catheter path reconstruction using electromagnetic tracking

    NASA Astrophysics Data System (ADS)

    Lugez, Elodie; Sadjadi, Hossein; Akl, Selim G.; Fichtinger, Gabor

    2015-03-01

    Catheter path reconstruction is a necessary step in many clinical procedures, such as cardiovascular interventions and high-dose-rate brachytherapy. To overcome the limitations of standard imaging modalities, electromagnetic tracking has been employed to reconstruct catheter paths. However, tracking errors pose a challenge to accurate path reconstruction. We address this challenge by means of a filtering technique that combines the electromagnetic measurements with the nonholonomic motion constraints of the sensor inside a catheter. The nonholonomic motion model of the sensor within the catheter and the electromagnetic measurement data were integrated using an extended Kalman filter. The performance of our proposed approach was experimentally evaluated using Ascension's 3D Guidance trakSTAR electromagnetic tracker. Sensor measurements were recorded during insertions of an electromagnetic sensor (model 55) along ten predefined ground-truth paths. Our method was implemented in MATLAB and applied to the measurement data. Our reconstruction results were compared to raw measurements as well as filtered measurements provided by the manufacturer. The mean of the root-mean-square (RMS) errors along the ten paths was 3.7 mm for the raw measurements and 3.3 mm with the manufacturer's filters. Our approach effectively reduced the mean RMS error to 2.7 mm. Compared to other filtering methods, our approach improved path reconstruction accuracy by exploiting the sensor's nonholonomic motion constraints in its formulation. Our approach is promising for a variety of clinical procedures involving reconstruction of a catheter path.
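
    A minimal numpy sketch of the predict/update cycle such a filter performs. It uses a plain constant-velocity model as a stand-in for the paper's nonholonomic motion model (in the actual EKF, F and H would be Jacobians of the nonlinear models), and all matrices below are illustrative assumptions rather than the authors' formulation:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle; F, Q, H, R are model assumptions."""
    x = F @ x                           # predict state with the motion model
    P = F @ P @ F.T + Q                 # propagate state covariance
    y = z - H @ x                       # innovation from the EM measurement
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# toy 1D position/velocity example with unit time steps
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity transition
H = np.array([[1.0, 0.0]])              # only position is measured
Q, R = 1e-4 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
for z in (0.10, 0.22, 0.29, 0.41):      # fabricated sensor readings
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
```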

  2. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    This viewgraph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  3. Localization and trajectory reconstruction in surveillance cameras with nonoverlapping views.

    PubMed

    Pflugfelder, Roman; Bischof, Horst

    2010-04-01

    This paper proposes a method that localizes two surveillance cameras and simultaneously reconstructs object trajectories in 3D space. The method is an extension of the Direct Reference Plane method, which formulates the localization and the reconstruction as a system of linear equations that is globally solvable by Singular Value Decomposition. The method's assumptions are static synchronized cameras, smooth trajectories, known camera internal parameters, and the rotation between the cameras in a world coordinate system. The paper describes the method in the context of self-calibrating cameras, where the internal parameters and the rotation can be jointly obtained assuming a man-made scene with orthogonal structures. Experiments with synthetic and real image data show that the method can recover the camera centers with an error less than half a meter even in the presence of a 4-meter gap between the fields of view. PMID:20224125
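
    As a sketch of the linear-algebra step such formulations rely on: the stacked constraints form a homogeneous system A x = 0, whose least-squares solution (up to scale) is the right singular vector belonging to the smallest singular value. The matrix here is a random stand-in for the paper's actual constraint stack:

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A x = 0 subject to ||x|| = 1."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                # row for the smallest singular value

A = np.random.randn(40, 12)      # hypothetical stacked constraints
x = solve_homogeneous(A)
print(np.linalg.norm(A @ x))     # small residual => consistent solution
```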

  4. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets in the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point that is seen by multiple cameras is a common problem. Camera placement, camera calibration, corresponding pixel determination between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we investigate by simulation the effects of camera placement on flare trajectory estimation performance. First, the 3D trajectory of a flare, and of the aircraft that dispenses it, are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, image-plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we use two sources of error. One models the uncertainties in the determination of the camera view vectors, i.e., the camera orientations are measured with noise. The second noise source models the imperfections of the corresponding pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated by triangulation using the corresponding pixel indices, view vectors, and the FOV of the cameras. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
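
    The abstract does not specify the triangulation algorithm, so as one common choice, here is a sketch that returns the midpoint of the shortest segment between the two viewing rays:

```python
import numpy as np

def triangulate_rays(c1, d1, c2, d2):
    """Midpoint triangulation: c1, c2 are camera centers and d1, d2 are
    unit view directions toward the target."""
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # approaches 0 for near-parallel rays
    s = (b * e - c * d) / denom      # parameter along ray 1
    t = (a * e - b * d) / denom      # parameter along ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

c1, d1 = np.zeros(3), np.array([0.0, 0.0, 1.0])
d2 = np.array([-0.5, 0.0, 1.0]) / np.linalg.norm([-0.5, 0.0, 1.0])
print(triangulate_rays(c1, d1, np.array([1.0, 0.0, 0.0]), d2))  # ~[0, 0, 2]
```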

  5. Iterative reconstruction of detector response of an Anger gamma camera.

    PubMed

    Morozov, A; Solovov, V; Alves, F; Domingos, V; Martins, R; Neves, F; Chepel, V

    2015-05-21

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations. PMID:25951792

  6. Iterative reconstruction of detector response of an Anger gamma camera

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Solovov, V.; Alves, F.; Domingos, V.; Martins, R.; Neves, F.; Chepel, V.

    2015-05-01

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations.

  7. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  8. Scalar wave-optical reconstruction of plenoptic camera images.

    PubMed

    Junker, André; Stenau, Tim; Brenner, Karl-Heinz

    2014-09-01

    We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to continuing miniaturization of hardware components it can be assumed that in combination with low-aperture optical systems this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects.

  9. Three-dimensional source reconstruction with a scanned pinhole camera.

    PubMed

    Marks, D L; Brady, D J

    1998-06-01

    We present a simple reconstruction algorithm for three-dimensional (3D) incoherent source distributions imaged by a laterally scanned pinhole camera. We consider digital sampling of multiple pinhole images for 3D reconstruction and implement an experimental demonstration with lateral resolution of 2×10⁻³ rad and longitudinal resolution of approximately 0.14z² m, where z is the object-to-pinhole distance in meters.

  10. Analytical reconstruction formula for one-dimensional Compton camera

    SciTech Connect

    Basko, R.; Zeng, G.L.; Gullberg, G.T.

    1996-12-31

    The Compton camera has been proposed as an alternative to the Anger camera in SPECT. The advantage of the Compton camera is its high geometric efficiency due to electronic collimation. The Compton camera collects projections that are integrals over cone surfaces. Although some progress has been made toward image reconstruction from cone projections, at present no filtered backprojection algorithm exists. This paper investigates a simpler 2D version of the imaging problem. An analytical formula is developed for 2D reconstruction from data acquired by a 1D Compton camera that consists of two linear detectors, one behind the other. Coincidence photon detection allows the localization of the 2D source distribution to two lines in the shape of a "V" with the vertex on the front detector. A set of "V" projection data can be divided into subsets whose elements can be viewed as line-integrals of the original image added with its mirrored shear transformation. If the detector has infinite extent, reconstruction of the original image is possible using data from only one such subset. Computer simulations were performed to verify the newly developed algorithm.

  11. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    PubMed

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.
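
    In the same spirit, a minimal convex-optimization sketch of L1 path smoothing for one coordinate of a camera path; the weights and this single-pass cvxpy formulation are assumptions, not the paper's two-pass multi-constraint method:

```python
import numpy as np
import cvxpy as cp

def l1_smooth(raw, lam1=1.0, lam2=10.0):
    """Stay close to the raw path while penalizing first and second
    differences with L1 norms, which favors static and constant-velocity
    segments (the hallmark of professional-looking camera paths)."""
    x = cp.Variable(len(raw))
    cost = (cp.sum_squares(x - raw)
            + lam1 * cp.norm1(cp.diff(x, 1))
            + lam2 * cp.norm1(cp.diff(x, 2)))
    cp.Problem(cp.Minimize(cost)).solve()
    return x.value

raw = np.cumsum(0.5 * np.random.randn(200))  # jittery synthetic path
smooth = l1_smooth(raw)
```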

  12. Real-Time Camera Guidance for 3D Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Schindler, F.; Förstner, W.

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to results that are accurate compared to ground truth after only a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, and online flight planning of unmanned aerial vehicles.

  13. Superficial vessel reconstruction with a multiview camera system.

    PubMed

    Marreiros, Filipe M M; Rossitti, Sandro; Karlsson, Per M; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    We aim to reconstruct the superficial vessels of the brain; ultimately, these will serve to guide deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three monochrome complementary metal-oxide semiconductor cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation on virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are [Formula: see text]. PMID:26759814

  14. Robust 3D reconstruction with an RGB-D camera.

    PubMed

    Wang, Kangkan; Zhang, Guofeng; Bao, Hujun

    2014-11-01

    We present a novel 3D reconstruction approach using a low-cost RGB-D camera such as the Microsoft Kinect. Compared with previous methods, our scanning system can work well in challenging cases where there are large repeated textures and significant missing-depth problems. For robust registration, we propose to utilize both visual and geometric features and combine structure-from-motion (SFM) techniques to enhance the robustness of feature matching and camera pose estimation. In addition, a novel prior-based multi-candidate RANSAC is introduced to efficiently estimate the model parameters and significantly speed up camera pose estimation under multiple correspondence candidates. Even when serious depth loss occurs, our method can still successfully register all frames together. Loop closures can also be robustly detected and handled to eliminate drift. The missing geometry can be completed by combining multiview stereo and mesh deformation techniques. A variety of challenging examples demonstrate the effectiveness of the proposed approach.
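
    The prior-based multi-candidate variant is the paper's contribution; for orientation, here is the standard RANSAC loop it builds on, with the prior weighting omitted (all names and the line-fitting usage are illustrative):

```python
import numpy as np

def ransac(data, fit, residuals, n_sample, thresh, iters=500, seed=0):
    """Generic RANSAC: data is (n, d); fit maps a subset to a model;
    residuals maps (model, data) to per-point errors."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(data), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(data), size=n_sample, replace=False)
        model = fit(data[idx])
        inliers = residuals(model, data) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit(data[best_inliers]), best_inliers  # refit on consensus set

# usage: robust line fit y = a*x + b with gross outliers
pts = np.column_stack([np.linspace(0, 1, 100),
                       2.0 * np.linspace(0, 1, 100) + 1.0])
pts[::10, 1] += 5.0
line_fit = lambda s: np.polyfit(s[:, 0], s[:, 1], 1)
line_res = lambda m, s: np.abs(np.polyval(m, s[:, 0]) - s[:, 1])
model, inliers = ransac(pts, line_fit, line_res, n_sample=2, thresh=0.1)
```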

  15. Plasma tomographic reconstruction from tangentially viewing camera with background subtraction

    SciTech Connect

    Odstrčil, M.; Mlynář, J.; Weinzettl, V.; Háček, P.; Verdoolaege, G.; Berta, M.

    2014-01-15

    Light reflections are one of the main and often underestimated issues in plasma emissivity reconstruction in the visible spectral range. Metallic and other specular components of a tokamak generate systematic errors in the optical measurements that could lead to wrong interpretation of the data. Our analysis is performed on data from the COMPASS tokamak, a D-shaped tokamak with a specular metallic vessel and the possibility of H-mode plasma. Data from a fast visible-light camera were used for tomographic reconstruction with background-reflection subtraction to study the plasma boundary. In this article, we show that despite the highly specular tokamak wall, it is possible to obtain a realistic reconstruction. The developed algorithm shows robust results despite systematic errors in the optical measurements and calibration. The motivation is to obtain an independent estimate of the plasma boundary shape.

  16. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  17. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to hold back the water, construct the tunnel intake inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect to the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e., launching, dragging, water injection, and sinking. To increase construction safety, photogrammetry is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e., performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation shows flexibility for dynamic motion analysis, which is easier and more efficient.

  18. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  19. Measurement of non-common path static aberrations in an interferometric camera by phase diversity

    NASA Astrophysics Data System (ADS)

    Yan, Zhaojun; Herbst, Thomas M.; Yang, Pengqian; Bizenberger, Peter; Zhang, Xianyu; Conrad, Albert R.; Bertram, Thomas; Kuerster, Martin; Rix, Hans-Walter; Li, Xinyang; Rao, Changhui

    2012-10-01

    LINC-NIRVANA (LN) is a near-infrared image-plane beam combiner with advanced, multi-conjugated adaptive optics for the Large Binocular Telescope. Non-common path aberrations (NCPAs) between the near-infrared science camera and the wave-front sensor (WFS) are unseen by the WFS and therefore are not corrected in closed loop. This would prevent LN from achieving its ultimate performance. We use a modified phase diversity technique to measure the internal optical static aberrations and hence the NCPAs. Phase diversity is a methodology for estimating wave-front aberrations by solving an unconstrained optimization problem from multiple images whose pupil phases differ from one another by a known amount. We conduct computer simulations of the reconstruction of aberrations of an optical system with the phase diversity method. In the reconstruction, we fit the wave-front to Zernike polynomials to reduce the number of variables. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm is very well suited to phase diversity (PD) due to its good performance in solving large-scale optimization problems. The main constraint for the implementation of PD for LN is that we cannot add extra components to the internal interferometric camera imaging system to obtain in-focus and defocused images. In this paper, we introduce a new method, namely shifting the focal-plane source rather than the detector, to overcome this constraint. Experiments were performed to test and verify this method, and the results are presented and discussed. The study shows that the method is very flexible, and the paper gives practical guidelines for the application of phase diversity methods to characterize adaptive optics systems.
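
    A toy sketch of the phase-diversity idea for a point source: pupil phases are expanded in a few aberration modes, PSFs are formed for the in-focus and deliberately defocused channels, and L-BFGS fits the coefficients to match both images. The mode basis, the diversity strength, and the point-source assumption are simplifications of the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

N = 64
yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
r2 = xx**2 + yy**2
pupil = (r2 <= 1.0).astype(float)
# three low-order aberration modes on the unit pupil (defocus, astigmatisms)
modes = np.stack([2 * r2 - 1, xx**2 - yy**2, 2 * xx * yy])
diversity = 3.0 * (2 * r2 - 1)       # known extra defocus phase (radians)

def psf(phase):
    """Point-source image: squared modulus of the pupil field's FFT."""
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase))))**2

def cost(c, img0, img1):
    phase = np.tensordot(c, modes, axes=1)
    return (np.sum((psf(phase) - img0)**2)
            + np.sum((psf(phase + diversity) - img1)**2))

true_c = np.array([0.5, -0.3, 0.2])  # hypothetical unknown aberration
img0 = psf(np.tensordot(true_c, modes, axes=1))
img1 = psf(np.tensordot(true_c, modes, axes=1) + diversity)
fit = minimize(cost, np.zeros(3), args=(img0, img1), method="L-BFGS-B")
print(fit.x)                          # should approach true_c
```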

  1. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    PubMed

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique for calibrating camera motion in basketball videos. Our method transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique. PMID:27504515

  2. Filtered backprojection proton CT reconstruction along most likely paths

    SciTech Connect

    Rit, Simon; Dedes, George; Freud, Nicolas; Sarrut, David; Letang, Jean Michel

    2013-03-15

    Purpose: Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses, but the spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods: The energy loss of protons is first binned in several proton radiographs at different distances to the proton source to exploit the depth-dependency of the estimate of the most likely path. This process is named distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning in order to propagate into the pCT image the best achievable spatial resolution in the proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms. Results: The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with depth in the scanned object, but it was always better than that of previous FBP algorithms assuming straight-line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7-1.6 mm, compared to 1.0-2.4 mm at best with a straight-line path assumption. The improvement was strongly enhanced in shorter 200° scans. Conclusions: Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution, combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms, makes this new algorithm a candidate of choice for clinical pCT.
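
    For contrast, a few lines showing the straight-line FBP baseline such pCT work improves on, using scikit-image's Radon tools (a generic sketch, not the distance-driven binning algorithm itself):

```python
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
phantom[40:88, 40:88] = 1.0                   # toy density map
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=angles)           # straight-line projections
recon = iradon(sino, theta=angles, filter_name="ramp")
print(np.abs(recon - phantom).mean())         # mean reconstruction error
```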

  3. Stereo Reconstruction of Atmospheric Cloud Surfaces from Fish-Eye Camera Images

    NASA Astrophysics Data System (ADS)

    Katai-Urban, G.; Otte, V.; Kees, N.; Megyesi, Z.; Bixel, P. S.

    2016-06-01

    In this article a method for reconstructing atmospheric cloud surfaces using a stereo camera system is presented. The proposed camera system utilizes fish-eye lenses in a flexible wide baseline camera setup. The entire workflow from the camera calibration to the creation of the 3D point set is discussed, but the focus is mainly on cloud segmentation and on the image processing steps of stereo reconstruction. Speed requirements, geometric limitations, and possible extensions of the presented method are also covered. After evaluating the proposed method on artificial cloud images, this paper concludes with results and discussion of possible applications for such systems.

  4. Influence of camera calibration conditions on the accuracy of 3D reconstruction.

    PubMed

    Poulin-Girard, Anne-Sophie; Thibault, Simon; Laurendeau, Denis

    2016-02-01

    For stereoscopic systems designed for metrology applications, the accuracy of camera calibration dictates the precision of the 3D reconstruction. In this paper, the impact of various calibration conditions on the reconstruction quality is studied using a virtual camera calibration technique and the design file of a commercially available lens. This technique enables the study of the statistical behavior of the reconstruction task in selected calibration conditions. The data show that the mean reprojection error should not always be used to evaluate the performance of the calibration process and that a low quality of feature detection does not always lead to a high mean reconstruction error.

  5. Effects of uncertainty in camera geometry on three-dimensional catheter reconstruction from biplane fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Dietz, Anthony; Kynor, David B.; Friets, Eric; Triedman, John; Hammer, Peter

    2002-05-01

    Clinical procedures that rely on biplane x-ray images for three-dimensional (3-D) information may be enhanced by three-dimensional reconstructions. However, the accuracy of reconstructed images is dependent on the uncertainty associated with the parameters that define the geometry of the camera system. In this paper, we use a numerical simulation to examine the effect of these uncertainties and to determine the limits required for adequate three-dimensional reconstruction. We then test our conclusions with images of a calibration phantom recorded using a clinical system. A set of reconstruction routines, developed for a cardiac mapping system, were used in this evaluation. The routines include procedures for correcting image distortion and for automatically locating catheter electrodes. Test images were created using a numerical simulation of a biplane x-ray projection system. The reconstruction routines were then applied using accurate and perturbed camera geometries and error maps were produced. Our results indicate that useful catheter reconstructions are possible with reasonable bounds on the uncertainty of camera geometry provided the locations of the camera isocenters are accurate. The results of this study provide a guide for the specification of camera geometry display systems and for researchers evaluating possible methodologies for determining camera geometry.

  6. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of the light rays it collects, allowing for multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, the plenoptic camera shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide systems in adaptive optics to make intelligent analysis and corrections.

  7. 3D-guided CT reconstruction using time-of-flight camera

    NASA Astrophysics Data System (ADS)

    Ismail, Mahmoud; Taguchi, Katsuyuki; Xu, Jingyan; Tsui, Benjamin M. W.; Boctor, Emad M.

    2011-03-01

    We propose the use of a time-of-flight (TOF) camera to obtain the patient's body contour in a 3D-guided image reconstruction scheme for CT and C-arm imaging systems with truncated projections. In addition to pixel intensity, a TOF camera provides the 3D coordinates of each point in the captured scene with respect to the camera coordinates. Information from the TOF camera was used to obtain a digitized surface of the patient's body. The digitization points are transformed to X-ray detector coordinates by registering the two coordinate systems. A set of points corresponding to the slice of interest is segmented to form a 2D contour of the body surface. The Radon transform is applied to the contour to generate a 'trust region' for the projection data. The generated 'trust region' is integrated as an input to augment the projection data: it is used to estimate the truncated, unmeasured projections using linear interpolation. Finally, the image is reconstructed using the combination of the estimated and the measured projection data. The proposed method is evaluated using a physical phantom. Projection data for the phantom were obtained using a C-arm system. Significant improvement in the reconstructed image quality near the truncation edges was observed using the proposed method as compared to that without truncation correction. This work shows that the proposed 3D-guided CT image reconstruction using a TOF camera represents a feasible solution to the projection data truncation problem.
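
    A small sketch of the completion step as described: unmeasured detector samples inside the contour-derived 'trust region' are filled by linear interpolation from the measured samples. The function names and the assumption that projections fall to zero at the contour boundary are illustrative:

```python
import numpy as np

def complete_truncated(u_meas, p_meas, u_lo, u_hi, n_out=512):
    """Extend a laterally truncated projection over [u_lo, u_hi], the
    detector interval allowed by the body-contour 'trust region'.
    u_meas must be ascending; the projection is assumed to reach zero
    at the trust-region boundary."""
    u = np.concatenate(([u_lo], u_meas, [u_hi]))
    p = np.concatenate(([0.0], p_meas, [0.0]))
    u_full = np.linspace(u_lo, u_hi, n_out)
    return u_full, np.interp(u_full, u, p)
```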

  8. Generic camera model and its calibration for computational integral imaging and 3D reconstruction.

    PubMed

    Li, Weiming; Li, Youfu

    2011-03-01

    Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, the images present higher fidelity and preserve more high spatial frequency components. To the best of our knowledge, this is the first attempt in applying a generic camera model to an II system.

  9. Towards direct reconstruction from a gamma camera based on Compton scattering

    SciTech Connect

    Cree, M.J.; Bones, P.J. (Dept. of Electrical and Electronic Engineering)

    1994-06-01

    The Compton scattering camera (sometimes called the electronically collimated camera) has been shown by others to have the potential to better the photon counting statistics and the energy resolution of the Anger camera for imaging in SPECT. By using coincident detection of Compton scattering events on two detecting planes, a photon can be localized to having been sourced on the surface of a cone. New algorithms are needed to achieve fully three-dimensional reconstruction of the source distribution from such a camera. If a complete set of cone-surface projections is collected over an infinitely extending plane, it is shown that the reconstruction problem is not only analytically solvable, but also overspecified in the absence of measurement uncertainties. Two approaches to direct reconstruction are proposed, both based on the photons which travel perpendicularly between the detector planes. Results of computer simulations are presented which demonstrate the ability of the algorithms to achieve useful reconstructions in the absence of measurement uncertainties (other than those caused by quantization). The modifications likely to be required in the presence of realistic measurement uncertainties are discussed.

  10. Interactive alignment and image reconstruction for wafer-level multi-aperture camera systems

    NASA Astrophysics Data System (ADS)

    Oberdörster, Alexander; Brückner, Andreas; Lensch, Hendrik P. A.

    2014-09-01

    Assembly of miniaturized high-resolution cameras is typically carried out by active alignment. The sensor image is constantly monitored while the lens stack is adjusted. When sharpness is acceptable in all regions of the image, the lens position over the sensor is fixed. For multi-aperture cameras, this approach is not sufficient. During prototyping, it is beneficial to see the complete reconstructed image, assembled from all optical channels. However, typical reconstruction algorithms are high-quality offline methods that require calibration. As the geometric setup of the camera repeatedly changes during assembly, this would require frequent re-calibration. We present a real-time algorithm for an interactive preview of the reconstructed image during camera alignment. With this algorithm, systematic alignment errors can be tracked and corrected during assembly. Known imperfections of optical components can also be included in the reconstruction. Finally, the algorithm easily maps to very simple GPU operations, making it ideal for applications in mobile devices where power consumption is critical.

  11. A fast 3D reconstruction system with a low-cost camera accessory

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-06-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.

  12. A fast 3D reconstruction system with a low-cost camera accessory.

    PubMed

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.

  13. A fast 3D reconstruction system with a low-cost camera accessory

    PubMed Central

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-01-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407
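
    A minimal numpy sketch of the normal/albedo recovery at the heart of photometric stereo, under the Lambertian assumption implicit above; array shapes and names are illustrative. With the four-LED accessory described, k = 4 and the system is solved in a least-squares sense:

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (k, h, w) grayscale captures; lights: (k, 3) unit light
    directions. Solves I = L @ (albedo * normal) per pixel."""
    k, h, w = images.shape
    I = images.reshape(k, -1)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

    A height map is then typically obtained by integrating the recovered normal field, e.g. with a Poisson solver.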

  14. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence-point problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. To increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye, the so-called camera-eye system, is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.

  15. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    NASA Astrophysics Data System (ADS)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction, with metric content, of a submerged area where objects and structures of archaeological interest are found can play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest to archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying underwater photogrammetric techniques for several years using underwater digital cameras and, in this paper, low-cost (off-the-shelf) digital cameras. Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD, and GoPro HERO 2. The experimentation aimed to evaluate the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and to analyze the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the models derived from the different cameras were compared. The different potentialities of the cameras are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea-bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  16. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background: The ability to discriminate between two similar or progressively dissimilar colours is important for many animals, as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have recently been developed to produce image spectrophotometers to recover reflectance spectra at individual pixel locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings: (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples, whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently from their chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion: (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision, like the perception of certain types of camouflage and colour constancy, where multiple narrow-band sensors increase resolution. PMID:25965264
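
    As a sketch of one standard way to build such an RGB-to-spectrum estimator, a ridge-regularized linear map can be trained from paired camera responses and measured spectra (the paper's exact algorithm may differ; the names and the regularization weight are assumptions):

```python
import numpy as np

def fit_rgb_to_spectrum(R_train, S_train, lam=1e-3):
    """R_train: (n, 3) linearized camera responses; S_train: (n, n_bands)
    reference reflectance spectra. Returns W with spectrum ~= W @ rgb."""
    RtR = R_train.T @ R_train + lam * np.eye(3)
    return S_train.T @ R_train @ np.linalg.inv(RtR)

# usage: estimate a reflectance spectrum from one RGB triple
# W = fit_rgb_to_spectrum(R_train, S_train)
# spectrum_est = W @ np.array([r, g, b])
```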

  17. First use of mini gamma cameras for intra-operative robotic SPECT reconstruction.

    PubMed

    Matthies, Philipp; Sharma, Kanishka; Okur, Aslı; Gardiazabal, José; Vogel, Jakob; Lasser, Tobias; Navab, Nassir

    2013-01-01

    Different types of nuclear imaging systems have been used in the past, starting with pre-operative gantry-based SPECT systems and gamma cameras for 2D imaging of radioactive distributions. The main applications are concentrated on diagnostic imaging, since traditional SPECT systems and gamma cameras are bulky and heavy. With the development of compact gamma cameras with good resolution and high sensitivity, it is now possible to use them without a fixed imaging gantry. Mounting the camera onto a robot arm solves the weight issue, while also providing a highly repeatable and reliable acquisition platform. In this work we introduce a novel robotic setup performing scans with a mini gamma camera, along with the required calibration steps, and show the first SPECT reconstructions. The results are extremely promising, both in terms of image quality as well as reproducibility. In our experiments, the novel setup outperformed a commercial fhSPECT system, reaching accuracies comparable to state-of-the-art SPECT systems.

  18. Semantically Documenting Virtual Reconstruction: Building a Path to Knowledge Provenance

    NASA Astrophysics Data System (ADS)

    Bruseker, G.; Guillem, A.; Carboni, N.

    2015-08-01

    The outcomes of virtual reconstructions of archaeological monuments are not just images for aesthetic consumption but rather present a scholarly argument and decision making process. They are based on complex chains of reasoning grounded in primary and secondary evidence that enable a historically probable whole to be reconstructed from the partial remains left in the archaeological record. This paper will explore the possibilities for documenting and storing in an information system the phases of the reasoning, decision and procedures that a modeler, with the support of an archaeologist, uses during the virtual reconstruction process and how they can be linked to the reconstruction output. The goal is to present a documentation model such that the foundations of evidence for the reconstructed elements, and the reasoning around them, are made not only explicit and interrogable but also can be updated, extended and reused by other researchers in future work. Using as a case-study the reconstruction of a kitchen in a Roman domus in Grand, we will examine the necessary documentation requirements, and the capacity to express it using semantic technologies. For our study we adopt the CIDOC-CRM ontological model, and its extensions CRMinf, CRMBa and CRMgeo as a starting point for modelling the arguments and relations.

  19. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    SciTech Connect

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-12-31

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered-subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with ¹⁸F-fluoride suggest that 3D OSEM can improve the image quality of a small animal PET camera.
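
    The EM flavor referenced here iterates multiplicative updates; below is a compact MLEM sketch with a generic system matrix (OSEM simply applies the same update over subsets of the projection rows). The matrix A and the measured counts are placeholders:

```python
import numpy as np

def mlem(A, counts, n_iter=20):
    """A: (n_bins, n_voxels) system matrix; counts: measured projections."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                          # forward project
        ratio = counts / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```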

  1. Quantitative underwater 3D motion analysis using submerged video cameras: accuracy analysis and trajectory reconstruction.

    PubMed

    Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L

    2013-01-01

    In this study we investigate the applicability of underwater 3D motion capture based on submerged video cameras, in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment, and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle-fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and far superior to the classical DLT results (9.74 mm). Among all the swimmers, the hands' trajectories of the expert swimmer in the style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
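
    A compact sketch of the classical DLT step mentioned above: each 3D-to-2D correspondence contributes two rows to a homogeneous system, and the 3x4 projection matrix is the SVD null vector (at least six points are needed; calibration-quality code would also normalize the coordinates first):

```python
import numpy as np

def dlt_projection_matrix(X3d, x2d):
    """Estimate a 3x4 projection matrix from >= 6 correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        P = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([P, np.zeros(4), -u * P]))
        rows.append(np.concatenate([np.zeros(4), P, -v * P]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)     # null vector, defined up to scale
```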

  2. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle-avoidance stages are essential to a safe soft landing, so a precise spacecraft trajectory in these stages is of great significance for verifying the orbital control strategy, optimizing orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last only about 25 s, it is difficult to resolve the spacecraft's subtle movements with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3, owing to technical advantages such as independence from the lunar gravity field and the spacecraft kinetic model, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, spanning about 9 s, using Single Image Space Resection (SISR). The results show that CE-3's subtle movements during the hovering stage emerge clearly from the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.
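
    Single Image Space Resection recovers the exterior orientation of one frame from known ground control points; in computer-vision terms this is a perspective-n-point (PnP) problem. A hypothetical, self-contained sketch on synthetic data (OpenCV's generic solver; none of the numbers are CE-3 mission values):

    ```python
    import numpy as np
    import cv2

    K = np.array([[800., 0., 512.],        # made-up intrinsic matrix
                  [0., 800., 512.],
                  [0., 0., 1.]])
    rng = np.random.default_rng(0)
    ground = rng.uniform([-50, -50, 0], [50, 50, 5], (8, 3))   # control points (m)
    rvec_true = np.array([0.05, -0.02, 0.01])                  # true attitude
    tvec_true = np.array([1.0, -2.0, 100.0])                   # ~100 m altitude
    img_pts, _ = cv2.projectPoints(ground, rvec_true, tvec_true, K, None)

    ok, rvec, tvec = cv2.solvePnP(ground, img_pts, K, None)    # space resection
    R, _ = cv2.Rodrigues(rvec)
    print(ok, (-R.T @ tvec).ravel())   # recovered camera position in ground frame
    ```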

  3. Experimental study of the influence of refraction on underwater three-dimensional reconstruction using the SVP camera model.

    PubMed

    Kang, Lai; Wu, Lingda; Yang, Yee-Hong

    2012-11-01

    In an underwater imaging system, a perspective camera is often placed outside a tank or in waterproof housing behind a flat glass window. Refraction occurs when a light ray passes through the water-glass and glass-air interfaces, rendering the conventional multiple view geometry based on the single viewpoint (SVP) camera model invalid. While most recent underwater vision studies focus on the challenging topic of calibrating such systems, no previous work has systematically studied the influence of refraction on underwater three-dimensional (3D) reconstruction. This paper demonstrates the possibility of using the SVP camera model in underwater 3D reconstruction through theoretical analysis of refractive distortion and through simulations. The performance of the SVP camera model in multiview underwater 3D reconstruction is then quantitatively evaluated. The experimental results reveal a rather surprising, useful, yet overlooked fact: the SVP camera model with radial distortion correction and focal length adjustment can compensate for refraction and achieve high accuracy in multiview underwater 3D reconstruction (within 0.7 mm for an object of dimension 200 mm), comparable to the results of land-based systems. This observation justifies the use of the SVP camera model in underwater applications for reconstructing reliable 3D scenes. Our results can be used to guide the selection of system parameters in the design of an underwater 3D imaging setup.
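
    The compensation mechanism the paper identifies can be written down directly: the low-order radial terms of the standard distortion model absorb most of the (approximately radial) refractive displacement. A minimal sketch of that model, with the coefficients left as illustrative free parameters:

    ```python
    import numpy as np

    def radial_distort(xn, k1, k2):
        """Apply the two-coefficient radial model to normalized image
        coordinates xn (n, 2): x_d = x_n * (1 + k1*r^2 + k2*r^4).
        Fitting k1, k2 (plus an adjusted focal length) during calibration
        is what lets the SVP model soak up refractive distortion."""
        r2 = np.sum(xn**2, axis=1, keepdims=True)
        return xn * (1.0 + k1 * r2 + k2 * r2**2)
    ```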

  4. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    PubMed

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blend skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
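
    The LBS step at the heart of the per-frame pose estimation is a weighted blend of per-bone rigid transforms. A generic numpy sketch (not the authors' code; B bones, transforms given as 3x4 [R | t] blocks):

    ```python
    import numpy as np

    def linear_blend_skinning(vertices, weights, transforms):
        """v'_n = sum_b weights[n, b] * (R_b @ v_n + t_b).

        vertices   : (n, 3) rest-pose positions
        weights    : (n, B) skinning weights, each row summing to 1
        transforms : (B, 3, 4) per-bone rigid transforms [R | t]
        """
        vh = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous
        per_bone = np.einsum('bij,nj->nbi', transforms, vh)       # (n, B, 3)
        return np.einsum('nb,nbi->ni', weights, per_bone)         # blended
    ```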

  5. Linear stratified approach using full geometric constraints for 3D scene reconstruction and camera calibration.

    PubMed

    Kim, Jae-Hean; Koo, Bon-Ki

    2013-02-25

    This paper presents a new linear framework for obtaining 3D scene reconstruction and camera calibration simultaneously from uncalibrated images using scene geometry. Our strategy uses the constraints of parallelism, coplanarity, collinearity, and orthogonality, which can frequently be found in general man-made scenes. This approach gives more stable results with fewer images and obtains them with only linear operations. It is shown that all the geometric constraints used independently in previous works can be implemented easily in the proposed linear method. We also study situations that cannot be handled by previous approaches and show that the proposed method, which can deal with these cases, is more flexible in use. The proposed method uses a stratified approach, in which affine reconstruction is performed first, followed by metric reconstruction. In this procedure, the additional constraints newly extracted in this paper play an important role for affine reconstruction in practical situations.

  6. Fast 3D-EM reconstruction using Planograms for stationary planar positron emission mammography camera.

    PubMed

    Motta, A; Guerra, A Del; Belcari, N; Moehrs, S; Panetta, D; Righi, S; Valentini, D

    2005-12-01

    At the University of Pisa we are building a PEM prototype, the YAP-PEM camera, consisting of two opposing 6 × 6 × 3 cm³ detector heads of 30 × 30 YAP:Ce finger crystals, 2 × 2 × 30 mm³ each. The camera will be equipped with breast compressors, and the acquisition will be stationary. Compared with a whole-body PET scanner, a planar Positron Emission Mammography (PEM) camera allows better, easier and more flexible positioning around the breast in the vicinity of the tumor: this increases the sensitivity and solid-angle coverage, and reduces cost. To avoid software rejection of data during the reconstruction, which would reduce sensitivity, we adopted a 3D-EM reconstruction that uses all of the collected Lines Of Response (LORs). This avoids the PSF distortion introduced by data rebinning procedures and/or Fourier methods. The traditional 3D-EM reconstruction requires computing the LOR-voxel correlation matrix, or probability matrix {p(ij)}, several times, and is therefore highly time-consuming. We use the sparseness and symmetry properties of the matrix {p(ij)} to perform fast 3D-EM reconstruction. Geometrically, a 3D grid of cubic voxels (the FOV) is crossed by several divergent 3D line sets (LORs). Symmetries occur when tracing different LORs produces the same p(ij) value. Parallel LORs of different sets cross the FOV in the same way, and the repetition of p(ij) values depends on the ratio between the tube and voxel sizes; by optimizing this ratio, the occurrence of symmetries is increased. We identify a nucleus of symmetry of LORs: for each set of symmetrical LORs, just one LOR is placed in the nucleus, while the others lie outside. All possible p(ij) values are obtainable by tracking only the LORs of this nucleus; the voxel coordinates of all other LORs are given by simple translation rules. Before performing the reconstruction, we trace the LORs of the nucleus to find the intersecting voxels, whose p(ij) values are computed and stored.

  7. Moving beyond flat earth: dense 3D scene reconstruction from a single FL-LWIR camera

    NASA Astrophysics Data System (ADS)

    Stone, K.; Keller, J. M.; Anderson, D. T.

    2013-06-01

    In previous work, an automatic detection system for locating buried explosive hazards in forward-looking longwave infrared (FL-LWIR) and forward-looking ground penetrating radar (FL-GPR) data was presented. This system consists of a prescreener, built from an ensemble of trainable size-contrast filters, coupled with a secondary classification step that extracts cell-structured image-space features, such as local binary patterns (LBP), histograms of oriented gradients (HOG), and edge histogram descriptors (EHD), from multiple looks and classifies the resulting feature vectors using a support vector machine. Previously, this system performed image-space to UTM coordinate mapping under a flat-earth assumption, which limited its applicability to flat terrain and short standoff distances. This paper demonstrates a technique for dense 3D scene reconstruction from a single vehicle-mounted FL-LWIR camera. The technique utilizes multiple views and standard stereo vision algorithms such as polar rectification and optimal correction. Results for the detection algorithm using this 3D scene reconstruction approach on data from recent collections at an arid US Army test site are presented. These results are compared to those obtained under the flat-earth assumption, with special focus on rougher terrain and longer standoff distances than in previous experiments. The most recent collection also allowed comparison between uncooled and cooled FL-LWIR cameras for buried explosive hazard detection.

  8. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera.

    PubMed

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho

    2015-12-10

    In this paper, we develop a real-time depth-controllable integral imaging system. With a high-frame-rate camera and a focus-controllable lens, light fields from various depth ranges can be captured. Depending on the image plane of the light field camera, objects in virtual and real space are recorded simultaneously. The captured light field information is converted to elemental images in real time without pseudoscopic problems. In addition, we derive the characteristics and limitations of the light field camera as a 3D broadcasting capture device using precise geometric optics. With further analysis, the implemented system provides more accurate light fields than existing devices, without depth distortion. We adopt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method demonstrates the possibility of a handheld real-time 3D broadcasting system that is cheaper and more practical than previous methods.

  9. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on a binocular stereo vision system composed of two unattached cameras, used to initialize the reconstruction process. Afterwards, the second camera of the stereo system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated using a 3D pattern with known properties. Then, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental: at each insertion of a new image, the camera projection matrix is estimated using the 3D information already computed, and new 3D points are recovered by triangulating the interest points matched between the inserted image and the previous one. To refine the new projection matrix and the new 3D points, a local bundle adjustment is performed. Once all projection matrices have been estimated, the matches between consecutive images detected, and a sparse Euclidean 3D reconstruction obtained, the match propagation algorithm, well suited to this kind of camera movement, is applied to pairs of consecutive images to increase the number of matches and densify the reconstruction. The experimental results show the power and robustness of the proposed approach.
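
    The triangulation step used throughout this pipeline has a standard linear (DLT) form. A minimal sketch for one matched point in two views, under the usual assumption that the projection matrices are known:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear triangulation of one 3D point from two views.
        P1, P2 : 3x4 projection matrices; x1, x2 : matched (u, v) points."""
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]        # inhomogeneous 3D coordinates
    ```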

  10. A Trajectory and Orientation Reconstruction Method for Moving Objects Based on a Moving Monocular Camera

    PubMed Central

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-01-01

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition for this method to have a unique solution is provided. An extended application of the method is to reconstruct not only the 3D trajectory but also the orientation of the moving object, which could not be obtained by PnP methods due to a lack of features. It is a breakthrough improvement in that it develops intersection measurement from the traditional “point intersection” to “trajectory intersection” in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition required for the existence of a definite solution is derived from equivalence relations between the orders of the moving-trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that the method applies not only to objects moving along a straight line, a conic or another simple trajectory, but also gives good results for more complicated trajectories, making it widely applicable. PMID:25760053

  12. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
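
    The geometric core of the back-projection is deciding which voxels lie close enough to each event's cone surface. The sketch below replaces the paper's per-slice solution matrix and threshold function with a simpler angular tolerance, so it illustrates the idea rather than reproduces the algorithm:

    ```python
    import numpy as np

    def backproject_cone(voxels, apex, axis, theta, tol=0.02):
        """Mark voxels near one Compton event cone.

        voxels : (n, 3) voxel-center coordinates (float)
        apex   : (3,) cone apex (first interaction position)
        axis   : (3,) unit cone axis; theta : scattering angle (rad)
        tol    : angular tolerance standing in for the threshold function
        """
        d = voxels - apex
        d /= np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-12)
        ang = np.arccos(np.clip(d @ axis, -1.0, 1.0))
        return np.abs(ang - theta) < tol    # boolean back-projection mask
    ```

    Accumulating such masks over all detector events would yield the three-dimensional back-projection image described above.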

  13. The Effect of Tissue Inhomogeneities on the Accuracy of Proton Path Reconstruction for Proton Computed Tomography

    SciTech Connect

    Wong, Kent; Erdelyi, Bela; Schulte, Reinhard; Bashkirov, Vladimir; Coutrakon, George; Sadrozinski, Hartmut; Penfold, Scott; Rosenfeld, Anatoly

    2009-03-10

    Maintaining a high degree of spatial resolution in proton computed tomography (pCT) is a challenge due to the statistical nature of the proton path through the object. Recent work has focused on the formulation of the most likely path (MLP) of protons through a homogeneous water object, and the accuracy of this approach has been tested experimentally with a homogeneous PMMA phantom. Inhomogeneities inside the phantom, consisting of, for example, air and bone, will lead to unavoidable inaccuracies in this approach. The purpose of this ongoing work is to characterize the systematic errors introduced by regions of bone and air density and how they affect the accuracy of proton CT in surrounding voxels, both in terms of spatial and density reconstruction accuracy. Phantoms containing tissue-equivalent inhomogeneities have been designed, and proton transport through them has been simulated with the GEANT 4.9.0 Monte Carlo toolkit. Various iterative reconstruction techniques, including the classical fully sequential algebraic reconstruction technique (ART) and block-iterative techniques, are currently being tested, and we will select the most accurate method for this study.

  14. A method for automatic 3D reconstruction based on multiple views from a free-mobile camera

    NASA Astrophysics Data System (ADS)

    Yu, Qingbing; Zhang, Zhijiang

    2004-09-01

    Automatic 3D reconstruction of an object from an image sequence is described. The reconstruction is based on multiple views from a free-mobile camera, with the object placed on a novel calibration pattern consisting of two concentric circles connected by radial line segments. Compared to other 3D reconstruction methods, the approach reduces the restrictions on the measurement environment and increases the user's flexibility. In the first step, the images of each view are calibrated individually to obtain camera information. The calibration pattern is separated from the input image with an erosion-dilation algorithm, and the calibration points can be extracted accurately from the pattern image after estimation of the two ellipses and the lines. Tsai's two-stage technique is used in the calibration process. In the second step, the 3D reconstruction of the real object is subdivided into two parts: shape reconstruction and texture mapping. Following the principle of "shape from silhouettes (SFS)", a bounding cone is constructed from each image using the calibration information and the silhouette; the intersection of all bounding cones defines an approximate geometric representation. Experiments with a real object yield a reconstruction error below 1%, which validates the method's efficiency and feasibility.
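
    The shape-from-silhouettes step amounts to keeping only the voxels whose projections fall inside every silhouette, i.e. intersecting the bounding cones. A compact sketch under the assumption of known projection matrices and binary silhouette images:

    ```python
    import numpy as np

    def carve(voxels, projections, silhouettes):
        """Visual-hull carving.
        voxels      : (n, 3) candidate voxel centers
        projections : list of 3x4 camera projection matrices
        silhouettes : list of binary images (H, W), nonzero = object
        """
        vh = np.hstack([voxels, np.ones((len(voxels), 1))])
        keep = np.ones(len(voxels), dtype=bool)
        for P, sil in zip(projections, silhouettes):
            x = vh @ P.T
            u = (x[:, 0] / x[:, 2]).round().astype(int)
            v = (x[:, 1] / x[:, 2]).round().astype(int)
            inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
            keep &= inside
            keep[inside] &= sil[v[inside], u[inside]] > 0
        return voxels[keep]    # approximate geometric representation
    ```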

  15. [Reconstruction of possible paths of the origin and morphological evolution of bacteriophages].

    PubMed

    Letarov, A V

    1998-11-01

    The problem of the origin and evolution of viruses, and in particular of bacteriophages, is of considerable interest. However, so far this problem has not been solved with quantitative methods of molecular systematics. In the present study, an attempt was made to reconstruct the possible paths of the appearance and evolution of bacteriophages based on their structural features and morphogenesis, as well as general characteristics of their life cycles and genome organization. A scheme describing the phylogeny of the main bacteriophage groups and the evolution of their life cycles is suggested. The existence of two independently evolving types of morphogenesis ("budding outward" and "budding inward") is postulated. PMID:10096023

  16. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study of multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial to professional grade and include a GoPro Hero 1080 (5 Mp), an iPhone 4S (8 Mp), a Panasonic Lumix LX5 (9.5 Mp), a Panasonic Lumix ZS20 (14.1 Mp) and a Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall about 6 m high and 20 m long. The wall is partly smooth, with some evident geological features such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured with a total station; these coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation with respect to the TLS data. The main goal of this study is to quantify, as objectively as possible, the quality of the multi-view 3D reconstruction results obtained with various cameras and to evaluate their applicability to geotechnical problems.
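
    The accuracy assessment boils down to nearest-neighbour cloud-to-cloud distances against the TLS reference. A small sketch of that comparison (scipy's k-d tree; CloudCompare's actual metrics are more elaborate):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_cloud(reference, test):
        """Distance from every test point to its nearest reference point.
        reference, test : (n, 3) arrays of georeferenced 3D points."""
        d, _ = cKDTree(reference).query(test)
        return d.mean(), d.std()     # summary deviation statistics
    ```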

  17. Three-dimensional reconstruction of helicopter blade-tip vortices using a multi-camera BOS system

    NASA Astrophysics Data System (ADS)

    Bauknecht, André; Ewers, Benjamin; Wolf, Christian; Leopold, Friedrich; Yin, Jianping; Raffel, Markus

    2015-01-01

    Noise and structural vibrations in rotorcraft are strongly influenced by interactions between blade-tip vortices and the structural components of a helicopter. As a result, knowing the three-dimensional location of vortices is highly desirable, especially for the case of full-scale helicopters under realistic flight conditions. In the current study, we present results from a flight test with a full-scale BO 105 in an open-pit mine. A background-oriented schlieren measurement system consisting of ten cameras with a natural background was used to visualize the vortices of the helicopter during maneuvering flight. Vortex filaments could be visualized and extracted up to a vortex age of 360°. Vortex instability effects were found for several flight conditions. For the camera calibration, an iterative approach using points on the helicopter fuselage was applied. Point correspondence between vortex curves in the evaluated images was established by means of epipolar geometry. A three-dimensional reconstruction of the main part of the vortex system was carried out for the first time using stereophotogrammetry. The reconstructed vortex system had good qualitative agreement with the result of an unsteady free-wake panel method simulation. A quantitative evaluation of the 3D vortex system was carried out, demonstrating the potential of the multi-camera background-oriented schlieren measurement technique for the analysis of blade-vortex interaction effects on rotorcraft.

  18. DIC image reconstruction using an energy minimization framework to visualize optical path length distribution

    PubMed Central

    Koos, Krisztian; Molnár, József; Kelemen, Lóránd; Tamás, Gábor; Horvath, Peter

    2016-01-01

    Label-free microscopy techniques have numerous advantages such as low phototoxicity, simple setup and no need for fluorophores or other contrast materials. Despite their advantages, most label-free techniques cannot visualize specific cellular compartments or the location of proteins and the image formation limits quantitative evaluation. Differential interference contrast (DIC) is a qualitative microscopy technique that shows the optical path length differences within a specimen. We propose a variational framework for DIC image reconstruction. The proposed method largely outperforms state-of-the-art methods on synthetic, artificial and real tests and turns DIC microscopy into an automated high-content imaging tool. Image sets and the source code of the examined algorithms are made publicly available. PMID:27453091
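
    The flavour of such a variational reconstruction can be conveyed in a few lines: seek an optical path length image u whose directional derivative along the shear direction matches the DIC image, with a smoothness penalty. This toy gradient-descent sketch is only a stand-in for the paper's framework (energy terms, discretization and step size are all illustrative):

    ```python
    import numpy as np

    def reconstruct_dic(I, theta=np.pi / 4, lam=0.1, step=0.2, n_iter=500):
        """Minimize ||d_theta(u) - I||^2 + lam * ||grad u||^2 by gradient descent."""
        c, s = np.cos(theta), np.sin(theta)

        def d_theta(f):                      # directional derivative (shear dir.)
            gy, gx = np.gradient(f)
            return c * gx + s * gy

        u = np.zeros_like(I)
        for _ in range(n_iter):
            resid = d_theta(u) - I
            gy, gx = np.gradient(u)
            lap = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
            u -= step * (-d_theta(resid) - lam * lap)   # -d_theta acts as adjoint
        return u
    ```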

  20. An irrotation correction on pressure gradient and orthogonal-path integration for PIV-based pressure reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Zhongyi; Gao, Qi; Wang, Chengyue; Wei, Runjie; Wang, Jinjun

    2016-06-01

    Particle image velocimetry (PIV)-based pressure reconstruction has become a popular technique in experimental fluid mechanics. Noise or errors in the raw velocity field significantly affect the quality of pressure reconstruction in PIV measurement. To reduce experimental errors in the pressure gradient and improve the precision of the reconstructed pressure field, a new technique based on a minimal 2-norm criterion, called irrotation correction (IC) with orthogonal decomposition, is developed. The pressure reconstruction is therefore composed of three steps: calculation of the pressure gradient from time-resolved PIV velocity fields, an irrotation correction of the pressure-gradient field, and finally a simple orthogonal-path integration (OPI) for the pressure. Systematic assessments of the IC algorithm are performed on synthetic solid-body rotation flow and on direct numerical simulations of a channel flow and an isotropic turbulent flow. The results show that IC is a robust algorithm that significantly improves the accuracy of pressure reconstruction, primarily in the low-wavenumber domain. After irrotation correction, the noisy pressure-gradient field ideally becomes irrotational, so that the pressure integration is independent of the integration path. An OPI algorithm is therefore proposed to perform the pressure integration efficiently with very few integration paths. This makes the new technique practical for three-dimensional pressure reconstruction at acceptable computational cost.
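
    The two central steps, projecting the noisy gradient onto a curl-free field and integrating it along orthogonal paths, can be sketched in 2D as follows (unit grid spacing, crude Neumann boundaries via edge padding; a stand-in for the authors' minimal 2-norm formulation, not their code):

    ```python
    import numpy as np

    def reconstruct_pressure(gx, gy, n_iter=2000):
        """Irrotation correction + orthogonal-path integration (2D sketch)."""
        # 1. Solve lap(phi) = div(g) so that grad(phi) is the curl-free
        #    part of the measured pressure gradient (Jacobi iteration).
        div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
        phi = np.zeros_like(div)
        for _ in range(n_iter):
            p = np.pad(phi, 1, mode='edge')
            phi = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                          p[1:-1, 2:] + p[1:-1, :-2] - div)
        gy_c, gx_c = np.gradient(phi)     # corrected, path-independent gradient
        # 2. OPI: integrate along the top row, then down each column.
        pr = np.zeros_like(phi)
        pr[0] = np.cumsum(gx_c[0])
        pr[1:] = pr[0] + np.cumsum(gy_c[1:], axis=0)
        return pr
    ```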

  1. Finding the shortest path with PesCa: a tool for network reconstruction

    PubMed Central

    Scardoni, Giovanni; Tosadori, Gabriele; Pratap, Sakshi; Spoto, Fausto; Laudanna, Carlo

    2016-01-01

    Network analysis is of growing interest in several fields ranging from economics to biology. Several methods have been developed to investigate different properties of physical networks abstracted as graphs, including quantification of specific topological properties, contextual data enrichment, simulation of pathway dynamics and visual representation. In this context, the PesCa app for the Cytoscape network analysis environment is specifically designed to help researchers infer and manipulate networks based on the shortest path principle. PesCa offers different algorithms allowing network reconstruction and analysis starting from a list of genes, proteins and in general a set of interconnected nodes. The app is useful in the early stage of network analysis, i.e. to create networks or generate clusters based on shortest path computation, but can also help further investigations and, in general, it is suitable for every situation requiring the connection of a set of nodes that apparently do not share links, such as isolated nodes in sub-networks. Overall, the plugin enhances the ability of discovering interesting and not obvious relations between high dimensional sets of interacting objects. PMID:27781081
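
    The core shortest-path expansion that PesCa automates can be imitated with a few lines of networkx (a hypothetical stand-in, not PesCa's code): take a background network and connect a seed set by pulling in every node on a shortest path between seed pairs.

    ```python
    import networkx as nx

    G = nx.karate_club_graph()          # stand-in background network
    seeds = [0, 15, 33]                 # e.g. genes/proteins of interest

    members = set(seeds)
    for i, a in enumerate(seeds):
        for b in seeds[i + 1:]:
            if nx.has_path(G, a, b):
                members.update(nx.shortest_path(G, a, b))

    subnet = G.subgraph(members)        # reconstructed connecting network
    print(sorted(subnet.nodes), subnet.number_of_edges())
    ```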

  2. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for the simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMTs) or silicon photomultipliers (SiPMs) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithms. A particular strength of the new module is its ability to reconstruct the light response functions and relative gains of the photomultipliers from flood-field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and with experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
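
    Of the three reconstruction options listed, the Center-of-Gravity estimate is simple enough to state in full: the event position is the signal-weighted mean of the sensor positions. A generic sketch (not ANTS code):

    ```python
    import numpy as np

    def center_of_gravity(signals, pmt_xy):
        """Anger-style event reconstruction.
        signals : (n_pmt,) charge recorded by each PMT/SiPM for one event
        pmt_xy  : (n_pmt, 2) sensor center positions
        Returns (estimated x-y position, total-charge energy surrogate)."""
        energy = signals.sum()
        return (signals / energy) @ pmt_xy, energy
    ```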

  3. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  4. Reconstruction of Indoor Models Using Point Clouds Generated from Single-Lens Reflex Cameras and Depth Images

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Wu, T.-S.; Lee, I.-C.; Chang, H.; Su, A. Y. S.

    2015-05-01

    This paper presents a data acquisition system consisting of multiple RGB-D (Kinect) sensors and digital single-lens reflex (DSLR) cameras, together with a systematic data processing procedure for integrating the two kinds of devices to generate three-dimensional point clouds of indoor environments. In the developed system, the DSLR cameras are used to bridge the Kinect sensors and provide a more accurate ray-intersection condition, taking advantage of their higher resolution and image quality. Structure from Motion (SFM) reconstruction is used to link and merge the multiple Kinect point clouds and the dense point clouds from the DSLR color images into initial integrated point clouds. Bundle adjustment is then used to resolve the exterior orientation (EO) of all images, and these exterior orientations serve as initial values for combining the per-frame point clouds into the same coordinate system using a Helmert (seven-parameter) transformation. Experimental results demonstrate that the data acquisition system and processing procedure can successfully generate dense, fully colored point clouds of indoor environments, even in featureless areas. The accuracy of the generated point clouds was evaluated by comparing the widths and heights of identified objects, as well as the coordinates of pre-set independent check points, against in situ measurements. Based on the generated point clouds, complete and accurate three-dimensional models of indoor environments can be constructed effectively.
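
    The Helmert (seven-parameter) step estimates a scale, rotation and translation between matched point sets. A compact sketch using Umeyama's closed-form solution (a standard way to fit such a similarity transform; not necessarily the authors' solver):

    ```python
    import numpy as np

    def fit_helmert(src, dst):
        """Fit dst ~ s * R @ src + t from matched 3D points (n >= 3)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))     # cross-covariance
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt                                   # proper rotation
        s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```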

  5. Filtered back-projection reconstruction for attenuation proton CT along most likely paths

    NASA Astrophysics Data System (ADS)

    Quiñones, C. T.; Létang, J. M.; Rit, S.

    2016-05-01

    This work investigates the attenuation of a proton beam in order to reconstruct a map of the linear attenuation coefficient of a material, which is mainly determined by the inelastic interactions of protons with matter. Attenuation proton computed tomography (pCT) suffers from poor spatial resolution due to multiple Coulomb scattering (MCS) of protons in matter, similarly to conventional energy-loss pCT. We therefore adapted a recent filtered back-projection algorithm along the most likely path (MLP) of protons for energy-loss pCT (Rit et al 2013) to attenuation pCT, assuming a pCT scanner that can track the position and direction of protons before and after the scanned object. Monte Carlo simulations of pCT acquisitions of density and spatial-resolution phantoms were performed to characterize the new algorithm using Geant4 (via Gate). Attenuation pCT assumes an energy-independent inelastic cross-section; the energy dependence of the inelastic cross-section below 100 MeV manifested as a capping artifact when the residual energy behind the object fell below 100 MeV. The statistical limitation was determined analytically, and it was found that the noise in attenuation pCT images is 411 times and 278 times higher than the noise in energy-loss pCT images for the same imaging dose at 200 MeV and 300 MeV, respectively. Comparison of the spatial resolution of attenuation pCT images with a conventional straight-line-path binning showed that incorporating the MLP estimates during reconstruction improves the spatial resolution of attenuation pCT. Moreover, despite the significant noise in attenuation pCT images, the spatial resolution of attenuation pCT was better than that of conventional energy-loss pCT in some of the studied situations, thanks to the interplay of MCS and attenuation known as the West-Sherwood effect.

  6. Reconstructing the landing trajectory of the CE-3 lunar probe by using images from the landing camera

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Jun; Yan, Wei; Li, Chun-Lai; Tan, Xu; Ren, Xin; Mu, Ling-Li

    2014-12-01

    An accurate determination of the landing trajectory of Chang'e-3 (CE-3) is significant for verifying orbital control strategy, optimizing orbital planning, accurately determining the landing site of CE-3 and analyzing the geological background of the landing site. Due to complexities involved in the landing process, there are some differences between the planned trajectory and the actual trajectory of CE-3. The landing camera on CE-3 recorded a sequence of the landing process with a frequency of 10 frames per second. These images recorded by the landing camera and high-resolution images of the lunar surface are utilized to calculate the position of the probe, so as to reconstruct its precise trajectory. This paper proposes using the method of trajectory reconstruction by Single Image Space Resection to make a detailed study of the hovering stage at a height of 100 m above the lunar surface. Analysis of the data shows that the closer CE-3 came to the lunar surface, the higher the spatial resolution of images that were acquired became, and the more accurately the horizontal and vertical position of CE-3 could be determined. The horizontal and vertical accuracies were 7.09 m and 4.27 m respectively during the hovering stage at a height of 100.02 m. The reconstructed trajectory can reflect the change in CE-3's position during the powered descent process. A slight movement in CE-3 during the hovering stage is also clearly demonstrated. These results will provide a basis for analysis of orbit control strategy, and it will be conducive to adjustment and optimization of orbit control strategy in follow-up missions.

  7. Increasing signal-to-noise ratio of reconstructed digital holograms by using light spatial noise portrait of camera's photosensor

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.

    2015-01-01

    Digital holography is a technique that involves recording an interference pattern with a digital photosensor, processing the obtained holographic data, and reconstructing the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in fields such as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor's light spatial noise portrait (LSNP) is proposed to increase the SNR of reconstructed digital holograms. To verify the proposed method, numerical experiments with computer-generated Fresnel holograms with a resolution of 512×512 elements were performed, simulating shot registration with a Canon EOS 400D digital camera. It is shown that frame averaging alone increases the SNR only up to 4 times, with further increase limited by spatial noise. Applying the LSNP compensation method in conjunction with frame averaging allows a 10-fold SNR increase; this value was obtained for an LSNP measured with 20% error. With a more accurately measured LSNP, the SNR can be increased up to 20 times.
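
    A rough sketch of the two-stage idea: averaging registered frames suppresses temporal noise by about the square root of the frame count, after which the measured LSNP, treated here as a multiplicative photoresponse-nonuniformity map (an assumption; the authors' procedure may differ in detail), removes the remaining fixed-pattern component:

    ```python
    import numpy as np

    def compensate(frames, lsnp):
        """frames : (N, H, W) registered shots; lsnp : (H, W) gain map
        normalized to unit mean, measured from flood-field exposures."""
        mean = frames.mean(axis=0)               # temporal noise ~ 1/sqrt(N)
        return mean / np.maximum(lsnp, 1e-6)     # remove fixed-pattern noise
    ```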

  8. Temporal resolved x-ray penumbral imaging technique using heuristic image reconstruction procedure and wide dynamic range x-ray streak camera

    SciTech Connect

    Fujioka, Shinsuke; Shiraga, Hiroyuki; Azechi, Hiroshi; Nishimura, Hiroaki; Izawa, Yasukazu; Nozaki, Shinya; Chen, Yen-wei

    2004-10-01

    Temporally resolved x-ray penumbral imaging has been developed using an image reconstruction procedure based on a heuristic method and a wide-dynamic-range x-ray streak camera (XSC). The reconstruction procedure of penumbral imaging is inherently intolerant of noise: a reconstructed image is strongly distorted by artifacts caused by noise in the penumbral image. Statistical fluctuation in the number of detected photons is the dominant noise source in an x-ray image, while the acceptable brightness of an image is limited by the dynamic range of the XSC. The wide-dynamic-range XSC was used to obtain penumbral images bright enough to be reconstructed, and the heuristic method was introduced into the penumbral image reconstruction procedure. With these improvements, distortion of the reconstructed images is sufficiently suppressed. Density profiles of laser-driven brominated plastic and tin plasmas were measured with this technique.

  9. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas, whereas a 3D model should contain detailed descriptions of both appearance and internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, providing a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes, as well as physical property information such as materials and textures, from which a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  10. Probabilistic models and numerical calculation of system matrix and sensitivity in list-mode MLEM 3D reconstruction of Compton camera images.

    PubMed

    Maxim, Voichita; Lojacono, Xavier; Hilaire, Estelle; Krimmer, Jochen; Testa, Etienne; Dauvergne, Denis; Magnin, Isabelle; Prost, Rémy

    2016-01-01

    This paper addresses the problem of evaluating the system matrix and the sensitivity for iterative reconstruction in Compton camera imaging. Proposed models and numerical calculation strategies are compared through the influence they have on the three-dimensional reconstructed images. The study addresses four points: first, it proposes an analytic model for the system matrix; second, it suggests a method for its numerical validation with Monte Carlo simulated data; third, it compares analytical models of the sensitivity factors with Monte Carlo simulated values; finally, it shows how the system matrix and the sensitivity calculation strategies influence the quality of the reconstructed images.

  11. Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction.

    PubMed

    Albiol, Francisco; Corbi, Alberto; Albiol, Alberto

    2016-08-01

    We present a methodology for recovering the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials present in the scene. After calibration, equivalent points of interest can be easily identified with the help of epipolar geometry. The same procedure also allows the measurement of real anatomical lengths and angles, and obtains accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two frameworks are envisioned: a spatially shifting X-ray anode moving around the patient/object, and a patient who moves or rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom and a real brachytherapy session were carried out. The results show that it is possible to identify common points with a proper level of accuracy and to retrieve three-dimensional locations, lengths and shapes with millimetric precision. The presented approach is simple and compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and it can represent a good, inexpensive alternative to other radiological modalities such as CT. PMID:26978665

  12. Efficient Numerical Reconstruction of Protein Folding Kinetics with Partial Path Sampling and Pathlike Variables

    NASA Astrophysics Data System (ADS)

    Juraszek, J.; Saladino, G.; van Erp, T. S.; Gervasio, F. L.

    2013-03-01

    Numerically predicting rate constants of protein folding and other relevant biological events is still a significant challenge. We show that the combination of partial path transition interface sampling with the optimal interfaces and free-energy profiles provided by path collective variables makes the rate calculation for practical biological applications feasible and efficient. This methodology can reproduce the experimental rate constant of Trp-cage miniprotein folding with the same level of accuracy as transition path sampling at a fraction of the cost.

  13. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. In the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.

  14. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two synchronized high-speed cameras with a 500 Hz frame rate and ~2 cm resolution were set up ~350 m from the crater, 10° apart. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce the uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.

  15. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: disturbance effects and species recognition.

    PubMed

    McEvoy, John F; Hall, Graham P; McDonald, Paul G

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed at two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60 m above the water level (fixed-wing models) or 40 m above individuals (multirotor models). Disturbance, ranging from swimming away from the UAV to leaving the water surface and flying away from the UAV, was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys. PMID:27020132

  18. A Proposal and Implement of Detection and Reconstruction Method of Contact Shape with Horizon View Camera for Calligraphy Education Support System

    NASA Astrophysics Data System (ADS)

    Tobitani, Kensuke; Yamamoto, Kazuhiko; Kato, Kunihito

    In this study, we are concerned with a calligraphy education support system. In current calligraphy education in Japan, teachers evaluate characters written by students and teach the correct writing process based on that evaluation. Professionals in calligraphy can estimate the writing process and the balance of a character, both important points in its evaluation, by estimating the movement of the contact shape (the contact face between brush and paper). In this style of education, however, it takes students a long time to learn how to write characters correctly. If teachers and students could observe the movement of the contact shape, calligraphy education would be more efficient. Detecting the contact shape from images captured by cameras at ordinary angles is difficult, because the brush and the ink are both black and the contact shape is hidden under the brush. In this paper, we propose a new camera system consisting of four Horizon View Cameras (HVC), a special camera arrangement for detecting and reconstructing the contact shape; we report experiments with this system and compare the contact-shape movement of professionals and amateurs.

  19. Estimating where and how animals travel: an optimal framework for path reconstruction from autocorrelated tracking data.

    PubMed

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2016-03-01

    An animal's trajectory is a fundamental object of interest in movement ecology, as it directly informs a range of topics from resource selection to energy expenditure and behavioral states. Optimally inferring the mostly unobserved movement path and its dynamics from a limited sample of telemetry observations is a key unsolved problem, however. The field of geostatistics has focused significant attention on a mathematically analogous problem that has a statistically optimal solution coined after its inventor, Krige. Kriging revolutionized geostatistics and is now the gold standard for interpolating between a limited number of autocorrelated spatial point observations. Here we translate Kriging for use with animal movement data. Our Kriging formalism encompasses previous methods to estimate animal's trajectories--the Brownian bridge and continuous-time correlated random walk library--as special cases, informs users as to when these previous methods are appropriate, and provides a more general method when they are not. We demonstrate the capabilities of Kriging on a case study with Mongolian gazelles where, compared to the Brownian bridge, Kriging with a more optimal model was 10% more precise in interpolating locations and 500% more precise in estimating occurrence areas. PMID:27197385
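
    As a concrete sketch of the interpolation step, the following minimal Python example performs simple-kriging interpolation of a one-dimensional track under an assumed Ornstein-Uhlenbeck (exponential) position covariance. It is not the authors' implementation; the kernel parameters, function names, and toy telemetry values are illustrative assumptions.

        import numpy as np

        def ou_cov(t1, t2, sigma2=1.0, tau=5.0):
            # Ornstein-Uhlenbeck (exponential) covariance between fix times
            return sigma2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / tau)

        def krige_track(t_obs, x_obs, t_query, noise2=0.01):
            # simple-kriging weights: w = K_oo^{-1} K_oq
            K = ou_cov(t_obs, t_obs) + noise2 * np.eye(len(t_obs))
            k = ou_cov(t_obs, t_query)
            w = np.linalg.solve(K, k)                  # (n_obs, n_query)
            mean = w.T @ x_obs                         # interpolated positions
            var = ou_cov(t_query, t_query).diagonal() - np.sum(k * w, axis=0)
            return mean, var                           # estimate and its uncertainty

        # toy telemetry fixes: times in hours, one coordinate for brevity
        t = np.array([0.0, 1.0, 2.5, 4.0])
        x = np.array([0.0, 0.8, 2.1, 3.9])
        mu, v = krige_track(t, x, np.linspace(0.0, 4.0, 9))

    The per-query variance is what an occurrence-area estimate would integrate over, which is where the precision gains reported above come from.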

  1. Digital X-ray camera for quality evaluation three-dimensional topographic reconstruction of single crystals of biological macromolecules

    NASA Technical Reports Server (NTRS)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  2. Development of event reconstruction algorithm for full-body gamma-camera based on SiPMs

    NASA Astrophysics Data System (ADS)

    Philippov, D. E.; Belyaev, V. N.; Buzhan, P. Zh; Ilyin, A. L.; Popova, E. V.; Stifutkin, A. A.

    2016-02-01

    The gamma-camera is a detector for nuclear medical imaging in which the photomultiplier tubes (PMTs) can be replaced by silicon photomultipliers (SiPMs). Common systems have an energy resolution of about 10% and an intrinsic spatial resolution of about 3 mm (FWHM). To achieve the required energy and spatial resolution, the classical Anger logic must be modified. For a standard monolithic thallium-activated sodium iodide scintillator (500x400x10 mm3) with SiPM readout, this can be done by identifying clusters. We show that this approach gives good results on simulated data.
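
    The cluster modification is easiest to see against the classical Anger logic it refines. The sketch below computes the plain Anger estimate, the signal-weighted centroid of the photodetector positions; a cluster-based variant would restrict the sum to the SiPMs around the brightest pixel. The 2x2 layout and signal values are illustrative assumptions, not the paper's geometry.

        import numpy as np

        def anger_position(signals, xy):
            # event position = signal-weighted centroid of detector positions
            s = np.asarray(signals, dtype=float)
            return (s[:, None] * xy).sum(axis=0) / s.sum()

        xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # mm
        print(anger_position([5.0, 20.0, 4.0, 12.0], xy))   # ~ [7.8, 3.9] mm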

  3. A new target reconstruction method considering atmospheric refraction

    NASA Astrophysics Data System (ADS)

    Zuo, Zhengrong; Yu, Lijuan

    2015-12-01

    In this paper, a new target reconstruction method considering atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea of the method is that the atmosphere between the camera and the target is partitioned radially into several thin layers, within each of which the density is regarded as uniform; the light propagation path is then traced in reverse from sensor to target by applying Snell's law at the interface between layers; finally, the average of the tracked target positions from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results showed that the new method has much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
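
    A minimal sketch of the layer-by-layer use of Snell's law on which the reverse ray tracing rests, assuming a simple stack of uniform-density layers; the refractive indices are illustrative, and the per-layer intersection geometry is omitted.

        import numpy as np

        def refract_through_layers(theta0_deg, n_layers):
            # apply n_i * sin(theta_i) = n_{i+1} * sin(theta_{i+1}) at each interface
            theta = np.radians(theta0_deg)
            angles = [theta]
            for n1, n2 in zip(n_layers[:-1], n_layers[1:]):
                s = n1 * np.sin(theta) / n2
                if abs(s) > 1.0:          # total internal reflection: ray turns back
                    break
                theta = np.arcsin(s)
                angles.append(theta)
            return np.degrees(angles)

        # air density (hence n) decreasing along the path; illustrative values only
        print(refract_through_layers(60.0, [1.000293, 1.000280, 1.000265, 1.000250]))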

  4. SLAM using camera and IMU sensors.

    SciTech Connect

    Rothganger, Fredrick H.; Muguira, Maritza M.

    2007-01-01

    Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
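
    The report's modified filter is not reproduced here, but the generic extended Kalman filter cycle it builds on looks like the following sketch; the motion model f (IMU propagation), measurement model h (camera projection), and their Jacobians F and H are assumed to be supplied by the caller.

        import numpy as np

        def ekf_step(x, P, u, z, f, h, F, H, Q, R):
            # predict: propagate state and covariance with the IMU-driven model
            x_pred = f(x, u)
            P_pred = F @ P @ F.T + Q
            # update: correct with the camera measurement z
            y = z - h(x_pred)                        # innovation
            S = H @ P_pred @ H.T + R                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new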

  5. Field experiment and image reconstruction using a Fourier telescopy imaging system over a 600-m-long horizontal path.

    PubMed

    Yu, Shu-Hai; Dong, Lei; Liu, Xin-Yue; Lin, Xu-Dong; Megn, Hao-Ran; Zhong, Xing

    2016-08-20

    To confirm the effect of uplink atmospheric turbulence on Fourier telescopy (FT), we designed a system for far-field imaging, utilizing a T-type laser transmitting configuration with commercially available hardware, except for a green imaging laser. The horizontal light transmission distance for both uplink and downlink was ∼300  m. For both the transmitting and received beams, the height upon the ground was below 1 m. The imaging laser's pointing accuracy was ∼9.3  μrad. A novel image reconstruction approach was proposed, yielding significantly improved quality and Strehl ratio of reconstructed images. From the reconstruction result, we observed that the tip/tilt aberration is tolerated by the FT system even for Changchun's atmospheric coherence length parameter (r0) below 3 cm. The resolution of the reconstructed images was ∼0.615  μrad. PMID:27556991

  6. Open-path TDL-Spectrometry for a Tomographic Reconstruction of 2D H2O-Concentration Fields in the Soil-Air-Boundary-Layer of Permafrost

    NASA Astrophysics Data System (ADS)

    Seidel, Anne; Wagner, Steven; Dreizler, Andreas; Ebert, Volker

    2013-04-01

    0.9 ppmv·m/√Hz. For absorption path lengths of up to 2 m and time resolution of 0.2 sec, we attained detection limits of 1 ppmv. Furthermore, we realized a wide dynamic range covering concentrations between 200 ppmv and 12300 ppmv. The static spectrometer will now be extended to a spatially scanning TDL sensor using rapidly rotating polygon mirrors. In combination with tomographic reconstruction methods, spatially resolved 2D-fields will be measured and retrieved. The aim is to capture concentration fields with at least 1 m2 spatial coverage with concentration detection faster than 1 Hz rate. We simulated various measurements from typical concentration distributions ("phantoms") and used Algebraic Reconstruction Techniques (ART) to compute the corresponding 2D-fields. The reconstructions look very promising and demonstrate the potential of the measurement method. In the presentation we will describe and discuss the optical setup of the stationary instrument and explain the concept of extending this instrument to a spatially scanning tomographic TDL instrument for soil studies. Further, we present first results evaluating the capabilities of the selected ART reconstruction on tomographic phantoms. [1] E. Schuur, J. G. Vogel, K. G. Crummer, H. Lee, J. O. Sickman, and T. E. Osterkamp, "The effect of permafrost thaw on old carbon release and net carbon exchange from tundra," Nature, vol. 459, no. 7246, pp. 556-9, May 2009. [2] A. Seidel, S. Wagner, and V. Ebert, "TDLAS-based open-path laser hygrometer using simple reflective foils as scattering targets," Applied Physics B, vol. 109, no. 3, pp. 497-504, Oct. 2012.
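
    For the tomographic step, the ART family evaluated above can be illustrated with a plain Kaczmarz sweep over the path-integral equations a_i . x = b_i, where row a_i holds the path lengths of beam i through the grid cells and b_i is the measured path-integrated concentration. This is a generic textbook form, not the specific variant used in the study.

        import numpy as np

        def art(A, b, n_sweeps=20, relax=0.5):
            # cyclically project the image onto each measurement hyperplane
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for a_i, b_i in zip(A, b):
                    denom = a_i @ a_i
                    if denom > 0.0:
                        x += relax * (b_i - a_i @ x) / denom * a_i
            return np.clip(x, 0.0, None)   # concentrations are non-negative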

  7. Snapshot polarimeter fundus camera.

    PubMed

    DeHoog, Edward; Luo, Haitao; Oka, Kazuhiko; Dereniak, Eustace; Schwiegerling, James

    2009-03-20

    A snapshot imaging polarimeter utilizing Savart plates is integrated into a fundus camera for retinal imaging. Acquired retinal images can be processed to reconstruct Stokes vector images, giving insight into the polarization properties of the retina. Results for images from a normal healthy retina and retinas with pathology are examined and compared. PMID:19305463

  8. Nanoscale three-dimensional reconstruction of elastic and inelastic mean free path lengths by electron holographic tomography

    SciTech Connect

    Lubk, A.; Wolf, D.; Kern, F.; Röder, F.; Lichte, H.; Prete, P.; Lovergine, N.

    2014-10-27

    Electron holography at medium resolution simultaneously probes projected electrostatic and magnetostatic potentials as well as elastic and inelastic attenuation coefficients with a spatial resolution of a few nanometers. In this work, we derive how the elastic and inelastic attenuation can be disentangled. Using that result, we perform the first three-dimensional tomographic reconstruction of potential and (in)elastic attenuation in parallel. The technique can be applied to distinguish between functional potentials and composition changes in nanostructures, as demonstrated using the example of a GaAs-Al0.33Ga0.67As core-shell nanowire.

  9. How physics teachers approach innovation: An empirical study for reconstructing the appropriation path in the case of special relativity

    NASA Astrophysics Data System (ADS)

    de Ambrosis, Anna; Levrini, Olivia

    2010-07-01

    This paper concerns an empirical study carried out with a group of high school physics teachers engaged in the Module on relativity of a Master course on the teaching of modern physics. The study is framed within the general research issue of how to promote innovation in school via teachers’ education and how to foster fruitful interactions between research and school practice via the construction of networks of researchers and teachers. In the paper, the problems related to innovation are addressed by focusing on the phase during which teachers analyze an innovative teaching proposal in the perspective of designing their own paths for the class work. The proposal analyzed in this study is Taylor and Wheeler’s approach for teaching special relativity. The paper aims to show that the roots of problems known in the research literature about teachers’ difficulties in coping with innovative proposals, and usually related to the implementation process, can be found and addressed already when teachers approach the proposal and try to appropriate it. The study is heuristic and has been carried out in order to trace the “appropriation path,” followed by the group of teachers, in terms of the main steps and factors triggering the progressive evolution of teachers’ attitudes and competences.

  10. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  11. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology uses adaptively, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  12. SU-E-J-141: Activity-Equivalent Path Length Approach for the 3D PET-Based Dose Reconstruction in Proton Therapy

    SciTech Connect

    Attili, A; Vignati, A; Giordanengo, S; Kraan, A; Dalmasso, F; Battistoni, G

    2015-06-15

    Purpose: Ion beam therapy is sensitive to uncertainties from treatment planning and dose delivery. PET imaging of induced positron emitter distributions is a practical approach for in vivo, in situ verification of ion beam treatments. Treatment verification is usually done by comparing measured activity distributions with reference distributions, evaluated in nominal conditions. Although such comparisons give valuable information on treatment quality, a proper clinical evaluation of the treatment ultimately relies on the knowledge of the actual delivered dose. Analytical deconvolution methods relating activity and dose have been studied in this context, but were not clinically applied. In this work we present a feasibility study of an alternative approach for dose reconstruction from activity data, which is based on relating variations in accumulated activity to tissue density variations. Methods: First, reference distributions of dose and activity were calculated from the treatment plan and CT data. Then, the actual measured activity data were cumulatively matched with the reference activity distributions to obtain a set of activity-equivalent path lengths (AEPLs) along the rays of the pencil beams. Finally, these AEPLs were used to deform the original dose distribution, yielding the actual delivered dose. The method was tested by simulating a proton therapy treatment plan delivering 2 Gy on a homogeneous water phantom (the reference), which was compared with the same plan delivered on a phantom containing inhomogeneities. Activity and dose distributions were calculated by means of the FLUKA Monte Carlo toolkit. Results: The main features of the observed dose distribution in the inhomogeneous situation were reproduced using the AEPL approach. Variations in particle range were reproduced and the positions, where these deviations originated, were properly identified. Conclusions: For a simple inhomogeneous phantom the 3D dose reconstruction from PET
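
    The cumulative-matching step that yields the AEPLs can be sketched as follows: normalize the reference and measured cumulative activity profiles along one pencil-beam ray, then invert the reference curve at the measured levels. The Gaussian toy profiles below stand in for real activity data.

        import numpy as np

        def aepl(depth, act_ref, act_meas):
            # map each depth to the reference depth of equal cumulative activity
            c_ref = np.cumsum(act_ref)
            c_meas = np.cumsum(act_meas)
            c_ref /= c_ref[-1]
            c_meas /= c_meas[-1]
            return np.interp(c_meas, c_ref, depth)

        depth = np.linspace(0.0, 150.0, 151)                # mm along one ray
        ref = np.exp(-0.5 * ((depth - 90.0) / 25.0) ** 2)   # toy reference activity
        meas = np.exp(-0.5 * ((depth - 80.0) / 25.0) ** 2)  # range shifted by tissue
        print(aepl(depth, ref, meas)[:5])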

  13. A Compton camera application for the GAMOS GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Harkness, L. J.; Arce, P.; Judson, D. S.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Dormand, J.; Jones, M.; Nolan, P. J.; Sampson, J. A.; Scraggs, D. P.; Sweeney, A.; Lazarus, I.; Simpson, J.

    2012-04-01

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.
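
    The electronic collimation mentioned above rests on Compton kinematics: the two energy deposits of an event fix the opening angle of a cone of possible incident directions. A minimal sketch, assuming a two-stage (scatterer/absorber) event with full absorption:

        import numpy as np

        M_E_C2 = 511.0  # electron rest energy in keV

        def compton_cone_angle(e_scatter, e_absorb):
            # cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0), E0 = total, E' = scattered photon
            e0 = e_scatter + e_absorb
            cos_t = 1.0 - M_E_C2 * (1.0 / e_absorb - 1.0 / e0)
            if not -1.0 <= cos_t <= 1.0:
                raise ValueError("kinematically inconsistent event")
            return np.degrees(np.arccos(cos_t))

        print(compton_cone_angle(200.0, 462.0))  # e.g. a 662 keV Cs-137 photon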

  14. Space Camera

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Nikon's F3 35mm camera was specially modified for use by Space Shuttle astronauts. The modification work produced a spinoff lubricant. Because lubricants in space have a tendency to migrate within the camera, Nikon conducted extensive development to produce nonmigratory lubricants; variations of these lubricants are used in the commercial F3, giving it better performance than conventional lubricants. Another spinoff is the coreless motor which allows the F3 to shoot 140 rolls of film on one set of batteries.

  15. Infrared Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays of infrared photodetectors known as quantum well infrared photodetectors (QWIPs). QWIPs were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  16. Reconstructing the protracted P-T-t-d path of a giant ultrahigh-pressure terrane: Linking in-situ techniques with multiple methods of conventional geochronology (Invited)

    NASA Astrophysics Data System (ADS)

    Kylander-Clark, A. R.; Hacker, B. R.

    2010-12-01

    The processes that govern the genesis of ultrahigh-pressure (UHP) terranes directly affect those that control the growth and decay of mountain belts, the modification of continental crust, the geochemical evolution of the mantle, and the forces acting on tectonic plates. As such, our understanding of the timing, rates, and/or depths of subduction, exhumation, residence time, deformation, melting and cooling is critical in understanding the aforementioned geologic phenomena. No one sample location or geochronologic technique can properly reconstruct the pressure-temperature-time-deformation history of a terrane, and older terranes are further hampered by the necessity for high-precision geochronologic data that in-situ techniques cannot provide. In this study, we present an array of geochronologic techniques, which allow us to reconstruct much of the P-T-t-d path of the Western Gneiss Region (WGR) UHP terrane, western Norway. These ages can only be linked to specific events by determining the chemical and/or petrologic relation between the chronometer and the whole rock. The Western Gneiss Region was formed in a portion of Baltica that was subducted westward beneath Laurentia during the Caledonian orogeny in the Silurian to Late Devonian. Garnets of eclogites with high-Lu cores (analyzed via LA-ICPMS), yield the oldest Lu-Hf ages, indicating that subduction began by ~420 Ma. Eclogite-facies conditions are commonly linked to zircon ages (ca. 415-400 Ma), although few studies report both petrologic and chemical links to such conditions; rare-earth element data must be compared to either inherited cores or the whole rock. Chemical-abrasion TIMS ages of zircon from weakly- to non-deformed dikes indicate that melting occurred at ca. 400 Ma and subsequent deformation in these areas was minimal. LA-ICPMS data from these melt-related zircons indicate that melting took place at sub-eclogite-facies pressures and thus that initial exhumation of the WGR was rapid. Conventional

  17. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system as well as navigating based on the 2D projection images can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for a better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology preserving thinning algorithm is then applied and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations like virtual endoscopic views or glass pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass pipe view to further improve the spatial orientation. The proposed techniques could considerably improve
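
    The epipolar reconstruction step can be illustrated with standard linear (DLT) triangulation of one centerline point from two calibrated views; the projection matrices and 100 mm baseline below are illustrative, not the geometry of a real biplane system.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            # linear triangulation: stack the DLT constraints, take the SVD null vector
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]                       # dehomogenize

        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                      # view 1
        P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])    # 100 mm baseline
        X_true = np.array([10.0, 20.0, 500.0, 1.0])
        x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
        x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
        print(triangulate(P1, P2, x1, x2))            # ~ [10, 20, 500]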

  18. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  20. Nikon Camera

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The Nikon FM compact has a simplification feature derived from cameras designed for easy yet accurate use in a weightless environment. The innovation is a plastic-cushioned advance lever which advances the film and simultaneously switches on a built-in light meter. With a turn of the lens aperture ring, a glowing signal in the viewfinder confirms correct exposure.

  1. Neutron imaging camera

    NASA Astrophysics Data System (ADS)

    Hunter, S. D.; de Nolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-04-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, ~0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, En > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. The performance of the NIC in laboratory tests is presented.

  2. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, En > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.
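
    In its simplest form, the direction reconstruction described in both records is momentum conservation in the 3He(n,p)3H reaction: the incident neutron momentum is approximately the vector sum of the proton and triton momenta. A minimal sketch that ignores the reaction Q-value and target motion:

        import numpy as np

        def neutron_direction(p_proton, p_triton):
            # neutron momentum ~ vector sum of the fragment momenta (target at rest)
            p_n = np.asarray(p_proton) + np.asarray(p_triton)
            return p_n / np.linalg.norm(p_n)

        print(neutron_direction([120.0, 10.0, 5.0], [80.0, -6.0, 1.0]))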

  3. Holographic motion picture camera with Doppler shift compensation

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.

  4. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10^-8 seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  5. Path Finder

    SciTech Connect

    Rigdon, J. Brian; Smith, Marcus Daniel; Mulder, Samuel A

    2014-01-07

    PathFinder is a graph search program, traversing a directed cyclic graph to find pathways between labeled nodes. Searches for paths through ordered sequences of labels are termed signatures. Determining the presence of signatures within one or more graphs is the primary function of Path Finder. Path Finder can work in either batch mode or interactively with an analyst. Results are limited to whether or not a given signature is present in the graph(s).
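
    A signature search of this kind can be sketched as a depth-first traversal that advances through the ordered label sequence as matching nodes are visited; the toy graph, labels, and cycle handling below are illustrative assumptions, not PathFinder's actual data model.

        from collections import defaultdict

        def find_signature(edges, labels, signature):
            # DFS for a path whose node labels match the signature in order
            graph = defaultdict(list)
            for u, v in edges:
                graph[u].append(v)

            def dfs(node, idx, path):
                if labels.get(node) == signature[idx]:
                    idx += 1
                    if idx == len(signature):
                        return path
                for nxt in graph[node]:
                    if nxt not in path:               # do not revisit (cycle guard)
                        hit = dfs(nxt, idx, path + [nxt])
                        if hit:
                            return hit
                return None

            for start in list(graph):
                hit = dfs(start, 0, [start])
                if hit:
                    return hit
            return None

        edges = [("a", "b"), ("b", "c"), ("c", "a"), ("b", "d")]
        labels = {"a": "red", "b": "green", "d": "blue"}
        print(find_signature(edges, labels, ["red", "green", "blue"]))  # ['a', 'b', 'd']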

  6. Calibration method for a central catadioptric-perspective camera system.

    PubMed

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot determine the extrinsic parameters between the central catadioptric camera and a perspective camera effectively. We present a novel calibration method for a central catadioptric-perspective camera system, in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera and faced toward the calibration pattern. The transformation between the virtual camera and the calibration pattern could be computed first and the extrinsic parameters between the central catadioptric camera and the calibration pattern could be obtained. Three-dimensional reconstruction results of the calibration pattern show a high accuracy and validate the feasibility of our method.

  7. High-resolution light field reconstruction using a hybrid imaging system.

    PubMed

    Wang, Xiang; Li, Lin; Hou, GuangQi

    2016-04-01

    Recently, light field cameras have drawn much attraction for their innovative performance in photographic and scientific applications. However, narrow baselines and constrained spatial resolution of current light field cameras impose restrictions on their usability. Therefore, we design a hybrid imaging system containing a light field camera and a high-resolution digital single lens reflex camera, and these two kinds of cameras share the same optical path with a beam splitter so as to achieve the reconstruction of high-resolution light fields. The high-resolution 4D light fields are reconstructed with a phase-based perspective variation strategy. First, we apply complex steerable pyramid decomposition on the high-resolution image from the digital single lens reflex camera. Then, we perform phase-based perspective-shift processing with the disparity value, which is extracted from the upsampled light field depth map, to create high-resolution synthetic light field images. High-resolution digital refocused images and high-resolution depth maps can be generated in this way. Furthermore, controlling the magnitude of the perspective shift enables us to change the depth of field rendering in the digital refocused images. We show several experimental results to demonstrate the effectiveness of our approach.

  8. SPEIR: A Ge Compton Camera

    SciTech Connect

    Mihailescu, L; Vetter, K M; Burks, M T; Hull, E L; Craig, W W

    2004-02-11

    The SPEctroscopic Imager for γ-Rays (SPEIR) is a new concept of a compact γ-ray imaging system of high efficiency and spectroscopic resolution with a 4π field-of-view. The system behind this concept employs double-sided segmented planar Ge detectors accompanied by the use of list-mode photon reconstruction methods to create a sensitive, compact Compton scatter camera.

  9. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution. 3-D tracking of charged particles. The incident direction of fast neutrons, E(sub N) > 0.5 MeV. arc reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.

  10. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  11. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For the purpose we integrated video surveillance data with a 3D indoor model of the building and developed a single human moving path tracking method. We process the surveillance videos to detect single human moving traces; then we match the depth information of 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. The provided experiments with a single person have verified the effectiveness and robustness of the method.

  12. Reconstruction of passive open-path FTIR ambient spectra using meteorological measurements and its application for detection of aerosol cloud drift.

    PubMed

    Kira, Oz; Dubowski, Yael; Linker, Raphael

    2015-07-27

    Remote sensing of atmospheric aerosols is of great importance to public and environmental health. This research promotes a simple way of detecting an aerosol cloud using a passive Open Path FTIR (OP-FTIR) system, without utilizing radiative transfer models and without relying on an artificial light source. Meteorological measurements (temperature, relative humidity and solar irradiance), and chemometric methods (multiple linear regression and artificial neural networks) together with previous cloud-free OP-FTIR measurements were used to estimate the ambient spectrum in real time. The cloud detection process included a statistical comparison between the estimated cloud-free signal and the measured OP-FTIR signal. During the study we were able to successfully detect several aerosol clouds (water spray) in controlled conditions as well as during agricultural pesticide spraying in an orchard.
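
    The multiple-linear-regression variant of the baseline estimation can be sketched as follows: fit each spectral channel against the meteorological predictors on cloud-free training data, then flag a cloud when the residual of a new measurement is anomalously large. The predictor layout and threshold are illustrative assumptions.

        import numpy as np

        def fit_baseline(met, spectra):
            # met: (n, 3) temperature, RH, irradiance; spectra: (n, channels)
            X = np.hstack([met, np.ones((met.shape[0], 1))])   # add intercept
            coef, *_ = np.linalg.lstsq(X, spectra, rcond=None)
            return coef

        def cloud_detected(met_now, spec_now, coef, thresh):
            x = np.append(met_now, 1.0)
            residual = spec_now - x @ coef
            rms = np.sqrt(np.mean(residual ** 2))
            return rms > thresh      # threshold tuned on cloud-free validation data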

  13. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is provided. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem; however, the technique needs adjustment for use with room-temperature cameras. This article describes the adjustment made to the equation, and a test of this method.
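
    One common form of such an adjustment is to acquire a pair of flats and a pair of darks at the same exposure and subtract the dark statistics from both the signal and noise terms of the photon-transfer relation; the sketch below shows that variant and may differ in detail from the article's equation.

        import numpy as np

        def camera_gain(flat1, flat2, dark1, dark2):
            # photon-transfer gain (e-/DN) with the dark-current contribution removed
            signal = flat1.mean() + flat2.mean() - dark1.mean() - dark2.mean()
            # differencing each pair cancels fixed-pattern noise; the factor of two
            # in the variance of a difference cancels between the two terms
            noise = np.var(flat1 - flat2) - np.var(dark1 - dark2)
            return signal / noise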

  14. Reconstruction Of Anatomical Shapes From Moire Contourographs

    NASA Astrophysics Data System (ADS)

    Saunders, Carl G.

    1983-07-01

    A Moire system which rotates an object in front of a slit camera has been used to obtain continuous photographic maps around amputee socket and shoe last shapes. Previous analysis methods required the use of IBM 370 hardware and extensive software overhead. Using a systematic manual digitizing technique and user-interactive FORTRAN software, the shape reconstruction has been easily performed on a PDP-11 minicomputer system. Both the digitizing technique and the software are oriented towards the shape reproduction process. Numerically controlled machining parameters are used to identify a "skewed" grid of required points along the cutter path. Linear interpolation and anti-interference techniques resulted in reproduction of shoe lasts to within 0.05 inches (1.2 millimeters) from the sensing axis. Difficulties were experienced in obtaining information to resolve the ends of the shapes. Current efforts focus on circumferential shape sensing of live subjects and automatic digitization of sensed data.

  16. Path ANalysis

    SciTech Connect

    Snell, Mark K.

    2007-07-14

    The PANL software determines path through an Adversary Sequence Diagram (ASD) with minimum Probability of Interruption, P(I), given the ASD information and data about site detection, delay, and response force times. To accomplish this, the software generates each path through the ASD, then applies the Estimate of Adversary Sequence Interruption (EASI) methodology for calculating P(I) to each path, and keeps track of the path with the lowest P(I). Primary use is for training purposes during courses on physical security design. During such courses PANL will be used to demonstrate to students how more complex software codes are used by the US Department of Energy to determine the most-vulnerable paths and, where security needs improvement, how such codes can help determine physical security upgrades.
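
    The EASI calculation that PANL applies to each path can be sketched in simplified form: sum, over the detection opportunities along the path, the probability of first detection there times the probability that the response force arrives within the remaining adversary delay (modeled here with a normal CDF). The numbers are illustrative, and the full EASI model also propagates delay variances.

        import math

        def p_interrupt(p_detect, delay_after, rft_mean, rft_sd):
            # p_detect[i]: detection probability at opportunity i
            # delay_after[i]: adversary delay remaining after that point (seconds)
            p_undetected = 1.0
            total = 0.0
            for p_d, remaining in zip(p_detect, delay_after):
                z = (remaining - rft_mean) / rft_sd
                p_response = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
                total += p_undetected * p_d * p_response
                p_undetected *= 1.0 - p_d
            return total

        print(p_interrupt([0.9, 0.5, 0.3], [300.0, 120.0, 30.0], 180.0, 30.0))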

  17. Depth Estimation Using a Sliding Camera.

    PubMed

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that can work satisfactorily with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which can take advantage of continuously pose-changing imaging and greatly reduce computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm. PMID:26685238

  19. Unified framework for recognition, localization and mapping using wearable cameras.

    PubMed

    Vázquez-Martín, Ricardo; Bandera, Antonio

    2012-08-01

    Monocular approaches to simultaneous localization and mapping (SLAM) have recently addressed with success the challenging problem of the fast computation of dense reconstructions from a single, moving camera. Thus, if these approaches initially relied on the detection of a reduced set of interest points to estimate the camera position and the map, they are currently able to reconstruct dense maps from a handheld camera while the camera coordinates are simultaneously computed. However, these maps of 3-dimensional points usually remain meaningless, that is, with no memorable items and without providing a way of encoding spatial relationships between objects and paths. In humans and mobile robotics, landmarks play a key role in the internalization of a spatial representation of an environment. They are memorable cues that can serve to define a region of the space or the location of other objects. In a topological representation of the space, landmarks can be identified and located according to its structural, perceptive or semantic significance and distinctiveness. But on the other hand, landmarks may be difficult to be located in a metric representation of the space. Restricted to the domain of visual landmarks, this work describes an approach where the map resulting from a point-based, monocular SLAM is annotated with the semantic information provided by a set of distinguished landmarks. Both features are obtained from the image. Hence, they can be linked by associating to each landmark all those point-based features that are superimposed to the landmark in a given image (key-frame). Visual landmarks will be obtained by means of an object-based, bottom-up attention mechanism, which will extract from the image a set of proto-objects. These proto-objects could not be always associated with natural objects, but they will typically constitute significant parts of these scene objects and can be appropriately annotated with semantic information. Moreover, they will be

  1. Bin mode estimation methods for Compton camera imaging

    NASA Astrophysics Data System (ADS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-10-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods.
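
    For orientation, the accelerated and modified EM algorithms proposed above build on the classic ML-EM update for emission tomography, sketched below for a generic system matrix A and binned counts y; the bin-mode extension itself is not reproduced here.

        import numpy as np

        def mlem(A, y, n_iter=50):
            # multiplicative update: scale by the backprojected measured/predicted ratio
            lam = np.ones(A.shape[1])
            sens = A.sum(axis=0)                   # per-voxel sensitivity
            for _ in range(n_iter):
                pred = A @ lam
                pred[pred == 0.0] = 1e-12          # guard against division by zero
                lam *= (A.T @ (y / pred)) / sens
            return lam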

  2. Action selection for single-camera SLAM.

    PubMed

    Vidal-Calleja, Teresa A; Sanfeliu, Alberto; Andrade-Cetto, Juan

    2010-12-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making ill-conditioned measurements that are appropriate to lack of depth in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionally, the system has been ported to a mobile robotic platform, thus closing the control-estimation loop. To show the viability of the approach, simulations and experiments are presented for the unconstrained motion of a handheld camera and for the motion of a mobile robot with nonholonomic constraints. When combined with a path planner, the technique safely drives to a marked goal while, at the same time, producing an optimal estimated map.
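
    For a linear-Gaussian model, the mutual information between the state and a candidate measurement has the closed form 0.5 * ln(det(H P H^T + R) / det(R)), so action selection of this kind reduces to maximizing that quantity over the candidate measurement Jacobians. A minimal sketch under that assumption:

        import numpy as np

        def info_gain(P, H, R):
            # mutual information between state (covariance P) and measurement z = Hx + v
            S = H @ P @ H.T + R
            return 0.5 * np.log(np.linalg.det(S) / np.linalg.det(R))

        def best_action(P, candidates, R):
            # candidates: dict mapping action name -> measurement Jacobian H
            return max(candidates, key=lambda a: info_gain(P, candidates[a], R))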

  4. A new inclination shallowing correction of the Mauch Chunk Formation of Pennsylvania, based on high-field AIR results: Implications for the Carboniferous North American APW path and Pangea reconstructions

    NASA Astrophysics Data System (ADS)

    Bilardello, Dario; Kodama, Kenneth P.

    2010-10-01

    A new magnetic anisotropy study was performed on samples of the Lower Carboniferous Mauch Chunk Formation of Pennsylvania. These red beds had been sampled for an inclination shallowing study by Tan and Kodama (2002); however, application of a high-field anisotropy of isothermal remanence magnetization (hf-AIR) technique specifically designed to measure the anisotropy of hematite provides considerably different results from those previously reported. The newly measured fabric has smaller anisotropy (~ 9-17% as opposed to ~ 25-40%) and shows a pronounced ENE-WSW magnetic lineation that is sub-parallel to the trend of the Appalachians and interpretable as a hematite intersection lineation that occurred during local NNW-directed shortening. The measured magnetic fabric yields a new inclination correction with a corrected paleopole that is in better agreement with recently corrected Carboniferous paleopoles than the previously corrected Mauch Chunk paleopole, defining a more consistent APW path. The corrected paleopoles allow calculation of new mean Early (~ 325 Ma) and Late (~ 312 Ma) Carboniferous inclination-corrected paleopoles for North America, which can be compared to coeval, but uncorrected, paleopoles from Gondwana. Results suggest a Pangea B assemblage unless inclination shallowing is considered for Gondwana. Estimating an inclination correction for Gondwana sedimentary rock-derived paleopoles permits a Pangea A-type assemblage at higher southern latitudes than previous reconstructions, which we term Pangea A3.
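
    Inclination corrections of this kind conventionally rest on the flattening law tan(I_obs) = f * tan(I_true), with the flattening factor f estimated from the measured remanence anisotropy. A minimal sketch of the final correction step only (the anisotropy-to-f estimation is omitted):

        import math

        def unflatten_inclination(i_obs_deg, f):
            # invert tan(I_obs) = f * tan(I_true)
            return math.degrees(math.atan(math.tan(math.radians(i_obs_deg)) / f))

        # e.g. an observed inclination of 30 degrees with a flattening factor of 0.6
        print(unflatten_inclination(30.0, 0.6))   # ~ 43.9 degrees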

  5. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  6. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  7. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  8. Nanosecond frame cameras

    SciTech Connect

    Frank, A M; Wilkins, P R

    2001-01-05

    The advent of CCD cameras and computerized data recording has spurred the development of several new cameras and techniques for recording nanosecond images. We have made a side-by-side comparison of three nanosecond frame cameras, examining them for both performance and operational characteristics. The cameras include Micro-Channel Plate/CCD, Image Diode/CCD, and Image Diode/Film combinations of gating/data recording. The advantages and disadvantages of each device will be discussed.

  9. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  10. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  11. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  12. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  13. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  14. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, as well as to give some indication of present weather. Similarly, during spring, the camera images show the changes in the ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.

  15. Automated Camera Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
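
    As an illustration of the calibration step that ACAL automates, the sketch below fits a pinhole camera model to known 3D-to-2D point correspondences with OpenCV. The synthetic target, poses, and intrinsics are assumptions made for the example; this is not ACAL's own camera model or target-analysis code.

    ```python
    # Minimal calibration sketch: recover intrinsics from known 3D-to-2D
    # correspondences, here synthesized from a planar fiducial target.
    import numpy as np
    import cv2

    grid = 20.0 * np.array([[x, y, 0] for y in range(5) for x in range(6)],
                           dtype=np.float32)          # 6x5 marks, 20 mm pitch
    K_true = np.array([[800, 0, 512], [0, 800, 384], [0, 0, 1]], np.float32)
    image_size = (1024, 768)

    object_points, image_points = [], []
    rng = np.random.default_rng(0)
    for _ in range(6):                                # six views, varied pose
        rvec = rng.normal(0.0, 0.3, 3).astype(np.float32)
        tvec = (np.array([0, 0, 400]) + rng.normal(0, 10, 3)).astype(np.float32)
        proj, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
        object_points.append(grid)
        image_points.append(proj.reshape(-1, 2).astype(np.float32))

    # Fit the camera model; rms is the reprojection error in pixels.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    print("RMS reprojection error (px):", rms)
    ```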

  16. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.
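
    The published CAMERA implementation is more involved, but the constant-lambda flavor of the objective can be sketched with a generic Nesterov-accelerated first-order method. The Shannon-form entropy and the toy undersampling operator below are illustrative assumptions, not the NMR-specific entropy functional or sampling schedule used by CAMERA.

    ```python
    import numpy as np

    def maxent_reconstruct(A, b, lam=50.0, iters=500, step=1e-3):
        """Minimize J(f) = lam/2 ||A f - b||^2 - S(f), S(f) = -sum f log f,
        f > 0, using FISTA-style (Nesterov) momentum. Generic sketch only."""
        f = np.full(A.shape[1], 1e-3)                 # positive initial estimate
        y, t = f.copy(), 1.0
        for _ in range(iters):
            grad = lam * A.T @ (A @ y - b) + (1.0 + np.log(y))
            f_new = np.clip(y - step * grad, 1e-12, None)   # keep f positive
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = np.clip(f_new + ((t - 1.0) / t_new) * (f_new - f), 1e-12, None)
            f, t = f_new, t_new
        return f

    # Toy nonuniform sampling: observe 40 of 128 spectral points.
    rng = np.random.default_rng(1)
    A = np.eye(128)[rng.choice(128, 40, replace=False)]
    truth = np.zeros(128); truth[[20, 60, 61]] = [1.0, 0.5, 0.8]
    f_hat = maxent_reconstruct(A, A @ truth)
    ```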

  17. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  18. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1984-09-28

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (uv to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  19. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1989-03-21

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras is disclosed. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1,000 keV x-rays. 3 figs.

  20. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  1. 3D astigmatic depth sensing camera

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.; Tyo, J. Scott; Schwiegerling, Jim

    2011-10-01

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images inexpensively and without major modifications to current cameras is uncommon. Our goal is to create a modification to a common commercial camera that allows a three-dimensional reconstruction. We desire such an imaging system to be inexpensive and easy to use. Furthermore, we require that any three-dimensional modification to a camera does not reduce its resolution. Here we present a possible solution to this problem. A commercial digital camera is used with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of the projected pattern, thereby encoding depth. This projector could be integrated into the flash unit of the camera. By carefully choosing a pattern we are able to exploit this differential focus in image processing. Wavelet transforms are performed on the image to pick out the projected pattern. By taking ratios of certain wavelet coefficients we are able to relate the contrast ratios to the distance between the camera and an object at a particular transverse position. We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
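
    The depth cue described above can be sketched in a few lines: compare the horizontal- and vertical-detail wavelet energies within a patch of the captured pattern. The helper below uses PyWavelets and is an illustrative assumption, not the authors' processing chain.

    ```python
    import numpy as np
    import pywt

    def hv_focus_ratio(patch):
        """Ratio of horizontal to vertical detail-band energy in a patch.
        With an astigmatic projector, horizontal and vertical features of the
        pattern blur at different depths, so this ratio varies with distance
        and can be mapped to depth through a calibration curve."""
        _, (cH, cV, _) = pywt.dwt2(np.asarray(patch, dtype=float), 'db2')
        return (np.abs(cH).sum() + 1e-9) / (np.abs(cV).sum() + 1e-9)
    ```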

  2. Calibration of multi-camera photogrammetric systems

    NASA Astrophysics Data System (ADS)

    Detchev, I.; Mazaheri, M.; Rondeel, S.; Habib, A.

    2014-11-01

    Due to the low-cost and off-the-shelf availability of consumer grade cameras, multi-camera photogrammetric systems have become a popular means for 3D reconstruction. These systems can be used in a variety of applications such as infrastructure monitoring, cultural heritage documentation, biomedicine, mobile mapping, as-built architectural surveys, etc. In order to ensure that the required precision is met, a system calibration must be performed prior to the data collection campaign. This system calibration should be performed as efficiently as possible, because it may need to be completed many times. Multi-camera system calibration involves the estimation of the interior orientation parameters of each involved camera and the estimation of the relative orientation parameters among the cameras. This paper first reviews a method for multi-camera system calibration with built-in relative orientation constraints. A system stability analysis algorithm is then presented which can be used to assess different system calibration outcomes. The paper explores the required calibration configuration for a specific system in two situations: major calibration (when both the interior orientation parameters and relative orientation parameters are estimated), and minor calibration (when the interior orientation parameters are known a-priori and only the relative orientation parameters are estimated). In both situations, system calibration results are compared using the system stability analysis methodology.

  3. Analytical multicollimator camera calibration

    USGS Publications Warehouse

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  4. Overview in two parts: Right view showing orchard path on ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Overview in two parts: Right view showing orchard path on left, eucalyptus windbreak bordering knoll on right. Camera facing 278° west. - Goerlitz House, 9893 Highland Avenue, Rancho Cucamonga, San Bernardino County, CA

  5. Polarization encoded color camera.

    PubMed

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared. PMID:24690806

  6. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  7. LSST Camera Optics Design

    SciTech Connect

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  8. Opportunity's Path

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This Long Term Planning graphic was created from a mosaic of navigation camera images overlain by a polar coordinate grid with the center point at Opportunity's original landing site. The blue dots represent the rover's position at various locations.

    The red dots represent the center points of the target areas for the instruments on the rover mast (the panoramic camera and miniature thermal emission spectrometer). Opportunity visited Stone Mountain on Feb. 5. Stone Mountain was named after the southernmost point of the Appalachian Mountains outside of Atlanta, Ga. On Earth, Stone Mountain is the last big mountain before the Piedmont flatlands, and on Mars, Stone Mountain is at one end of Opportunity Ledge. El Capitan is a target of interest on Mars named after the second highest peak in Texas, in Guadalupe Mountains National Park, which is one of the most visited outcrops in the United States by geologists. It has been a training ground for students and professional geologists to understand what the layering means in relation to the formation of Earth, and scientists will study this prominent point of Opportunity Ledge to understand what the layering means on Mars.

    The yellow lines show the midpoint where the panoramic camera has swept and will sweep a 120-degree area from the three waypoints on the tour of the outcrop. Imagine a fan-shaped wedge from left to right of the yellow line.

    The white contour lines are one meter apart, and each drive has been roughly about 2-3 meters in length over the last few sols. The large white blocks are dropouts in the navigation camera data.

    Opportunity is driving along and taking a photographic panorama of the entire outcrop. Scientists will stitch together these images and use the new mosaic as a 'base map' to decide on geology targets of interest for a more detailed study of the outcrop using the instruments on the robotic arm. Once scientists choose their targets of interest, they plan to study the outcrop for roughly five to

  9. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  10. Lightweight, Compact, Long Range Camera Design

    NASA Astrophysics Data System (ADS)

    Shafer, Donald V.

    1983-08-01

    The model 700 camera is the latest in a 30-year series of LOROP cameras developed by McDonnell Douglas Astronautics Company (MDAC) and their predecessor companies. The design achieves minimum size and weight and is optimized for low-contrast performance. The optical system includes a 66-inch focal length, f/5.6, apochromatic lens and three folding mirrors imaging on a 4.5-inch square format. A three-axis active stabilization system provides the capability for long exposure time and, hence, fine grain films can be used. The optical path forms a figure "4" behind the lens. In front of the lens is a 45° pointing mirror. This folded configuration contributed greatly to the lightweight and compact design. This sequential autocycle frame camera has three modes of operation with one, two, and three step positions to provide a choice of swath widths within the range of lateral coverage. The magazine/shutter assembly rotates in relationship with the pointing mirror and aircraft drift angle to maintain film format alignment with the flight path. The entire camera is angular rate stabilized in roll, pitch, and yaw. It also employs a lightweight, electro-magnetically damped, low-natural-frequency spring suspension for passive isolation from aircraft vibration inputs. The combined film transport and forward motion compensation (FMC) mechanism, which is operated by a single motor, is contained in a magazine that can, depending on accessibility which is installation dependent, be changed in flight. The design also stresses thermal control, focus control, structural stiffness, and maintainability. The camera is operated from a remote control panel. This paper describes the leading particulars and features of the camera as related to weight and configuration.

  11. Global Calibration of Multiple Cameras Based on Sphere Targets

    PubMed Central

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for onsite multiple cameras without a common field of view. PMID:26761007

  12. Structured light optical microscopy for three-dimensional reconstruction of technical surfaces

    NASA Astrophysics Data System (ADS)

    Kettel, Johannes; Reinecke, Holger; Müller, Claas

    2016-04-01

    In microsystems technology, quality control of microstructured surfaces with different surface properties is playing an ever more important role. The process of quality control incorporates three-dimensional (3D) reconstruction of specular- and diffusely reflecting technical surfaces. Due to the demand for high measurement accuracy and data acquisition rates, structured light optical microscopy has become a valuable solution to this problem, providing high vertical and lateral resolution. However, 3D reconstruction of specularly reflecting technical surfaces still remains a challenge for optical measurement principles. In this paper we present a measurement principle based on structured light optical microscopy which enables 3D reconstruction of specular- and diffusely reflecting technical surfaces. It is realized using the two light paths of a stereo microscope equipped with different magnification levels. The right optical path of the stereo microscope is used to project structured light onto the object surface. The left optical path is used to capture the structured illuminated object surface with a camera. Structured light patterns are generated by a Digital Light Processing (DLP) device in combination with a high power Light Emitting Diode (LED). Structured light patterns are realized as a matrix of discrete light spots to illuminate defined areas on the object surface. The introduced measurement principle is based on multiple, parallel-processed point measurements. Analysis of the measured Point Spread Function (PSF) by pattern recognition and model fitting algorithms enables the precise calculation of 3D coordinates. Using exemplary technical surfaces we demonstrate the successful application of our measurement principle.
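
    As a sketch of the model-fitting stage, the helper below fits an isotropic 2D Gaussian to a single illuminated spot to recover its sub-pixel center. The Gaussian PSF form is an assumption for illustration; the paper's actual PSF model and pattern-recognition steps are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, a, x0, y0, s, off):
        x, y = xy
        return (a * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * s**2)) + off).ravel()

    def spot_center(patch):
        """Sub-pixel center of one bright spot via 2D Gaussian fitting."""
        h, w = patch.shape
        y, x = np.mgrid[0:h, 0:w]
        p0 = (float(patch.max() - patch.min()), w / 2, h / 2, 2.0, float(patch.min()))
        popt, _ = curve_fit(gauss2d, (x, y), patch.ravel().astype(float), p0=p0)
        return popt[1], popt[2]    # (x0, y0) in pixel coordinates
    ```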

  13. Mechanical Design of the LSST Camera

    SciTech Connect

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; Ku, John; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  14. A method for selecting training samples based on camera response

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
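
    A compact sketch of the selection-plus-reconstruction idea follows. The array shapes, the Euclidean sphere criterion, and the small regularizer are assumptions for illustration; the paper's exact radius choice and data are not reproduced.

    ```python
    import numpy as np

    def select_and_reconstruct(D_train, R_train, d_test, radius):
        """D_train: (n, 3) camera responses of candidate training samples;
        R_train: (n, 31) matching spectral reflectances; d_test: (3,) response
        of the test patch. Select samples inside a sphere around d_test in
        response space, then apply Wiener estimation with the empirical
        covariances of the selected subset."""
        mask = np.linalg.norm(D_train - d_test, axis=1) < radius
        D, R = D_train[mask], R_train[mask]
        W = (R.T @ D) @ np.linalg.inv(D.T @ D + 1e-6 * np.eye(3))
        return W @ d_test            # estimated 31-band reflectance
    ```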

  15. Ice and thermal cameras for stream flow observations

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Petroselli, Andrea; Grimaldi, Salvatore

    2016-04-01

    Flow measurements are instrumental to establish discharge rating curves and to enable flood risk forecast. Further, they are crucial to study erosion dynamics and to comprehend the organization of drainage networks in natural catchments. Flow observations are typically executed with intrusive instrumentation, such as current meters or acoustic devices. Alternatively, non-intrusive instruments, such as radars and microwave sensors, are applied to estimate surface velocity. Both approaches enable flow measurements over areas of limited extent, and their implementation can be costly. Optical methods, such as large scale particle image velocimetry, have proved beneficial for non-intrusive and spatially-distributed environmental monitoring. In this work, a novel optical-based approach is utilized for surface flow velocity observations based on the combined use of a thermal camera and ice dices. Different from RGB imagery, thermal images are relatively unaffected by illumination conditions and water reflections. Therefore, such high-quality images allow to readily identify and track tracers against the background. Further, the optimal environmental compatibility of ice dices and their relative ease of preparation and storage suggest that the technique can be easily implemented to rapidly characterize surface flows. To demonstrate the validity of the approach, we present a set of experiments performed on the Brenta stream, Italy. In the experimental setup, the axis of the camera is maintained perpendicular with respect to the water surface to circumvent image orthorectification through ground reference points. Small amounts of ice dices are deployed onto the stream water surface during image acquisition. Particle tracers' trajectories are reconstructed off-line by analyzing thermal images with a particle tracking velocimetry (PTV) algorithm. Given the optimal visibility of the tracers and their low seeding density, PTV allows for efficiently following tracers' paths in

  16. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  17. The Camera Cook Book.

    ERIC Educational Resources Information Center

    Education Development Center, Inc., Newton, MA.

    Intended for use with the photographic materials available from the Workshop for Learning Things, Inc., this "camera cookbook" describes procedures that have been tried in classrooms and workshops and proven to be the most functional and inexpensive. Explicit starting off instructions--directions for exploring and loading the camera and for taking…

  18. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  19. CCD Luminescence Camera

    NASA Technical Reports Server (NTRS)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronics devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed, where luminescence is typically found.

  20. Stereoscopic liver surface reconstruction

    PubMed Central

    Karwan, Adam; Rudnicki, Jerzy; Wróblewski, Tadeusz

    2012-01-01

    The paper presents a practical approach to measuring liver motion, both respiratory and laparoscopic, with a tool guided in the operating room. The presented method is based on standard operating room equipment, i.e. rigid laparoscopic cameras and a single incision laparoscopic surgery trocar. The triangulation algorithm is used and stereo correspondence points are marked manually by two independent experts. To calibrate the cameras two perpendicular chessboards, a pinhole camera model and a Tsai algorithm are used. The data set consists of twelve real liver surgery video sequences: ten open surgery and two laparoscopic, gathered from different patients. The setup equipment and methodology are presented. The proposed evaluation method based on both calibration points of the chessboard reconstruction and measurements made by the Polaris Vicra tracking system are used as a reference system. In the analysis stage we focused on two specific goals, measuring respiration and laparoscopic tool guided liver motions. We have presented separate examples for left and right liver lobes. It is possible to reconstruct liver motion using the SILS trocar. Our approach was made without additional position or movement sensors. Diffusion of cameras and laser for distance measurement seems to be less practical for in vivo laparoscopic data, but we do not exclude exploring such sensors in further research. PMID:23256023
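
    The triangulation step at the core of the method can be illustrated with OpenCV's linear triangulation. The projection matrices and matched image points below are placeholders, not the paper's calibrated values.

    ```python
    import numpy as np
    import cv2

    # Two 3x4 projection matrices from a prior (e.g. Tsai-style) calibration;
    # identity intrinsics and a 50 mm baseline are assumed here for brevity.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])

    pts1 = np.array([[312.0], [240.0]])   # 2xN expert-marked points, camera 1
    pts2 = np.array([[285.0], [240.0]])   # corresponding points, camera 2

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous
    X = (X_h[:3] / X_h[3]).T                          # Nx3 surface points
    ```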

  1. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

    Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences like computing, mechanics, thermal engineering, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technology. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  2. A new inclination shallowing correction of the Mauch Chunk Formation of Pennsylvania, based on high field-AIR results: Implications for the Carboniferous North American APW path and Pangea reconstructions

    NASA Astrophysics Data System (ADS)

    Bilardello, D.; Kodama, K. P.

    2010-12-01

    A new rock-magnetic study was performed on samples of the Lower Carboniferous Mauch Chunk Formation of Pennsylvania. These red beds had been sampled for an inclination shallowing study by Tan and Kodama (2002). High anisotropy values led Kodama (2009) to suspect that the Formation had been affected by strain. However, more detailed rock-magnetic measurements also show that both magnetite and hematite contribute to the remanence, leading to the application of a high field anisotropy of isothermal remanence magnetization (hf-AIR) technique specifically designed to isolate the anisotropy of the hematite, the characteristic remanence carrier. The newly measured fabric has a smaller anisotropy than Kodama (2009) observed (~9-17% as opposed to ~25-40%) and shows a pronounced ENE-WSW magnetic lineation that is sub-parallel to the trend of the Appalachians and interpretable as a hematite intersection lineation that occurred during local NNW-directed shortening. Results also yield a much different AIR/anisotropy of magnetic susceptibility (AMS) relationship than previously reported. We attribute the differences in the AIR/AMS relationship to varying concentrations of magnetite. Because the AIR/AMS relationship has been used to constrain the individual particle anisotropy, we suggest this approach to determining grain anisotropy is invalid, at least until the AIR/AMS relationship for single-domain hematite only is measured. The measured magnetic fabric yields a new inclination correction with a corrected paleopole that is in better agreement with recently corrected Carboniferous paleopoles than the previously corrected Mauch Chunk paleopole, defining a more consistent APW path. The corrected paleopoles allow calculation of new mean Early (~325 Ma) and Late (~312 Ma) Carboniferous inclination-corrected paleopoles for North America, which can be compared to coeval, but uncorrected, paleopoles from Gondwana. Results suggest a Pangea B assemblage unless Gondwanan sedimentary
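
    For reference, inclination shallowing corrections of this kind rest on the classic flattening relation tan(I_obs) = f · tan(I_field), with the flattening factor f estimated from the remanence anisotropy. The one-liner below applies that textbook relation; it is not the paper's full hf-AIR workflow.

    ```python
    import numpy as np

    def unflatten_inclination(i_obs_deg, f):
        """Correct a flattened inclination: tan(I_obs) = f * tan(I_field),
        so I_field = arctan(tan(I_obs) / f). f = 1 means no shallowing."""
        return np.degrees(np.arctan(np.tan(np.radians(i_obs_deg)) / f))

    print(unflatten_inclination(30.0, 0.6))   # ~43.9 degrees
    ```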

  3. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  4. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  5. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software, one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world co-ordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
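
    The pedestrian-detection modality can be sketched with OpenCV's stock HOG-plus-linear-SVM people detector. The detector weights, stride, and video source below are assumptions; the deployed system's own trained models, tracking, and geometric calibration are not public.

    ```python
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture("crossing.mp4")        # placeholder camera stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect pedestrians; boxes are (x, y, w, h) rectangles in pixels.
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```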

  6. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images. PMID:9608471
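
    A simplified flavor of this task is voxel thinning of the segmented lumen to a centerline, as sketched below with scikit-image. Note that the paper's contribution is precisely a faster alternative to full morphological thinning, so this sketch shows the baseline idea rather than the authors' algorithm.

    ```python
    import numpy as np
    from skimage.morphology import skeletonize_3d

    # Toy tubular segmentation standing in for a colon/airway lumen mask.
    lumen = np.zeros((64, 64, 64), dtype=bool)
    lumen[10:54, 30:34, 30:34] = True

    skeleton = skeletonize_3d(lumen)          # voxel-wide centerline
    path_voxels = np.argwhere(skeleton)       # candidate camera positions
    # Order and smooth these voxels (e.g. by arc length) before flying the
    # virtual camera along them.
    ```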

  7. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  8. Gamma camera purchasing.

    PubMed

    Wells, C P; Buxton-Thomas, M

    1995-03-01

    The purchase of a new gamma camera is a major undertaking and represents a long-term commitment for most nuclear medicine departments. The purpose of tendering for gamma cameras is to assess the best match between the requirements of the clinical department and the equipment available and not necessarily to buy the 'best camera' [1-3]. After many years of drawing up tender specifications, this paper tries to outline some of the traps and pitfalls of this potentially perilous, although largely rewarding, exercise. PMID:7770241

  9. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  10. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  11. Advanced CCD camera developments

    SciTech Connect

    Condor, A.

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  12. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  13. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)

  14. Miniature TV Camera

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Originally devised to observe Saturn stage separation during Apollo flights, Marshall Space Flight Center's Miniature Television Camera, measuring only 4 x 3 x 1 1/2 inches, quickly made its way to the commercial telecommunications market.

  15. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera
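
    The MART step named above multiplies each voxel along a ray by the ratio of measured to projected intensity, raised to a weighted relaxation exponent. The dense-weight sketch below is illustrative only; practical plenoptic solvers use sparse ray-voxel weights.

    ```python
    import numpy as np

    def mart(W, g, iters=5, mu=1.0):
        """W: (n_rays, n_vox) ray-voxel weights, g: (n_rays,) measured pixel
        intensities. Classic multiplicative update for each ray i:
        f_j <- f_j * (g_i / (w_i . f)) ** (mu * w_ij)."""
        f = np.ones(W.shape[1])                # positive initial volume
        for _ in range(iters):
            for i in range(W.shape[0]):
                proj = W[i] @ f
                if proj > 0.0 and g[i] > 0.0:  # zero rays skipped for brevity
                    f *= (g[i] / proj) ** (mu * W[i])
        return f
    ```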

  16. Gamma ray camera

    SciTech Connect

    Robbins, C.D.; Wang, S.

    1980-09-09

    An Anger gamma-ray camera is improved by the substitution of a gamma-ray-sensitive, proximity-type image intensifier tube for the scintillator screen in the Anger camera, the image intensifier tube having a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded flat output phosphor display screen, all of the same dimension (unity image magnification) and all within a grounded metallic tube envelope, and having a metallic, inwardly concaved input window between the scintillator screen and the collimator.

  17. Mapping the Sun's Path with a Pinhole Camera.

    ERIC Educational Resources Information Center

    Roberts, James A.

    1995-01-01

    Presents an experiment to demonstrate the diurnal and seasonal motions of the sun, which can be used in different grade levels, depending on the degree of difficulty required for the analysis, as an effort to generate student interest in the scientific method. Includes an activity to teach students elementary concepts of the sun's apparent motion…

  18. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  19. 9. VIEW OF CAMERA STATIONS UNDER CONSTRUCTION INCLUDING CAMERA CAR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. VIEW OF CAMERA STATIONS UNDER CONSTRUCTION INCLUDING CAMERA CAR ON RAILROAD TRACK AND FIXED CAMERA STATION 1400 (BUILDING NO. 42021) ABOVE, ADJACENT TO STATE HIGHWAY 39, LOOKING WEST, March 23, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  20. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. A low-volume array of such penetrator cameras could be deployed from an

  1. Multiple views merging from different cameras in fringe-projection based phase-shifting method

    NASA Astrophysics Data System (ADS)

    Hu, Qingying; Harding, Kevin; Hamilton, Don; Flint, Jay

    2007-09-01

    This paper discusses issues related to accurate measurement using multiple cameras with phase-shifting techniques. Phase-shifting methods have been widely used in industrial inspections due to high accuracy and excellent tolerance to surface finish. But so far, most such systems use only one camera. In our applications to inspect manufactured parts with complex shapes, one camera cannot capture the whole surface because of occlusions, double-bounced light, and the limited dynamic range of cameras. Multiple cameras have to be used and the data from different cameras must be merged together. Because different cameras have individual error sources when a part is to be measured, it is a challenge to obtain the same shape in the same 3D coordinate system from all cameras without data manipulation such as iterative registration. This paper addresses this challenge of data registration. The error sources are analyzed and demonstrated, and several paths for error reduction are presented. Experimental results show the significant improvement obtained.
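
    For reference, the per-camera phase recovery that precedes the merge can be written with the standard four-step formula; the fringe images below are assumed to be shifted by 90° each, which is one common choice rather than a detail taken from the paper.

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images I_k = a + b*cos(phi + k*pi/2):
        I4 - I2 = 2b*sin(phi) and I1 - I3 = 2b*cos(phi)."""
        return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)
    ```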

  2. THE DARK ENERGY CAMERA

    SciTech Connect

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J.; Honscheid, K.; Abbott, T. M. C.; Bonati, M.; Antonik, M.; Brooks, D.; Ballester, O.; Cardiel-Sas, L.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Boprie, D.; Campa, J.; Castander, F. J.; Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  3. The CAMCAO infrared camera

    NASA Astrophysics Data System (ADS)

    Amorim, Antonio; Melo, Antonio; Alves, Joao; Rebordao, Jose; Pinhao, Jose; Bonfait, Gregoire; Lima, Jorge; Barros, Rui; Fernandes, Rui; Catarino, Isabel; Carvalho, Marta; Marques, Rui; Poncet, Jean-Marc; Duarte Santos, Filipe; Finger, Gert; Hubin, Norbert; Huster, Gotthard; Koch, Franz; Lizon, Jean-Louis; Marchetti, Enrico

    2004-09-01

    The CAMCAO instrument is a high resolution near infrared (NIR) camera conceived to operate together with the new ESO Multi-conjugate Adaptive optics Demonstrator (MAD) with the goal of evaluating the feasibility of Multi-Conjugate Adaptive Optics techniques (MCAO) on the sky. It is a high-resolution wide field of view (FoV) camera that is optimized to use the extended correction of the atmospheric turbulence provided by MCAO. While the first purpose of this camera is sky observation, in the MAD setup, to validate the MCAO technology, in a second phase the CAMCAO camera is planned to be attached directly to the VLT for scientific astrophysical studies. The camera is based on the 2k x 2k HAWAII2 infrared detector controlled by an ESO external IRACE system and includes standard IR band filters mounted on a positional filter wheel. The CAMCAO design requires that the optical components and the IR detector be kept at low temperatures in order to avoid emitted radiation and to lower detector noise in the analysis region. The cryogenic system includes an LN2 tank and a specially developed pulse tube cryocooler. Field and pupil cold stops are implemented to reduce the infrared background and stray light. The CAMCAO optics provide diffraction limited performance down to the J band, but the detector sampling fulfills the Nyquist criterion for the K band (2.2 μm).

  4. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems. PMID:27410361

  5. The Dark Energy Camera

    SciTech Connect

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  7. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).
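
    The core of such a fit is a linear least-squares solve of the linearized orbit/attitude model against the measurements; the patent's "walking" variant repeats the solve over a sliding window of recent data. A minimal sketch with synthetic data; all names are illustrative.

      import numpy as np

      def fit_model(A, b):
          # A: one row per star/landmark/range measurement,
          # one column per model coefficient (e.g., Keplerian element at epoch)
          k, *_ = np.linalg.lstsq(A, b, rcond=None)
          return k

      rng = np.random.default_rng(0)
      A = rng.normal(size=(100, 6))                    # 100 measurements, 6 coefficients
      k_true = np.array([1.0, -0.5, 0.2, 0.0, 3.0, -1.0])
      b = A @ k_true + 0.01 * rng.normal(size=100)     # noisy observations
      print(fit_model(A, b))                           # close to k_true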

  8. The Dark Energy Camera

    NASA Astrophysics Data System (ADS)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  9. Camera calibration using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hui, Nirmal Baran; Pratihar, Dilip Kumar

    2008-12-01

    An autonomous robot will have to detect moving obstacles online before it can plan its collision-free path, while navigating in a dynamic environment. The robot collects information about the environment with the help of a camera and determines the inputs for its motion planner through image analysis. The present article deals with issues related to camera calibration and online image processing. The problem of camera calibration is treated as an optimization problem and solved using a genetic algorithm so as to achieve minimum distorted image plane error. The calibrated vision system is then utilized for the detection and identification of the objects by analysing the images collected at regular intervals. For image processing, five different operations, such as median filtering, thresholding, perimeter estimation, labelling and size filtering, have been carried out. To show the effectiveness of the developed camera-based vision system, inputs of the motion planner of a navigating robot are calculated for two different cases. It is observed that online detection of the shapes and configurations of the obstacles is possible by using the vision system developed.
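
    A hedged sketch of the five image-processing operations named above, using OpenCV in Python; the kernel size, threshold choice, and minimum blob area are illustrative, not taken from the paper.

      import cv2
      import numpy as np

      def detect_obstacles(gray, min_area=50):
          smoothed = cv2.medianBlur(gray, 5)                       # median filtering
          _, binary = cv2.threshold(smoothed, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # thresholding
          n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)  # labelling
          objects = []
          for i in range(1, n):                                    # label 0 is background
              if stats[i, cv2.CC_STAT_AREA] < min_area:            # size filtering
                  continue
              mask = np.uint8(labels == i) * 255
              contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                             cv2.CHAIN_APPROX_NONE)
              perimeter = cv2.arcLength(contours[0], True)         # perimeter estimation
              objects.append((tuple(centroids[i]),
                              int(stats[i, cv2.CC_STAT_AREA]), perimeter))
          return objects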

  10. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
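
    The quoted figures follow from two standard relations: light travels about 6 cm in 200 ps, and an FMCW sweep of bandwidth B resolves range to c/(2B), the factor of two accounting for the round trip. A quick check:

      C = 299_792_458.0                     # speed of light, m/s

      dt = 200e-12                          # 200 ps time resolution
      print(f"path in 200 ps: {C * dt * 100:.1f} cm")                     # ~6.0 cm

      bandwidth = 12e9 - 8e9                # X-band sweep, 8-12 GHz
      print(f"FMCW range resolution: {C / (2 * bandwidth) * 100:.2f} cm") # ~3.75 cm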

  11. Synchronizing A Television Camera With An External Reference

    NASA Technical Reports Server (NTRS)

    Rentsch, Edward M.

    1993-01-01

    Improvement in genlock subsystem consists in incorporation of controllable delay circuit into path of composite synchronization signal obtained from external video source. Delay circuit helps to eliminate potential jitter in video display and ensures that setup requirements for the digital timing circuits of the video camera are satisfied.

  12. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, circuit description and changes to such circuits as a result of integration and test, maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  13. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is possible.

  14. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  15. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  16. GROT in NICMOS Cameras

    NASA Astrophysics Data System (ADS)

    Sosey, M.; Bergeron, E.

    1999-09-01

    Grot is exhibited as small areas of reduced sensitivity, most likely due to flecks of antireflective paint scraped off the optical baffles as they were forced against each other. This paper characterizes grot associated with all three cameras. Flat field images taken from March 1997 through January 1999 have been investigated for changes in the grot, including possible wavelength dependency and throughput characteristics. The main products of this analysis are grot masks for each of the cameras which may also contain any new cold or dead pixels not specified in the data quality arrays.

  17. Wide angle pinhole camera

    NASA Technical Reports Server (NTRS)

    Franke, J. M.

    1978-01-01

    Hemispherical refracting element gives pinhole camera 180 degree field-of-view without compromising its simplicity and depth-of-field. Refracting element, located just behind pinhole, bends light coming in from sides so that it falls within image area of film. In contrast to earlier pinhole cameras that used water or other transparent fluids to widen field, this model is not subject to leakage and is easily loaded and unloaded with film. Moreover, by selecting glass with different indices of refraction, field at film plane can be widened or reduced.

  18. Artificial human vision camera

    NASA Astrophysics Data System (ADS)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human vision system. Our purpose is to inspire from human vision bio-mechanics to improve robotic capabilities for tasks such as objects detection and tracking. This work describes first the bio-mechanical discrepancies between human vision and classic cameras and the retinal processing stage that takes place in the eye, before the optic nerve. The second part describes our implementation of these principles on a 3-camera optical, mechanical and software model of the human eyes and associated bio-inspired attention model.

  19. Aquatic Debris Detection Using Embedded Camera Sensors

    PubMed Central

    Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu

    2015-01-01

    Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is specifically designed based on compressive sensing theory to give full consideration to the unique challenges in aquatic environments, such as waves, swaying reflections, and tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. Besides, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741

  20. Calibration of Low Cost RGB and NIR Uav Cameras

    NASA Astrophysics Data System (ADS)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by unstable and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. Calibration research was conducted using a non-metric camera, different calibration tests and various software. The first part of the paper contains a brief theoretical introduction, including basic definitions such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper covers the camera calibration process and details of the calibration methods and models that have been used. Sony Nex 5 camera calibration has been done using the following software: Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been done.

  1. Camera calibration correction in shape from inconsistent silhouette

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  2. Spas color camera

    NASA Technical Reports Server (NTRS)

    Toffales, C.

    1983-01-01

    The procedures to be followed in assessing the performance of the MOS color camera are defined. Aspects considered include: horizontal and vertical resolution; value of the video signal; gray scale rendition; environmental (vibration and temperature) tests; signal to noise ratios; and white balance correction.

  3. The LSST Camera Overview

    SciTech Connect

    Gilmore, Kirk; Kahn, Steven A.; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe; /SLAC

    2007-01-10

    The LSST camera is a wide-field optical (0.35-1 μm) imager designed to provide a 3.5 degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 Gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast, f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately -100 C to achieve desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.

  4. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  5. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. ?? 1984.

  6. Communities, Cameras, and Conservation

    ERIC Educational Resources Information Center

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  7. Anger Camera Firmware

    2010-11-19

    The firmware is responsible for the operation of the Anger Camera Electronics, calculation of position, time of flight and digital communications. It provides a first-stage analysis of 48 analog signals that have been converted to digital values using A/D converters.

  8. Make a Pinhole Camera

    ERIC Educational Resources Information Center

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  9. Advanced Virgo phase cameras

    NASA Astrophysics Data System (ADS)

    van der Schaaf, L.; Agatsuma, K.; van Beuzekom, M.; Gebyehu, M.; van den Brand, J.

    2016-05-01

    A century after the prediction of gravitational waves, detectors have reached the sensitivity needed to prove their existence. One of them, the Virgo interferometer in Pisa, is presently being upgraded to Advanced Virgo (AdV) and will come into operation in 2016. The power stored in the interferometer arms rises from 20 to 700 kW. This increase is expected to introduce higher order modes in the beam, which could reduce the circulating power in the interferometer, limiting the sensitivity of the instrument. To suppress these higher-order modes, the core optics of Advanced Virgo is equipped with a thermal compensation system. Phase cameras, monitoring the real-time status of the beam, constitute a critical component of this compensation system. These cameras measure the phases and amplitudes of the laser-light fields at the frequencies selected to control the interferometer. The measurement combines heterodyne detection with a scan of the wave front over a photodetector with pin-hole aperture. Three cameras observe the phase front of these laser sidebands. Two of them monitor the input and output of the interferometer arms and the third one is used in the control of the aberrations introduced by the power recycling cavity. In this paper the working principle of the phase cameras is explained and some characteristic parameters are described.

  10. Imaging phoswich anger camera

    NASA Astrophysics Data System (ADS)

    Manchanda, R. K.; Sood, R. K.

    1991-08-01

    High angular resolution and low background are the primary requisites for detectors for future astronomy experiments in the low energy gamma-ray region. Scintillation counters are still the only available large area detector for studies in this energy range. Preliminary details of a large area phoswich Anger camera designed for coded aperture imaging are described and its background and position characteristics are discussed.

  11. Millisecond readout CCD camera

    NASA Astrophysics Data System (ADS)

    Prokop, Mark; McCurnin, Thomas W.; Stradling, Gary L.

    1993-01-01

    We have developed a prototype of a fast-scanning CCD readout system to record a 1024 X 256 pixel image and transport the image to a recording station within 1 ms of the experimental event. The system is designed to have a dynamic range of greater than 1000 with adequate sensitivity to read single-electron excitations of a CRT phosphor when amplified by a microchannel plate image intensifier. This readout camera is intended for recording images from oscilloscopes, streak, and framing cameras. The sensor is a custom CCD chip, designed by LORAL Aeroneutronics. This CCD chip is designed with 16 parallel output ports to supply the necessary image transfer speed. The CCD is designed as an interline structure to allow fast clearing of the image and on-chip fast shuttering. Special antiblooming provisions are also included. The camera is designed to be modular and to allow CCD chips of other sizes to be used with minimal reengineering of the camera head.

  13. A stereo camera system for autonomous maritime navigation (AMN) vehicles

    NASA Astrophysics Data System (ADS)

    Zhang, Weihong; Zhuang, Ping; Elkins, Les; Simon, Rick; Gore, David; Cogar, Jeff; Hildebrand, Kevin; Crawford, Steve; Fuller, Joe

    2009-05-01

    Spatial Integrated System (SIS), Rockville, Maryland, in collaboration with NSWC Combatant Craft Division (NSWCCD), is applying 3D imaging technology, artificial intelligence, sensor fusion, behaviors-based control, and system integration to a prototype 40 foot, high performance Research and Development Unmanned Surface Vehicle (USV). This paper focuses on the development of the stereo camera system for USV navigation, which currently consists of two high-resolution cameras and will incorporate an array of cameras in the near future. The objectives of the camera system are to reconstruct 3D objects and detect them on the sea surface. The paper reviews two critical technological components, namely camera calibration and stereo matching. In stereo matching, a comprehensive study is presented to compare the algorithmic performance resulting from the various information sources (intensity, RGB values, Gaussian gradients and Gaussian Laplacians), patching schemas (single windows, and multiple windows with same/different centers), and correlation metrics (convolution, absolute difference, and histogram). To enhance system performance, a sub-pixel edge detection technique has been introduced to address the precision requirement and a noise removal post-processing step added to eliminate noisy points from the reconstructed 3D point clouds. Finally, experimental results are reported to demonstrate the performance of the stereo camera system.
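
    A minimal sketch of SAD block matching on a rectified grayscale stereo pair, the absolute-difference correlation metric compared in the study; the window size and disparity search range are illustrative.

      import numpy as np

      def sad_disparity(left, right, max_disp=64, win=5):
          h, w = left.shape
          half = win // 2
          disp = np.zeros((h, w), dtype=np.float32)
          for y in range(half, h - half):
              for x in range(half, w - half):
                  patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
                  best_cost, best_d = None, 0
                  for d in range(min(max_disp, x - half) + 1):
                      cand = right[y - half:y + half + 1,
                                   x - d - half:x - d + half + 1].astype(np.int32)
                      cost = np.abs(patch - cand).sum()     # sum of absolute differences
                      if best_cost is None or cost < best_cost:
                          best_cost, best_d = cost, d
                  disp[y, x] = best_d
          return disp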

  14. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo-problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes: the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model based methods, which define the relation of the virtual depth, which has been estimated based on the light-field image, and the metric object distance. These two methods are compared to a well known curve fitting approach. Both model based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced.
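
    For a scalar depth state with no process noise, the "Kalman-like" per-pixel update described above reduces to inverse-variance weighting of successive virtual-depth observations. A minimal sketch; the numbers are illustrative.

      def update_depth(est, var, z, r):
          """Fuse a new observation z (variance r) into the estimate (est, var)."""
          gain = var / (var + r)              # scalar Kalman gain
          return est + gain * (z - est), (1.0 - gain) * var

      est, var = 2.0, 1.0                     # initial virtual depth and variance
      for z, r in [(2.3, 0.5), (2.1, 0.8), (2.2, 0.4)]:
          est, var = update_depth(est, var, z, r)
      print(est, var)                         # variance shrinks with each update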

  15. Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera

    NASA Astrophysics Data System (ADS)

    Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi

    2015-12-01

    This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.
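
    Propagating the converted complex amplitude to the CGH plane is commonly done with the angular spectrum method; a minimal numpy sketch follows. The wavelength, sampling pitch, and distance are illustrative, and the conversion from the captured light field to the input complex amplitude is assumed to have been done already.

      import numpy as np

      def angular_spectrum(u0, wavelength, pitch, z):
          n, m = u0.shape
          fx = np.fft.fftfreq(m, d=pitch)
          fy = np.fft.fftfreq(n, d=pitch)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 / wavelength**2 - FX**2 - FY**2
          prop = arg > 0
          kz = 2 * np.pi * np.sqrt(np.where(prop, arg, 0.0))
          H = np.where(prop, np.exp(1j * kz * z), 0.0)   # drop evanescent components
          return np.fft.ifft2(np.fft.fft2(u0) * H)

      u0 = np.ones((256, 256), dtype=complex)            # placeholder input field
      field = angular_spectrum(u0, 633e-9, 8e-6, 0.05)   # propagate 5 cm
      hologram = np.angle(field)                         # e.g., a phase-only CGH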

  16. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  17. Photometric Lunar Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Nefian, Ara V.; Alexandrov, Oleg; Moratto, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.

  18. Realistic camera noise modeling with application to improved HDR synthesis

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Aelterman, Jan; Pižurica, Aleksandra; Philips, Wilfried

    2012-12-01

    Due to the ongoing miniaturization of digital camera sensors and the steady increase of the "number of megapixels", individual sensor elements of the camera become more sensitive to noise, deteriorating the final image quality. To work around this problem, sophisticated processing algorithms in the devices can help to maximally exploit the knowledge on the sensor characteristics (e.g., in terms of noise), and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise, only limited attention has gone to more realistic digital camera noise models. In this article, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping and post-processing. We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR image. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE sense (or SNR sense) of the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be directly computed based on the noise model parameters, which gives rise to different symmetric and asymmetric shapes when electronic noise or photon noise is dominant. We also explain how to deal with the case that some of the noise model parameters are unknown and explain how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
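
    A hedged sketch of HDR synthesis as a weighted average in the irradiance domain; the hat-shaped weighting function below is a common stand-in, not the noise-model-derived weight the article proposes.

      import numpy as np

      def merge_hdr(ldr_images, exposure_times):
          """ldr_images: 8-bit arrays assumed already linearized (no camera response)."""
          acc, wsum = None, None
          for img, t in zip(ldr_images, exposure_times):
              x = img.astype(np.float64) / 255.0
              w = 1.0 - np.abs(2.0 * x - 1.0)     # low weight near under/over-exposure
              e = x / t                           # per-exposure irradiance estimate
              acc = w * e if acc is None else acc + w * e
              wsum = w if wsum is None else wsum + w
          return acc / np.maximum(wsum, 1e-8)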

  19. Contribution to the standardization of 3D measurements using a high-resolution PMD camera

    NASA Astrophysics Data System (ADS)

    Lietz, Henrik; Eberhardt, Jörg

    2015-09-01

    Three-dimensional image acquisition is still a growing field in optical metrology. Various methods are available to reconstruct an object's three-dimensional surface. The five main types of 3D cameras are stereo cameras, triangulation (pattern or laser scanning), interferometry, light-field cameras and ToF (time-of-flight) cameras. PMD (photonic mixing device) cameras measure the time of flight of light, and thus belong to the field of ToF cameras. Each camera type has fields of application for which it is particularly well suited. Even within PMD cameras, there is a distinction made between applications for indoor and outdoor use. To date, there is no uniform method to measure and characterize 3D cameras; desirable would be a method able to measure all types of cameras equally. With this work, we want to contribute to the standardization of 3D cameras. In this case, we use a PMD camera for outdoor applications with relatively large pixels. It is shown how to determine the spatial resolution of a PMD camera from both the amplitude and the distance image. Further, a novel method is presented for determining the resolution enhancement in an image via gradient image evaluation. Finally, a method is proposed which evaluates the quality of resolution enhancement when no ground truth data is available. Both are particularly interesting for the use of super-resolution (SR) applications.

  20. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on the Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes as the base line between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec and the measurements from panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape and immersive video recorded in a short period of time is a candidate for economical and flexible measurements off-site. The second approach is a generation of 3d video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3d objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3d modelling and provides promising features for mobile mapping systems.

  1. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, a rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software program is composed in VC++ on VS 2010. Experimental results show that the system can realize acquisition and display for both cameras.
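
    A hedged sketch of the coordinate transformation and sub-pixel interpolation step, assuming the retina-like sensor samples on a log-polar grid of rings and spokes (a common retina-like model; the camera's actual pixel layout may differ). Bilinear interpolation supplies the sub-pixel values.

      import numpy as np

      def logpolar_to_cartesian(rings, out_size, r_min, r_max):
          # rings[i, j]: sample on ring i, spoke j (hypothetical layout)
          n_r, n_t = rings.shape
          c = out_size / 2.0
          ys, xs = np.mgrid[0:out_size, 0:out_size]
          r = np.hypot(xs - c, ys - c)
          theta = np.mod(np.arctan2(ys - c, xs - c), 2 * np.pi)
          valid = (r >= r_min) & (r <= r_max)
          u = np.log(np.where(valid, r, r_min) / r_min) / np.log(r_max / r_min) * (n_r - 1)
          v = theta / (2 * np.pi) * n_t                    # fractional spoke index
          u0 = np.clip(np.floor(u).astype(int), 0, n_r - 2)
          fu = u - u0
          v0 = np.floor(v).astype(int) % n_t
          v1 = (v0 + 1) % n_t                              # spokes wrap around
          fv = v - np.floor(v)
          # bilinear (sub-pixel) interpolation between neighbouring samples
          out = ((1 - fu) * (1 - fv) * rings[u0, v0] + fu * (1 - fv) * rings[u0 + 1, v0]
                 + (1 - fu) * fv * rings[u0, v1] + fu * fv * rings[u0 + 1, v1])
          out[~valid] = 0.0
          return out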

  2. Epipolar rectification method for a stereovision system with telecentric cameras

    NASA Astrophysics Data System (ADS)

    Liu, Haibo; Zhu, Zhaokun; Yao, Linshen; Dong, Jin; Chen, Shengyi; Zhang, Xiaohu; Shang, Yang

    2016-08-01

    3D metrology of a stereovision system requires epipolar rectification to be performed before dense stereo matching. In this study, we propose an epipolar rectification method for a stereovision system with two telecentric lens-based cameras. Given the orthographic projection matrices of each camera, the new projection matrices are computed by determining the new camera coordinate system in affine space and imposing some constraints on the intrinsic parameters. Then, the transformation that maps the old image planes onto the new image planes is achieved. Experiments are performed to validate the performance of the proposed rectification method. The test results show that the perpendicular distance and 3D reconstructed deviation obtained from the rectified images are not significantly higher than the corresponding values obtained from the original images. Considering the roughness of the extracted corner points and calibrated camera parameters, we can conclude that the proposed method can provide sufficiently accurate rectification results.

  3. Face liveness detection using a light field camera.

    PubMed

    Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun

    2014-01-01

    A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending spoofing face attacks, like printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from the conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% accuracy or up to 99.36% accuracy under different types of spoofing attacks. PMID:25436651

  4. Face Liveness Detection Using a Light Field Camera

    PubMed Central

    Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun

    2014-01-01

    A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending spoofing face attacks, like printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from the conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% accuracy or up to 99.36% accuracy under different types of spoofing attacks. PMID:25436651

  5. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  6. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  7. Combustion pinhole camera system

    DOEpatents

    Witte, A.B.

    1984-02-21

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor. 2 figs.

  8. Combustion pinhole camera system

    DOEpatents

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  9. LSST Camera Optics

    SciTech Connect

    Olivier, S S; Seppala, L; Gilmore, K; Hale, L; Whistler, W

    2006-06-05

    The Large Synoptic Survey Telescope (LSST) is a unique, three-mirror, modified Paul-Baker design with an 8.4m primary, a 3.4m secondary, and a 5.0m tertiary feeding a camera system that includes corrector optics to produce a 3.5 degree field of view with excellent image quality (<0.3 arcsecond 80% encircled diffracted energy) over the entire field from blue to near infra-red wavelengths. We describe the design of the LSST camera optics, consisting of three refractive lenses with diameters of 1.6m, 1.0m and 0.7m, along with a set of interchangeable, broad-band, interference filters with diameters of 0.75m. We also describe current plans for fabricating, coating, mounting and testing these lenses and filters.

  10. NSTX Tangential Divertor Camera

    SciTech Connect

    A.L. Roquemore; Ted Biewer; D. Johnson; S.J. Zweben; Nobuhiro Nishino; V.A. Soukhanovskii

    2004-07-16

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R ~ 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulent codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

  11. Hemispherical Laue camera

    DOEpatents

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of sphere of a hemispherical, X-radiation sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  12. Gamma ray camera

    DOEpatents

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  13. Gamma ray camera

    DOEpatents

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  14. Orbiter Camera Payload System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high resolution large format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates of from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent and exposure times of from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full format low contrast resolution of 88 lines per millimeter (AWAR), full format distortion of less than 14 microns and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded is 944 pounds and power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.

  15. Spaces of paths and the path topology

    NASA Astrophysics Data System (ADS)

    Low, Robert J.

    2016-09-01

    The natural topology on the space of causal paths of a space-time depends on the topology chosen on the space-time itself. Here we consider the effect of using the path topology on space-time instead of the manifold topology, and its consequences for how properties of space-time are reflected in the structure of the space of causal paths.

  16. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process and obtain higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can reach the requirement of robot binocular stereo vision.
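
    A minimal sketch of this calibration flow using the OpenCV Python bindings, with a checkerboard of 48 inner corners (assumed 8 x 6 here); the file paths and square size are illustrative.

      import glob
      import cv2
      import numpy as np

      pattern = (8, 6)                                   # 48 inner corners
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0  # mm

      obj_pts, img_pts, img_size = [], [], None
      for path in glob.glob("calib/*.png"):              # illustrative path
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if not found:
              continue
          corners = cv2.cornerSubPix(
              gray, corners, (11, 11), (-1, -1),
              (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
          obj_pts.append(objp)
          img_pts.append(corners)
          img_size = gray.shape[::-1]

      # returns the RMS reprojection error, the camera matrix, and the radial
      # and decentering (tangential) distortion coefficients [k1 k2 p1 p2 k3]
      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_pts, img_pts, img_size, None, None)
      print(rms, K, dist)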

  17. Lights, Camera, Reflection!

    ERIC Educational Resources Information Center

    Mourlam, Daniel

    2013-01-01

    There are many ways to critique teaching, but few are more effective than video. Personal reflection through the use of video allows one to see what really happens in the classrooms--good and bad--and provides a visual path forward for improvement, whether it be in one's teaching, work with a particular student, or learning environment. This…

  18. Multiple-plane particle image velocimetry using a light-field camera.

    PubMed

    Skupsch, Christoph; Brücker, Christoph

    2013-01-28

    Planar velocity fields in flows are determined simultaneously on parallel measurement planes by means of an in-house manufactured light-field camera. The planes are defined by illuminating light sheets with constant spacing. Particle positions are reconstructed from a single 2D recording taken by a CMOS-camera equipped with a high-quality doublet lens array. The fast refocusing algorithm is based on synthetic-aperture particle image velocimetry (SAPIV). The reconstruction quality is tested via ray-tracing of synthetically generated particle fields. The introduced single-camera SAPIV is applied to a convective flow within a measurement volume of 30 x 30 x 50 mm³.

  19. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
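
    The step from the stereo disparity image to the 3D point map and the 2D depth map can be sketched as follows; the focal length, baseline, and grid parameters are illustrative, not taken from the paper.

      import numpy as np

      def disparity_to_points(disp, f_px, baseline_m, cx, cy):
          ys, xs = np.nonzero(disp > 0)
          d = disp[ys, xs].astype(np.float64)
          Z = f_px * baseline_m / d              # depth from stereo geometry
          X = (xs - cx) * Z / f_px
          Y = (ys - cy) * Z / f_px
          return np.stack([X, Y, Z], axis=1)     # 3D point map

      def depth_map_2d(points, cell=0.05, extent=5.0):
          # project points onto the ground plane as an occupancy grid
          n = int(2 * extent / cell)
          grid = np.zeros((n, n), dtype=bool)
          ix = ((points[:, 0] + extent) / cell).astype(int)
          iz = (points[:, 2] / cell).astype(int)
          ok = (ix >= 0) & (ix < n) & (iz >= 0) & (iz < n)
          grid[iz[ok], ix[ok]] = True            # occupied cells block the travel path
          return grid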

  20. Porcelain three-dimensional shape reconstruction and its color reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Xiaoyang; Wu, Haibin; Yang, Xue; Yu, Shuang; Wang, Beiyi; Chen, Deyun

    2013-01-01

    In this paper, structured-light three-dimensional measurement technology was used to reconstruct porcelain shape, and furthermore the porcelain color was reconstructed, realizing accurate reconstruction of both the shape and the color of porcelain. A drawing of our shape measurement installation is given. Because the porcelain surface has complex coloring and is highly reflective, binary Gray code encoding is used to reduce the influence of the surface. A color camera was employed to obtain the color of the porcelain surface. Then, the combined reconstruction of shape and color was realized in the Java3D runtime environment. In the reconstruction process, a point-by-point spatial coloration method is proposed and implemented. Our coloration method ensures pixel-level correspondence between the shape and color data. The experimental results for porcelain surface shape and color reconstruction, obtained with the proposed method and our installation, show that the depth range is 860-980 mm, the relative error of the shape measurement is less than 0.1%, and the reconstructed color of the porcelain surface is realistic, refined, and subtle, with the same visual effect as the measured surface.

  1. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed objects to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstruction of the scenes, including light, color, texture, shapes, and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented with the Matlab tool. The technique presented here also lets us simulate short simple videos by reconstructing a sequence of multiple scenes of the video separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used; low-bandwidth perception-based features include edges and motion.
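
    For the triangulation step mentioned in this record, a minimal two-view sketch using OpenCV (rather than the authors' Matlab code) might look as follows; the intrinsic matrix, camera poses, and matched pixel coordinates are illustrative placeholders.

      import cv2
      import numpy as np

      # Intrinsics from geometric camera calibration (placeholder values).
      K = np.array([[700.0, 0.0, 320.0],
                    [0.0, 700.0, 240.0],
                    [0.0, 0.0, 1.0]])

      # Extrinsics: camera 1 at the origin, camera 2 shifted 0.1 m along x.
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

      # Matched pixel coordinates, one column per point (hypothetical).
      pts1 = np.array([[310.0, 420.0], [240.0, 260.0]])
      pts2 = np.array([[295.0, 404.0], [241.0, 259.0]])

      # Linear triangulation; the result is 4 x N homogeneous coordinates.
      X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
      X = (X_h[:3] / X_h[3]).T  # Euclidean 3D points, one row per match
      print(X)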

  2. DEVICE CONTROLLER, CAMERA CONTROL

    1998-07-20

    This is a C++ application that is the server for the camera control system. Devserv drives serial devices, such as cameras and videoswitchers used in a videoconference, upon request from a client such as the ccint program. Devserv listens on UDP ports for clients to make network connections. After a client connects and sends a request to control a device (such as to pan, tilt, or zoom a camera, or do picture-in-picture with a videoswitcher), devserv formats the request into an RS232 message appropriate for the device and sends this message over the serial port to which the device is connected. Devserv then reads the reply from the device from the serial port, and then formats and sends via multicast a status message. In addition, devserv periodically multicasts status or description messages so that all clients connected to the multicast channel know what devices are supported and their ranges of motion and the current position. The software design employs a class hierarchy such that an abstract base class for devices can be subclassed into classes for various device categories, which are further subclassed into classes for specific devices (e.g. sonyevid30, cononvco4, panasonicwjmx50, etc.). The devices currently supported are the Sony EVI-D30, Canon VCC1, Canon VCC3, and Canon VCC4 cameras and the Panasonic WJ-MX50 videoswitcher. However, developers can extend the class hierarchy to support other devices.

  3. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple concept that each pixel is a charge bucket whose charge comes from the Einstein photoelectric conversion effect. Applying the manufacturing design principle, we only allow altering each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The savings in data storage are immense, and the order of magnitude of the saving is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as a dual photon detector (PD) analog circuit for change detection that predicts whether to skip or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the level of each bucket pixel: the charge transport bias voltage steers charge toward neighboring buckets or, if not, to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing by FFT, thresholding of significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery performed selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, in selecting new frames, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]M,N: M(t) = K(t) log N(t).

  4. DEVICE CONTROLLER, CAMERA CONTROL

    SciTech Connect

    Perry, Marcia

    1998-07-20

    This is a C++ application that is the server for the camera control system. Devserv drives serial devices, such as cameras and videoswitchers used in a videoconference, upon request from a client such as the ccint program. Devserv listens on UDP ports for clients to make network connections. After a client connects and sends a request to control a device (such as to pan, tilt, or zoom a camera, or do picture-in-picture with a videoswitcher), devserv formats the request into an RS232 message appropriate for the device and sends this message over the serial port to which the device is connected. Devserv then reads the reply from the device from the serial port, and then formats and sends via multicast a status message. In addition, devserv periodically multicasts status or description messages so that all clients connected to the multicast channel know what devices are supported and their ranges of motion and the current position. The software design employs a class hierarchy such that an abstract base class for devices can be subclassed into classes for various device categories, which are further subclassed into classes for specific devices (e.g. sonyevid30, cononvco4, panasonicwjmx50, etc.). The devices currently supported are the Sony EVI-D30, Canon VCC1, Canon VCC3, and Canon VCC4 cameras and the Panasonic WJ-MX50 videoswitcher. However, developers can extend the class hierarchy to support other devices.

  5. Theta rotation and serial registration of light microscopical images using a novel camera rotating device.

    PubMed

    Duerstock, Bradley S; Cirillo, John; Rajwa, Bartek

    2010-06-01

    An electromechanical video camera coupler was developed to rotate a light microscope field of view (FOV) in real time without the need to physically rotate the stage or specimen. The device, referred to as the Camera Thetarotator, rotated microscopical views 240 degrees to assist microscopists to orient specimens within the FOV prior to image capture. The Camera Thetarotator eliminated the effort and artifacts created when rotating photomicrographs using conventional graphics software. The Camera Thetarotator could also be used to semimanually register a dataset of histological sections for three-dimensional (3D) reconstruction by superimposing the transparent, real-time FOV to the previously captured section in the series. When compared to Fourier-based software registration, alignment of serial sections using the Camera Thetarotator was more exact, resulting in more accurate 3D reconstructions with no computer-generated null space. When software-based registration was performed after prealigning sections with the Camera Thetarotator, registration was further enhanced. The Camera Thetarotator expanded microscopical viewing and digital photomicrography and provided a novel, accurate registration method for 3D reconstruction. The Camera Thetarotator would also be useful for performing automated microscopical functions necessary for telemicroscopy, high-throughput image acquisition and analysis, and other light microscopy applications.

  6. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  7. The absolute path command

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  8. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
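
    The core behavior described above can be approximated in a few lines of Python; this is a sketch of similar functionality (a $PATH search followed by full symlink resolution), not the ap command's actual source.

      import os
      import sys

      def absolute_path(name: str) -> str:
          """Resolve a name to its final absolute path (sketch of ap)."""
          # If the argument is a bare command name, search $PATH as "which" does.
          if os.sep not in name:
              for d in os.environ.get("PATH", "").split(os.pathsep):
                  candidate = os.path.join(d, name)
                  if os.access(candidate, os.X_OK):
                      name = candidate
                      break
          # realpath traverses every symlink in the chain to the final target.
          return os.path.realpath(name)

      if __name__ == "__main__":
          print(absolute_path(sys.argv[1]))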

  9. Stochastic reconstruction of sandstones

    PubMed

    Manwart; Torquato; Hilfer

    2000-07-01

    A simulated annealing algorithm is employed to generate stochastic models of a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function, respectively. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. Also, the mean survival time of a random walker in the pore space is reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences in geometrical connectivity between the reconstructed and the experimental samples.
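
    A minimal sketch of the pixel-swap simulated annealing behind such reconstructions is shown below, matching only a two-point probability function (the paper additionally constrains the lineal-path and pore-size functions); all parameter values are illustrative.

      import numpy as np

      def s2(img, rmax):
          """Two-point probability S2(r), sampled along x as a crude proxy."""
          return np.array([(img * np.roll(img, r, axis=1)).mean()
                           for r in range(rmax)])

      def reconstruct(target, shape, porosity, steps=100000, T=1e-5, cool=0.99995):
          """Pixel-swap annealing toward a target S2 (minimal sketch)."""
          rng = np.random.default_rng(0)
          img = (rng.random(shape) < porosity).astype(np.uint8)
          E = ((s2(img, len(target)) - target) ** 2).sum()
          for _ in range(steps):
              p = tuple(rng.integers(0, s) for s in shape)
              q = tuple(rng.integers(0, s) for s in shape)
              if img[p] == img[q]:      # need one pore and one solid pixel
                  continue
              img[p], img[q] = img[q], img[p]
              E_new = ((s2(img, len(target)) - target) ** 2).sum()
              # Metropolis acceptance; the rapid cooling the authors found
              # necessary is controlled by the factor `cool`.
              if E_new < E or rng.random() < np.exp((E - E_new) / T):
                  E = E_new
              else:
                  img[p], img[q] = img[q], img[p]  # reject: undo the swap
              T *= cool
          return img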

  10. [3D reconstruction of multiple views based on trifocal tensor].

    PubMed

    Chen, Chunxiao; Zhang, Juan

    2012-08-01

    Reconstruction of the 3D structure of an object from 2D views plays an important role in plastic surgery and orthopedics. This method does not require the camera to perform specific movements, such as independent translation or rotation. It only requires a hand-held camera to take a few pictures arbitrarily, and it applies the geometric relationships among the three views to obtain a projective reconstruction of the object. The cheirality constraint is then introduced into the stratified reconstruction to determine the search area of the plane at infinity, after which the camera intrinsic parameters are calibrated and the metric reconstruction is completed. The reconstructed model can also be observed from different angles through mouse and keyboard interaction. Experiments with pictures of both objects and faces show that the proposed method is very robust and accurate. PMID:23016433

  11. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  12. Blob-enhanced reconstruction technique

    NASA Astrophysics Data System (ADS)

    Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2016-09-01

    A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method; the distribution is then rebuilt as a sum of Gaussian blobs, based on the location, intensity, and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes have been tested, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations). The results confirm the enhancement in velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor further improves the convergence of the reconstruction algorithm, the improvement being more considerable when the blobs are substituted more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, simultaneously achieving remarkable improvements in the flow field measurements and benefiting from the reduction in computational time due to the MR approach. Finally, BERT is also tested on experimental data, obtaining an increase of the

  13. Evaluation of guidewire path reproducibility.

    PubMed

    Schafer, Sebastian; Hoffmann, Kenneth R; Noël, Peter B; Ionita, Ciprian N; Dmochowski, Jacek

    2008-05-01

    The number of minimally invasive vascular interventions is increasing. In these interventions, a variety of devices are directed to and placed at the site of intervention. The device used in almost all of these interventions is the guidewire, which acts as a monorail for all devices delivered to the intervention site. However, even with the guidewire in place, clinicians still experience difficulties during the interventions. As a first step toward understanding these difficulties and facilitating guidewire and device guidance, we have investigated the reproducibility of the final path of the guidewire in vessel phantom models with respect to different factors: user, materials, and geometry. Three vessel phantoms (vessel diameters approximately 4 mm) with tortuosity similar to the internal carotid artery were constructed from silicone tubing and encased in Sylgard elastomer. Several trained users repeatedly passed two guidewires of different flexibility through the phantoms under pulsatile flow conditions. After the guidewire had been placed, rotational c-arm image sequences were acquired (9 in. II mode, 0.185 mm pixel size), and the phantom and guidewire were reconstructed (512³ voxels, 0.288 mm voxel size). The reconstructed volumes were aligned, and the centerlines of the guidewire and the phantom vessel were then determined using region-growing techniques. Guidewire paths appear similar across users but not across materials. The average root-mean-square difference of the repeated placements was 0.17 ± 0.02 mm (plastic-coated guidewire), 0.73 ± 0.55 mm (steel guidewire), and 1.15 ± 0.65 mm (steel versus plastic-coated). For a given guidewire, these results indicate that the guidewire path is relatively reproducible in shape and position.

  14. HONEY -- The Honeywell Camera

    NASA Astrophysics Data System (ADS)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  15. WFOV star tracker camera

    SciTech Connect

    Lewis, I.T. ); Ledebuhr, A.G.; Axelrod, T.S.; Kordas, J.F.; Hills, R.F. )

    1991-04-01

    A prototype wide-field-of-view (WFOV) star tracker camera has been fabricated and tested for use in spacecraft navigation. The most unique feature of this device is its 28° × 44° FOV, which views a large enough sector of the sky to ensure the existence of at least 5 stars of m_v = 4.5 or brighter in all viewing directions. The WFOV requirement and the need to maximize both collection aperture (F/1.28) and spectral input band (0.4 to 1.1 μm) to meet the light gathering needs for the dimmest star have dictated the use of a novel concentric optical design, which employs a fiber optic faceplate field flattener. The main advantage of the WFOV configuration is the smaller star map required for position processing, which results in less processing power and faster matching. Additionally, a size and mass benefit is seen with a larger FOV/smaller effective focal length (efl) sensor. Prototype hardware versions have included both image intensified and un-intensified CCD cameras. Integration times of ≤ 50 msec have been demonstrated with both the intensified and un-intensified versions. 3 refs., 16 figs.

  16. NFC - Narrow Field Camera

    NASA Astrophysics Data System (ADS)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We have been introducing a low-cost CCTV video system for faint meteor monitoring, and here we describe the first results from 5 months of two-station operations. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on the trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present, 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated in the frame of CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of theoretical assumptions about the NFC system's capabilities (stellar and meteor magnitude limits, meteor apparent brightness distribution, and accuracy of single-station measurements) and the first low-mass meteoroid trajectory calculations. Our experimental data clearly show the capability of the proposed system to register low-mass meteors and to support calculations that lead to a significant refinement of the orbital elements of low-mass meteoroids.

  17. PAU camera: detectors characterization

    NASA Astrophysics Data System (ADS)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels each, with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K.K. This mosaic covers a field of view (FoV) of 60 arcmin, 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This job is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is being performed by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain; the linearity vs. light stimulus; the full-well capacity and the cosmetic defects; and the read-out noise, the dark current, the stability vs. temperature, and the light remanence.
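
    For reference, the photon transfer curve mentioned in this record yields the gain from a pair of flat fields; the sketch below shows the standard flat-pair calculation under a shot-noise-limited assumption, not PAUCam's actual test code.

      import numpy as np

      def ptc_gain(flat1, flat2, bias1, bias2):
          """Estimate CCD gain in e-/DN from two flats and two biases.

          For a shot-noise-limited signal S [DN] with temporal variance
          V [DN^2], the gain is g = S / V. Differencing the two flats
          cancels fixed-pattern noise; the difference frame carries twice
          the variance of a single frame.
          """
          f1 = flat1.astype(float) - bias1.astype(float)
          f2 = flat2.astype(float) - bias2.astype(float)
          signal = 0.5 * (f1.mean() + f2.mean())
          var = np.var(f1 - f2) / 2.0
          return signal / var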

  18. MEMS digital camera

    NASA Astrophysics Data System (ADS)

    Gutierrez, R. C.; Tang, T. K.; Calvet, R.; Fossum, E. R.

    2007-02-01

    MEMS technology uses photolithography and etching of silicon wafers to enable mechanical structures with less than 1 μm tolerance, important for the miniaturization of imaging systems. In this paper, we present the first silicon MEMS digital auto-focus camera for use in cell phones with a focus range of 10 cm to infinity. At the heart of the new silicon MEMS digital camera, a simple and low-cost electromagnetic actuator impels a silicon MEMS motion control stage on which a lens is mounted. The silicon stage ensures precise alignment of the lens with respect to the imager, and enables precision motion of the lens over a range of 300 μm with < 5 μm hysteresis and < 2 μm repeatability. Settling time is < 15 ms for a 200 μm step and < 5 ms for a 20 μm step, enabling AF within 0.36 sec at 30 fps. The precise motion allows COTS optics to maintain MTF > 0.8 at 20 cy/mm up to 80% field over the full range of motion. Accelerated lifetime testing has shown that the alignment and precision of motion are maintained after 8,000 g shocks, thermal cycling from -40 C to 85 C, and operation over 20 million cycles.

  19. Stereoscopic camera design

    NASA Astrophysics Data System (ADS)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  20. Investigation of a consumer-grade digital stereo camera

    NASA Astrophysics Data System (ADS)

    Menna, Fabio; Nocerino, Erica; Remondino, Fabio; Shortis, Mark

    2013-04-01

    The paper presents a metric investigation of the Fuji FinePix Real 3D W1 stereo photo-camera. The stereo-camera uses a synchronized twin lens-CCD system to acquire two images simultaneously, using two Fujinon 3x optical zoom lenses arranged in an aluminum die-cast frame integrated into a very compact body. The nominal baseline is 77 mm and the resolution of each CCD is 10 megapixels. Given the short baseline and the presence of two optical paths, the investigation aims to evaluate the accuracy of the 3D data that can be produced and the stability of the camera. From a photogrammetric point of view, the interest in this camera is its capability to acquire synchronized image pairs that contain important 3D metric information for many close-range applications (human body part measurement, rapid prototyping, surveying of archeological artifacts, etc.). Calibration values for the left and right cameras at different focal lengths, derived with an in-house software application, are reported together with accuracy analyses. The object coordinates obtained from the bundle adjustment computation for each focal length were compared to reference coordinates of a test range by means of a similarity transformation. Additionally, the article reports on the investigation of the asymmetrical relative orientation between the left and right cameras.

  1. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  2. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  3. Fast frame scanning camera system for light-sheet microscopy.

    PubMed

    Wu, Di; Zhou, Xing; Yao, Baoli; Li, Runze; Yang, Yanlong; Peng, Tong; Lei, Ming; Dan, Dan; Ye, Tong

    2015-10-10

    In the interest of improving the temporal resolution for light-sheet microscopy, we designed a fast frame scanning camera system that incorporated a galvanometer scanning mirror into the imaging path of a home-built light-sheet microscope. This system transformed a temporal image sequence to a spatial one so that multiple images could be acquired during one exposure period. The improvement factor of the frame rate was dependent on the number of sub-images that could be tiled on the sensor without overlapping each other and was therefore a trade-off with the image size. As a demonstration, we achieved 960 frames/s (fps) on a CCD camera that was originally capable of recording images at only 30 fps (full frame). This allowed us to observe millisecond or sub-millisecond events with ordinary CCD cameras.

  4. Fast frame scanning camera system for light-sheet microscopy.

    PubMed

    Wu, Di; Zhou, Xing; Yao, Baoli; Li, Runze; Yang, Yanlong; Peng, Tong; Lei, Ming; Dan, Dan; Ye, Tong

    2015-10-10

    In the interest of improving the temporal resolution for light-sheet microscopy, we designed a fast frame scanning camera system that incorporated a galvanometer scanning mirror into the imaging path of a home-built light-sheet microscope. This system transformed a temporal image sequence to a spatial one so that multiple images could be acquired during one exposure period. The improvement factor of the frame rate was dependent on the number of sub-images that could be tiled on the sensor without overlapping each other and was therefore a trade-off with the image size. As a demonstration, we achieved 960 frames/s (fps) on a CCD camera that was originally capable of recording images at only 30 fps (full frame). This allowed us to observe millisecond or sub-millisecond events with ordinary CCD cameras. PMID:26479797

  5. Toward the design of a positron volume imaging camera

    SciTech Connect

    Rogers, J.G.; Stazyk, M.; Harrop, R.; Dykstra, C.J.; Barney, J.S.; Atkins, M.S.; Kinahan, P.E. )

    1990-04-01

    Three different computing algorithms for performing positron emission image reconstruction have been compared using Monte Carlo phantom simulations. The work was motivated by the recent announcement of the commercial availability of a positron volume imaging camera which has improved axial (slice) resolution and retractable interslice septa. The simulations demonstrate the importance of developing a complete three-dimensional reconstruction algorithm to deal with the increased gamma detection solid angle and the increased scatter fraction that result when the interslice septa are removed from a ring tomograph.

  6. Oblique along path toward structures at rear of parcel. Original ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Oblique along path toward structures at rear of parcel. The original skinny mosaic path along the edge of the structures was altered (the delineation can be seen in the concrete); the path was widened with a newer mosaic to make access to the site safer. Structures (from right): edge of Round House (with "Spring Garden"), Pencil House, Shell House, School House; wood lattice is attached to the chain-link fence along the north (rear) property line. These structures were all damaged by the 1994 Northridge earthquake. Camera facing northeast. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA

  7. Camera Calibration for Uav Application Using Sensor of Mobile Camera

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Chikatsu, H.

    2015-05-01

    Recently, 3D measurement using small unmanned aerial vehicles (UAVs) has increased in Japan, because small UAVs are easily available at low cost and analysis software can easily create 3D models. However, small UAVs have a problem: they have very short flight times and a small payload. In particular, as the payload of a small UAV increases, its flight time decreases. Therefore, it is advantageous to use lightweight sensors in small UAVs. A mobile camera is lightweight and has many sensors, such as an accelerometer, a magnetic field sensor, and a gyroscope, and these sensors can be used simultaneously. Therefore, the authors think that the problems of small UAVs can be solved using a mobile camera. The authors performed camera calibration using a test target to evaluate the sensor values measured with a mobile camera, and confirmed the same accuracy as normal camera calibration.

  8. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes, and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and proceeding to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore, a radial mapping/model cannot be used in this case.

  9. Dynamic Human Body Modeling Using a Single RGB Camera

    PubMed Central

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-01-01

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones. PMID:26999159

  10. Dynamic Human Body Modeling Using a Single RGB Camera.

    PubMed

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-01-01

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones. PMID:26999159

  11. Dynamic Human Body Modeling Using a Single RGB Camera.

    PubMed

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  12. Observation of Marine Animals Using Underwater Acoustic Camera

    NASA Astrophysics Data System (ADS)

    Iida, Kohji; Takahashi, Rika; Tang, Yong; Mukai, Tohru; Sato, Masanori

    2006-05-01

    An underwater acoustic camera enclosed in a pressure-resistant case was constructed to observe underwater marine animals. This enabled the measurement of the size, shape, and behavior of living marine animals at detection ranges up to 240 cm. The transducer array of the acoustic camera was driven by 3.5 MHz ultrasonic signals, and B-mode acoustic images were obtained. Observations were conducted on captive animals in a water tank and on wild animals in the field. The captive animals, including fish, squid, and jellyfish, were observed, and the three-dimensional internal structure of the animals was reconstructed using multiple acoustic images. The most important contributors to acoustic scattering were the swimbladder and vertebrae of bladdered fish, and the liver and reproductive organs of invertebrates. In the field experiment, the shape, size, and swimming behavior of wild animals were observed. The possibilities and limitations of the underwater acoustic camera for fishery applications are discussed.

  13. Single-camera, three-dimensional particle tracking velocimetry.

    PubMed

    Peterson, Kevin; Regaard, Boris; Heinemann, Stefan; Sick, Volker

    2012-04-01

    This paper introduces single-camera, three-dimensional particle tracking velocimetry (SC3D-PTV), an image-based, single-camera technique for measuring 3-component, volumetric velocity fields in environments with limited optical access, in particular, optically accessible internal combustion engines. The optical components used for SC3D-PTV are similar to those used for two-camera stereoscopic-µPIV, but are adapted to project two simultaneous images onto a single image sensor. A novel PTV algorithm relying on the similarity of the particle images corresponding to a single, physical particle produces 3-component, volumetric velocity fields, rather than the 3-component, planar results obtained with stereoscopic PIV, and without the reconstruction of an instantaneous 3D particle field. The hardware and software used for SC3D-PTV are described, and experimental results are presented. PMID:22513613

  14. Aircraft path planning for optimal imaging using dynamic cost functions

    NASA Astrophysics Data System (ADS)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

    Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller and more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path for acquiring imagery for structure from motion (SfM) reconstructions and performing radiation surveys. This method allows SfM reconstructions to be performed accurately and with minimal flight time so that the reconstructions can be executed efficiently. An assumption is made that 3D point cloud data are available prior to the flight. A discrete set of scan lines is proposed for the given area and scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach for image-based terrain mapping, which is able to efficiently perform a 3D reconstruction of a large area without the use of GPS data.
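
    A greatly simplified sketch of turning pre-scored scan lines into a time-efficient path is shown below; it uses a greedy nearest-endpoint heuristic with a plain distance/speed transit cost, whereas the paper folds full aircraft dynamics into dynamic cost functions.

      import numpy as np

      def order_scan_lines(starts, ends, speed=5.0):
          """Greedy ordering of scan lines (sketch, not the paper's optimizer).

          starts, ends: (N, 3) arrays of scan-line endpoints in metres;
          speed is an assumed constant cruise speed in m/s.
          """
          n = len(starts)
          remaining = set(range(n))
          pos = starts[0]
          order, time = [], 0.0
          while remaining:
              # Pick the scan line whose nearer endpoint is closest.
              best = min(remaining, key=lambda i: min(
                  np.linalg.norm(starts[i] - pos),
                  np.linalg.norm(ends[i] - pos)))
              remaining.remove(best)
              # Enter at the nearer endpoint, exit at the other one.
              if np.linalg.norm(ends[best] - pos) < np.linalg.norm(starts[best] - pos):
                  entry, leave = ends[best], starts[best]
              else:
                  entry, leave = starts[best], ends[best]
              time += (np.linalg.norm(entry - pos)
                       + np.linalg.norm(leave - entry)) / speed
              pos = leave
              order.append(best)
          return order, time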

  15. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  16. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  17. Airborne ballistic camera tracking systems

    NASA Technical Reports Server (NTRS)

    Redish, W. L.

    1976-01-01

    An operational airborne ballistic camera tracking system was tested for operational and data reduction feasibility. The acquisition and data processing requirements of the system are discussed. Suggestions for future improvements are also noted. A description of the data reduction mathematics is outlined. Results from a successful reentry test mission are tabulated. The test mission indicated that airborne ballistic camera tracking systems are feasible.

  18. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.

  19. Multi-PSPMT scintillation camera

    SciTech Connect

    Pani, R.; Pellegrini, R.; Trotta, G.; Scopinaro, F.; Soluri, A.; Vincentis, G. de; Scafe, R.; Pergola, A.

    1999-06-01

    Gamma ray imaging is usually accomplished by the use of a relatively large scintillating crystal coupled to either a number of photomultipliers (PMTs) (Anger Camera) or to a single large Position Sensitive PMT (PSPMT). Recently the development of new diagnostic techniques, such as scintimammography and radio-guided surgery, have highlighted a number of significant limitations of the Anger camera in such imaging procedures. In this paper a dedicated gamma camera is proposed for clinical applications with the aim of improving image quality by utilizing detectors with an appropriate size and shape for the part of the body under examination. This novel scintillation camera is based upon an array of PSPMTs (Hamamatsu R5900-C8). The basic concept of this camera is identical to the Anger Camera with the exception of the substitution of PSPMTs for the PMTs. In this configuration it is possible to use the high resolution of the PSPMTs and still correctly position events lying between PSPMTs. In this work the test configuration is a 2 by 2 array of PSPMTs. Some advantages of this camera are: spatial resolution less than 2 mm FWHM, good linearity, thickness less than 3 cm, light weight, lower cost than equivalent area PSPMT, large detection area when coupled to scintillating arrays, small dead boundary zone (< 3 mm) and flexibility in the shape of the camera.
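
    The Anger-style positioning that carries over to the PSPMT array reduces to a signal-weighted centroid over tube (or anode) outputs; a minimal sketch, with a hypothetical tube layout, is:

      import numpy as np

      def anger_position(signals, pmt_xy):
          """Anger-logic centroid estimate of a scintillation event (sketch).

          signals: per-tube charge; pmt_xy: (N, 2) tube center coordinates.
          """
          w = np.asarray(signals, dtype=float)
          return (w[:, None] * np.asarray(pmt_xy, dtype=float)).sum(axis=0) / w.sum()

      # 2 x 2 array of tubes on a 25 mm pitch (assumed geometry).
      centers = [(0.0, 0.0), (25.0, 0.0), (0.0, 25.0), (25.0, 25.0)]
      print(anger_position([120.0, 300.0, 80.0, 200.0], centers))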

  20. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair, each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.

  1. The "All Sky Camera Network"

    ERIC Educational Resources Information Center

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites. Meteorites have great…

  2. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  3. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  4. Light field reconstruction robust to signal dependent noise

    NASA Astrophysics Data System (ADS)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We built a prototype by hacking an off-the-shelf camera for data capture to prove the concept. The effectiveness of this method is validated with experiments on the real captured data.

  5. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images. PMID:26560916
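
    The refocus-after-capture step the tutorial describes is classically done by shift-and-add over sub-aperture images; the sketch below assumes a 4D light field array and uses integer-pixel shifts for brevity (real implementations interpolate sub-pixel shifts).

      import numpy as np

      def refocus(lf, alpha):
          """Shift-and-add synthetic refocusing of a light field (sketch).

          lf has shape (U, V, X, Y): sub-aperture images indexed by (u, v).
          alpha sets the synthetic focal plane; alpha = 1 reproduces the
          original focus.
          """
          U, V, X, Y = lf.shape
          uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
          shift = 1.0 - 1.0 / alpha
          out = np.zeros((X, Y))
          for u in range(U):
              for v in range(V):
                  dx = int(round(shift * (u - uc)))
                  dy = int(round(shift * (v - vc)))
                  out += np.roll(lf[u, v], (dx, dy), axis=(0, 1))
          return out / (U * V)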

  6. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.

  7. Time-of-Flight Microwave Camera

    NASA Astrophysics Data System (ADS)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
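
    As a quick consistency check on the numbers quoted above (a textbook FMCW relation, not the authors' derivation): a chirp of bandwidth B gives a time resolution of roughly 1/B and a two-way range resolution of c/(2B). The values below use the X-band sweep from the abstract; exact system figures depend on windowing and hardware.

      # speed of light and the 8-12 GHz sweep bandwidth from the abstract
      c = 3.0e8                 # m/s
      B = 12e9 - 8e9            # Hz
      dt = 1.0 / B              # ~250 ps, close to the quoted 200 ps time resolution
      dR = c / (2 * B)          # ~3.75 cm two-way range resolution

      def beat_frequency(R, S):
          """Beat frequency (Hz) for a target at range R (m) with chirp slope S (Hz/s)."""
          return 2.0 * R * S / c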

  8. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598

  9. Time-of-Flight Microwave Camera

    PubMed Central

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598

  10. Camera array based light field microscopy

    PubMed Central

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-01-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-apertures images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490

  11. CCD Camera Observations

    NASA Astrophysics Data System (ADS)

    Buchheim, Bob; Argyle, R. W.

    One night late in 1918, astronomer William Milburn, observing the region of Cassiopeia from Reverend T.H.E.C. Espin's observatory in Tow Law (England), discovered a hitherto unrecorded double star (Wright 1993). He reported it to Rev. Espin, who measured the pair using his 24-in. reflector: the fainter star was 6.0 arcsec from the primary, at position angle 162.4° (i.e. the fainter star was south-by-southeast from the primary) (Espin 1919). Some time later, it was recognized that the astrograph of the Vatican Observatory had taken an image of the same star-field a dozen years earlier, in late 1906. At that earlier epoch, the fainter star had been separated from the brighter one by only 4.8 arcsec, at position angle 186.2° (i.e. almost due south). Were these stars a binary pair, or were they just two unrelated stars sailing past each other? Some additional measurements might have begun to answer this question. If the secondary star was following a curved path, that would be a clue of orbital motion; if it followed a straight-line path, that would be a clue that these are just two stars passing in the night. Unfortunately, nobody took the trouble to re-examine this pair for almost a century, until the 2MASS astrometric/photometric survey recorded it in late 1998. After almost another decade, this amateur astronomer took some CCD images of the field in 2007, and added another data point on the star's trajectory, as shown in Fig. 15.1.

  12. Lights, Camera, Courtroom? Should Trials Be Televised?

    ERIC Educational Resources Information Center

    Kirtley, Jane E.; Brothers, Thomas W.; Veal, Harlan K.

    1999-01-01

    Presents three differing perspectives from American Bar Association members on whether television cameras should be allowed in the courtroom. Contends that cameras should be allowed with differing degrees of certainty: cameras truly open the courts to the public; cameras must be strategically placed; and cameras should be used only with the…

  13. Proportional counter radiation camera

    DOEpatents

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon-emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  14. Camera sensitivity study

    NASA Astrophysics Data System (ADS)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance, and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, which include controlling costs, the inability to control illumination, developing and training a reliable classification system, and loss of performance due to production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations. The first procedure is based on classification accuracy, while the second evaluates feature differences.

  15. The universal path integral

    NASA Astrophysics Data System (ADS)

    Lloyd, Seth; Dreyer, Olaf

    2016-02-01

    Path integrals calculate probabilities by summing over classical configurations of variables such as fields, assigning each configuration a phase equal to the action of that configuration. This paper defines a universal path integral, which sums over all computable structures. This path integral contains as sub-integrals all possible computable path integrals, including those of field theory, the standard model of elementary particles, discrete models of quantum gravity, string theory, etc. The universal path integral possesses a well-defined measure that guarantees its finiteness. The probabilities for events corresponding to sub-integrals can be calculated using the method of decoherent histories. The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures.
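
    In schematic form (a standard textbook presentation, not the paper's precise construction), such a path integral sums a phase over a configuration space, written here with an explicit measure since the abstract stresses that the measure guarantees finiteness:

      Z \;=\; \sum_{x \,\in\, \mathcal{C}} \mu(x)\, e^{\, i S[x]/\hbar}

    where C ranges over the configurations (in the universal case, all computable structures), S[x] is the action assigned to configuration x, and probabilities for events are obtained via decoherent histories rather than from Z directly.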

  16. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; environmental pathways and dose estimates.

  17. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  18. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  19. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  20. Characterizing the Evolutionary Path(s) to Early Homo

    PubMed Central

    Schroeder, Lauren; Roseman, Charles C.; Cheverud, James M.; Ackermann, Rebecca R.

    2014-01-01

    Numerous studies suggest that the transition from Australopithecus to Homo was characterized by evolutionary innovation, resulting in the emergence and coexistence of a diversity of forms. However, the evolutionary processes necessary to drive such a transition have not been examined. Here, we apply statistical tests developed from quantitative evolutionary theory to assess whether morphological differences among late australopith and early Homo species in Africa have been shaped by natural selection. Where selection is demonstrated, we identify aspects of morphology that were most likely under selective pressure, and determine the nature (type, rate) of that selection. Results demonstrate that selection must be invoked to explain an Au. africanus—Au. sediba—Homo transition, while transitions from late australopiths to various early Homo species that exclude Au. sediba can be achieved through drift alone. Rate tests indicate that selection is largely directional, acting to rapidly differentiate these taxa. Reconstructions of patterns of directional selection needed to drive the Au. africanus—Au. sediba—Homo transition suggest that selection would have affected all regions of the skull. These results may indicate that an evolutionary path to Homo without Au. sediba is the simpler path and/or provide evidence that this pathway involved more reliance on cultural adaptations to cope with environmental change. PMID:25470780

  1. Color gamma camera system for radiation monitoring

    NASA Astrophysics Data System (ADS)

    Mu, Zhiping; Deng, Jingkang; Wang, Yanfeng

    2000-11-01

    Radiation monitoring systems are desired in many places where radioactive materials are utilized. In this paper, a color gamma camera system developed at Tsinghua University (P.R. China) is reported. The system consists of a compact X-ray/gamma-ray detector, a single-hole collimator, the scanning mechanism, and a computer system. The MLEM method is implemented for image reconstruction, which enables one to generate high-resolution images with a relatively large aperture. With the associated software, several scanning modes, which work with different speeds and resolutions, are provided and can be selected during operation. In addition, the system can detect radioactive sources emitting rays of different energies and display them as color images. Experiments were performed using Am-241 (59.5 keV) and Na-22 (511 keV) to test the performance of the system. The results are presented, showing that the resolution of this system can be as high as 1.5 degrees. Furthermore, simulations using Matlab were performed to examine the capability of imaging point sources with a small number of counts and imaging distributed sources. Promising results were obtained and are reported. Discussions of camera design and further improvements are given at the end.
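
    The abstract names MLEM but not its implementation; below is the textbook MLEM update for reference, assuming the system matrix A for the collimator and scanning geometry is available. This is the standard form, not necessarily the authors' exact code.

      import numpy as np

      def mlem(A, y, n_iter=50):
          """A: (n_detectors, n_pixels) system matrix; y: measured counts."""
          x = np.ones(A.shape[1])            # nonnegative initial estimate
          sens = A.sum(axis=0)               # sensitivity image, A^T 1
          for _ in range(n_iter):
              proj = A @ x                   # forward projection
              ratio = y / np.maximum(proj, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
          return x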

  2. The infrared camera onboard JEM-EUSO

    NASA Astrophysics Data System (ADS)

    Adams, J. H.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J.-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J.-S.; Kim, S.-W.; Kim, S.-W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J.; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.

    2015-11-01

    The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) on board the International Space Station (ISS) is the first space-based mission worldwide in the field of Ultra-High-Energy Cosmic Rays (UHECR). For UHECR experiments, the atmosphere is not only the showering calorimeter for the primary cosmic rays; it is an essential part of the readout system as well. Moreover, the atmosphere must be calibrated and has to be considered as input for the analysis of the fluorescence signals. Therefore, the JEM-EUSO Space Observatory is implementing an Atmospheric Monitoring System (AMS) that will include an IR camera and a LIDAR. The AMS Infrared Camera is an infrared, wide-FoV imaging system designed to provide the cloud coverage along the JEM-EUSO track and the cloud-top height needed to properly achieve the UHECR reconstruction in cloudy conditions. In this paper, an updated preliminary design status, the results from the calibration tests of the first prototype, the simulation of the instrument, and preliminary cloud-top-height retrieval algorithms are presented.

  3. Detection of the optimal region of interest for camera oximetry.

    PubMed

    Karlen, Walter; Ansermino, J Mark; Dumont, Guy A; Scheffer, Cornie

    2013-01-01

    The estimation of heart rate and blood oxygen saturation with an imaging array on a mobile phone (camera oximetry) has great potential for mobile health applications, as no hardware other than a phone with a camera and LED flash is required. However, this approach is challenging, as the configuration of the camera can negatively influence the estimation quality. Further, the number of photons recorded by the photodetector depends largely on the optical path length, resulting in a non-homogeneous image. In this paper we describe a novel method to automatically detect the optimal region of interest (ROI) in the captured image for extracting a pulse waveform. We also present a study to select the optimal camera settings, notably the white balance. The experiments show that the incandescent white-balance mode is the preferable setting for camera oximetry applications on the tested mobile phone (Samsung Galaxy Ace). Also, the ROI algorithm successfully identifies the frame regions which provide waveforms with the largest amplitudes. PMID:24110175
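
    An illustrative rendering of the ROI selection idea (block size, band edges, and the amplitude criterion below are assumptions, not values from the paper): tile the video into blocks, band-pass each block's mean intensity around plausible heart rates, and keep the block with the strongest pulsatile component.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def best_roi(frames, fs, block=32, band=(0.7, 3.0)):
          """frames: (T, H, W) grayscale video; fs: frame rate in Hz."""
          b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          T, H, W = frames.shape
          best, best_amp = None, -1.0
          for i in range(0, H - block + 1, block):
              for j in range(0, W - block + 1, block):
                  trace = frames[:, i:i + block, j:j + block].mean(axis=(1, 2))
                  amp = filtfilt(b, a, trace).std()  # pulsatile amplitude proxy
                  if amp > best_amp:
                      best, best_amp = (i, j, block), amp
          return best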

  4. Dual-illumination planar Doppler velocimetry using a single camera

    NASA Astrophysics Data System (ADS)

    Charrett, Tom O. H.; Ford, Helen D.; Nobes, David S.; Tatam, Ralph P.

    2003-11-01

    A Planar Doppler Velocimetry (PDV) illumination system has been designed which is able to generate two beams, separated in frequency by about 600 MHz. This allows a common-path imaging head to be constructed, using a single imaging camera instead of the usual camera pair. Both illumination beams can be derived from a single laser, using acousto-optic modulators to effect the frequency shifts. One illumination frequency lies on an absorption line of gaseous iodine, and the other just off the absorption line. The beams sequentially illuminate a plane within a seeded flow and Doppler-shifted scattered light passes through an iodine vapor cell onto the camera. The beam that lies at an optical frequency away from the absorption line is not affected by passage through the cell, and provides a reference image. The other beam, the frequency of which coincides with an absorption line, encodes the velocity information as a variation in transmission dependent upon the Doppler shift. Images of the flow under both illumination frequencies are formed on the same camera, ensuring registration of the reference and signal images. This removes a major problem of a two-camera imaging head, and cost efficiency is also improved by the simplification of the system. The dual illumination technique has been shown to operate successfully with a spinning disc as a test object. The benefits of combining the dual illumination system with a three-component, fiber-linked imaging head developed at Cranfield will be discussed.
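
    An illustrative processing chain for the measurement principle described above (the actual calibration procedure is not given in the abstract): the pixel-wise ratio of signal to reference images yields the iodine-cell transmission at the Doppler-shifted frequency; inverting a measured transmission curve gives the frequency shift, and the illumination/observation geometry converts it to a velocity component. All function and parameter names here are assumptions.

      import numpy as np

      def velocity_map(signal_img, reference_img, freq_axis, transmission_curve,
                       sensitivity_vector, wavelength=532e-9):
          """sensitivity_vector: (observation - illumination) direction difference."""
          T = signal_img / np.maximum(reference_img, 1e-9)   # cell transmission
          # invert a monotonically increasing portion of the absorption edge
          doppler = np.interp(T, transmission_curve, freq_axis)
          # delta_nu = v . (o_hat - i_hat) / lambda, solved for the v component
          return doppler * wavelength / np.linalg.norm(sensitivity_vector)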

  5. Spectral image reconstruction through the PCA transform

    NASA Astrophysics Data System (ADS)

    Ma, Long; Qiu, Xuewei; Cong, Yangming

    2015-12-01

    Digital color image reproduction based on spectral information has become a field of much interest and practical importance in recent years. The representation of color in digital form with multi-band images is not very accurate, hence the use of spectral images is justified. Reconstructing high-dimensional spectral reflectance images from relatively low-dimensional camera signals is generally an ill-posed problem. The aim of this study is to use the principal component analysis (PCA) transform in spectral reflectance image reconstruction. The performance is evaluated by the mean, median, and standard deviation of color difference values. The mean, median, and standard deviation of the root-mean-square (RMS) errors between the reconstructed and the actual spectral images were also calculated. Simulation experiments conducted on a six-channel camera system and on spectral test images demonstrate the performance of the proposed method.
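
    A common PCA-based reconstruction sketch consistent with the abstract (training data and a six-channel camera response are assumed available; this is a generic pipeline, not the authors' exact method): learn a low-dimensional reflectance basis from training spectra, then map camera signals to basis coefficients with a least-squares regressor.

      import numpy as np

      def train(reflectances, camera_signals, k=6):
          """reflectances: (N, n_bands) training spectra; camera_signals: (N, 6)."""
          mean = reflectances.mean(axis=0)
          _, _, Vt = np.linalg.svd(reflectances - mean, full_matrices=False)
          basis = Vt[:k]                              # (k, n_bands) PCA basis
          coeffs = (reflectances - mean) @ basis.T    # (N, k) training targets
          M, *_ = np.linalg.lstsq(camera_signals, coeffs, rcond=None)
          return mean, basis, M                       # M maps signals -> coefficients

      def reconstruct(signals, mean, basis, M):
          return signals @ M @ basis + mean           # (N, n_bands) reconstructed spectra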

  6. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
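
    A hedged sketch of how the two constraints described above could be applied inside an iterative reconstruction loop; the CUP forward model and solver are omitted, and the mask threshold and tolerance are illustrative values, not the paper's.

      import numpy as np

      def apply_sic_constraints(datacube, mask, ccd_image, tol=0.1):
          """datacube: (T, H, W) current estimate; mask, ccd_image: (H, W)."""
          datacube = datacube * mask[None, :, :]      # spatial support from CCD mask
          proj = datacube.sum(axis=0)                 # temporal projection of the cube
          # intensity constraint: keep the projection close to the CCD image
          scale = np.where(proj > 1e-9, ccd_image / np.maximum(proj, 1e-9), 0.0)
          scale = np.clip(scale, 1.0 - tol, 1.0 + tol)
          return datacube * scale[None, :, :]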

  7. Collision-related Early Paleozoic evolution of a crustal fragment from the northern Gondwana margin (Slavonian Mountains, Tisia Mega-Unit, Croatia): Reconstruction of the P-T path, timing and paleotectonic implications

    NASA Astrophysics Data System (ADS)

    Balen, D.; Massonne, H.-J.; Petrinec, Z.

    2015-09-01

    An orthogneiss from the oldest metamorphic complex at Mt. Papuk (Tisia Mega-Unit, Croatia) enables the quantification of the P-T evolution of Early Paleozoic rocks of the Pannonian Basin basement, in contrast to neighboring peri-Gondwanan terranes, which are significantly overprinted by pre-Variscan, Variscan, and Alpine events. Two different groups of Ce-rich monazite within oval-shaped corona microstructures have been observed. Age dating of the corona cores yielded two populations with average ages of 528 ± 7 (2σ) Ma and 465 ± 7 Ma, respectively. Furthermore, a Y-rich group, found inside garnet cores, was dated at 616 ± 23 Ma. Th-rich monazite included in garnet rims yielded an age of 491 ± 6 Ma. The youngest monazite group, at 417 ± 20 Ma, is located inside mica. The orthogneiss precursor was a calc-alkaline to high-K calc-alkaline igneous peraluminous crustal rock (diorite) from an active continental-margin setting. The calculated P-T pseudosection in the MnNCKFMASHTO system, in combination with assemblage characteristics and mineral chemistry data, provides good constraints on the P-T evolution: for stage I, peak P-T conditions of 13 kbar and 670 °C were derived, followed by stage II, which was characterized by moderate cooling accompanied by uplift to mid-crustal levels (5.2 kbar and 610 °C). Subsequently, the system cooled to 480 °C at ~4.4 kbar (stage III). Formation of titanite rims on ilmenite suggests further cooling to 4 kbar and 400 °C (stage IV). The clockwise P-T path implies exhumation from a tectonically thickened crustal setting (ca. 45 km depth at a geothermal gradient of ~15 °C/km) to mid-crustal levels (ca. 18 km), followed by cooling at depths < 14 km. Crustal thickening was due to the collision of a continental plate (Gondwana) with a smaller plate, which was underthrust.

  8. Hi-G electronic gated camera for precision trajectory analysis

    NASA Astrophysics Data System (ADS)

    Snyder, Donald R.; Payne, Scott; Keller, Ed; Longo, Salvatore; Caudle, Dennis E.; Walker, Dennis C.; Sartor, Mark A.; Keeler, Joe E.; Kerr, David A.; Fail, R. Wallace; Gannon, Jim; Carrol, Ernie; Jamison, Todd A.

    1997-12-01

    trajectory, timing, and advanced sensor development. This system will be used for ground tracking data reduction in support of small air vehicle and munition testing. It will provide a means of integrating the imagery and telemetry data from the item with ground-based photographic support. The technique we have designed will exploit off-the-shelf software and analysis components. A differential GPS survey instrument will establish a photogrammetric calibration grid throughout the range and reference targets along the flight path. Images from the on-board sensor will be used to calibrate the ortho-rectification model in the analysis software. The projectile images will be transmitted and recorded on several tape recorders to ensure complete capture of each video field. The images will be combined with a non-linear video editor into a time-correlated record. Each correlated video field will be written to video disk. The files will be converted to DMA-compatible format and then analyzed for determination of the projectile altitude, attitude, and position in space. The resulting data file will be used to create a photomosaic of the ground the projectile flew over and the targets it saw. The data will then be transformed to a trajectory file and used to generate a graphic overlay that will merge digital photo data of the range with actual images captured. The plan is to superimpose the flight path of the projectile, the path of the weapon's aimpoint, and annotation of each internal sequence event. With tools used to produce state-of-the-art computer graphics, we now think it will be possible to reconstruct the test event from the viewpoint of the warhead, the target, and a 'God's-Eye' view looking over the shoulder of the projectile.

  10. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data. PMID:24051846
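
    A hedged sketch of the force-directed idea described above, reduced to two toy criteria (path smoothness and attraction toward user keyframes); the paper's solver balances a richer set of view-quality criteria, and all names and weights here are illustrative.

      import numpy as np

      def refine_path(points, keyframes, w_smooth=0.5, w_key=0.2, n_iter=200):
          """points: (N, 3) initial camera positions; keyframes: (N, 3) targets."""
          p = points.copy()
          for _ in range(n_iter):
              smooth = np.zeros_like(p)
              # Laplacian smoothing force pulls each point toward its neighbors' midpoint
              smooth[1:-1] = 0.5 * (p[:-2] + p[2:]) - p[1:-1]
              p += w_smooth * smooth + w_key * (keyframes - p)
          return p

    In a full system, each criterion contributes its own force term with a user-tunable weight, which is what lets the user see how each criterion impacts the generated path.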

  11. Convolutional Sparse Coding for Trajectory Reconstruction.

    PubMed

    Zhu, Yingying; Lucey, Simon

    2015-03-01

    Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera and trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object's "true" 3D point trajectories. In this paper we draw upon a well-known result centering around the Restricted Isometry Property (RIP) condition for sparse signal reconstruction. RIP allows us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora, to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an l1-inspired objective for trajectory reconstruction that is able to "adaptively" select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to the current state of the art in trajectory basis NRSfM.
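
    A generic ISTA solver for the kind of l1-regularized objective mentioned above, min_c 0.5*||y - D c||^2 + lam*||c||_1, where D stands for the camera matrix composed with an over-complete trajectory basis. This is standard ISTA, not the paper's adaptive sub-matrix selection scheme.

      import numpy as np

      def ista(D, y, lam=0.1, n_iter=500):
          L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
          c = np.zeros(D.shape[1])
          for _ in range(n_iter):
              grad = D.T @ (D @ c - y)
              z = c - grad / L
              # soft-thresholding step enforces sparsity of the coefficients
              c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return c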

  12. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been, and remain, a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities, equivalent to ISO 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposures and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity, and constant colors, important features in professional, technical, and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size and because of their higher speed.

  13. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speeds (8 m/s). Competitive accuracies can be obtained using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.
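
    A minimal sketch of the timing model that rolling shutter camera models of this kind build on: each image row is exposed at its own instant, so a platform moving with (assumed constant) velocity v observes row r from a different position. The per-row readout time is assumed known from the sensor specification; this is the underlying idea, not Pix4D's implementation.

      import numpy as np

      def row_camera_positions(p0, v, t0, row_readout, n_rows):
          """p0, v: (3,) position and velocity; returns (n_rows, 3) per-row camera positions."""
          t = t0 + row_readout * np.arange(n_rows)   # capture time of each row
          return p0[None, :] + t[:, None] * v[None, :]

    Inverting this model inside the bundle adjustment is also what makes the drone's speed and direction observable from the rolling shutter distortion alone.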

  14. Speckle Camera Imaging of the Planet Pluto

    NASA Astrophysics Data System (ADS)

    Howell, Steve B.; Horch, Elliott P.; Everett, Mark E.; Ciardi, David R.

    2012-10-01

    We have obtained optical wavelength (692 nm and 880 nm) speckle imaging of the planet Pluto and its largest moon Charon. Using our DSSI speckle camera attached to the Gemini North 8 m telescope, we collected high resolution imaging with an angular resolution of ~20 mas, a value at the Gemini-N telescope diffraction limit. We have produced for this binary system the first speckle reconstructed images, from which we can measure not only the orbital separation and position angle for Charon, but also the diameters of the two bodies. Our measurements of these parameters agree, within the uncertainties, with the current best values for Pluto and Charon. The Gemini-N speckle observations of Pluto are presented to illustrate the capabilities of our instrument and the robust production of high accuracy, high spatial resolution reconstructed images. We hope our results will suggest additional applications of high resolution speckle imaging for other objects within our solar system and beyond. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  15. An Inexpensive Digital Infrared Camera

    ERIC Educational Resources Information Center

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  16. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels. PMID:27534480
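
    The final per-camera step above (pose from a single image of the reconstructed encoded targets) can be illustrated with a standard PnP solve; OpenCV's solvePnP is used here as a stand-in for the paper's own pose estimation, with K and dist taken from the intrinsic calibration step.

      import cv2
      import numpy as np

      def camera_extrinsics(object_points, image_points, K, dist):
          """object_points: (N, 3) target coordinates; image_points: (N, 2) detections."""
          ok, rvec, tvec = cv2.solvePnP(
              object_points.astype(np.float64),
              image_points.astype(np.float64),
              K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
          R, _ = cv2.Rodrigues(rvec)   # rotation matrix of this camera in the target frame
          return ok, R, tvec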

  18. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development, and testing of a charge injection device (CID) camera using a 244x248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.

  19. The future of consumer cameras

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades, multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have diffused dramatically. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper, an overview of the current trends in the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  20. Astronomy and the camera obscura

    NASA Astrophysics Data System (ADS)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  1. Picosecond (picoframe) framing camera evaluations.

    PubMed

    Liu, Y; Sibbett, W; Walker, D R

    1992-03-01

    Detailed theoretical evaluations of picoframe-I- and II-type framing cameras are presented, and predicted performance characteristics are compared with experimental results. The methods of theoretical simulations are described, and a suite of computer programs was developed. The theoretical analyses indicate that the existence of fringe fields in the vicinity of the deflectors is the main factor that limits the dynamic spatial resolutions and frame times of these particular designs of framing camera, and possible refinements are outlined. PMID:20720702

  2. Clinical applications with the HIDAC positron camera

    NASA Astrophysics Data System (ADS)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

    , and more detailed data on a larger number of clinical and experimental PET scans will be necessary for definitive evaluation. Nevertheless, the HIDAC positron camera may be used for clinical PET imaging in well-defined patient cases, particularly in situations where both high spatial resolution is desired in the reconstructed image of the examined pathological condition and at the same time "static" PET imaging may be adequate, as is the case in thyroid-, ENT- and liver tomographic imaging using the HIDAC positron camera.

  3. Project Reconstruct.

    ERIC Educational Resources Information Center

    Helisek, Harriet; Pratt, Donald

    1994-01-01

    Presents a project in which students monitor their use of trash, input and analyze information via a database and computerized graphs, and "reconstruct" extinct or endangered animals from recyclable materials. The activity was done with second-grade students over a period of three to four weeks. (PR)

  4. Vaginal reconstruction

    SciTech Connect

    Lesavoy, M.A.

    1985-05-01

    Vaginal reconstruction can be an uncomplicated and straightforward procedure when attention to detail is maintained. The Abbe-McIndoe procedure of lining the neovaginal canal with split-thickness skin grafts has become standard. The use of the inflatable Heyer-Schulte vaginal stent provides comfort to the patient and ease to the surgeon in maintaining approximation of the skin graft. For large vaginal and perineal defects, myocutaneous flaps such as the gracilis island have been extremely useful for correction of radiation-damaged tissue of the perineum or for the reconstruction of large ablative defects. Minimal morbidity and scarring ensue because the donor site can be closed primarily. With all vaginal reconstruction, a compliant patient is a necessity. The patient must wear a vaginal obturator for a minimum of 3 to 6 months postoperatively and is encouraged to use intercourse as an excellent obturator. In general, vaginal reconstruction can be an extremely gratifying procedure for both the functional and emotional well-being of patients.

  5. Science, conservation, and camera traps

    USGS Publications Warehouse

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  6. An industrial light-field camera applied for 3D velocity measurements in a slot jet

    NASA Astrophysics Data System (ADS)

    Seredkin, A. V.; Shestakov, M. V.; Tokarev, M. P.

    2016-10-01

    Modern light-field cameras have found application in different areas such as photography, surveillance, and quality control in industry. A number of studies have reported relatively low spatial resolution of the 3D profiles of registered objects along the optical axis of the camera. This article describes a method for 3D velocity measurements in fluid flows using an industrial light-field camera and an alternative reconstruction algorithm based on a statistical approach. This method is more accurate than triangulation when applied to tracking small registered objects, such as tracer particles, in images. The technique was used to measure 3D velocity fields in a turbulent slot jet.

  7. Measurements of the performance of the light mixing chambers in the mixel camera.

    PubMed

    Fridman, Andrei; Høye, Gudrun

    2015-05-18

    Spectral data acquired with traditional push-broom hyperspectral cameras may be significantly distorted due to spatial misregistration such as keystone. The mixel camera is a new type of push-broom hyperspectral camera, where an image recorded with arbitrary (even large) keystone is reconstructed to a nearly keystone-free image. The key component of the mixel camera is an array of light mixing chambers in the slit plane, and the precision of the image reconstruction depends on the light mixing properties of these chambers. In this work we describe how these properties were measured in a mixel camera prototype. We also investigate the potential performance of the mixel camera in terms of spatial co-registration, based on the measured response of the mixing chambers to a point source. The results suggest that, with the current chambers, a perfectly characterized mixel camera should have residual spatial misregistration equivalent to 0.02-0.03 pixels of keystone. This compares favorably to high-resolution instruments where keystone is corrected in hardware or by resampling.
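
    A toy 1-D version of the reconstruction idea, under the assumption that each sensor pixel integrates known fractions of adjacent scene "mixels" determined by the keystone shift; stacking those fractions into a matrix lets the keystone-free values be recovered by least squares. The overlap model and function names are illustrative only, not the mixel camera's actual characterization.

      import numpy as np

      def mixing_matrix(n, keystone):
          """Fractional overlap of n sensor pixels with n mixels, shifted by 'keystone' pixels."""
          W = np.zeros((n, n))
          for i in range(n):
              s = i + keystone          # shifted footprint start of sensor pixel i
              j = int(np.floor(s))
              f = s - j                 # fractional part splits weight over two mixels
              if 0 <= j < n:
                  W[i, j] += 1.0 - f
              if 0 <= j + 1 < n:
                  W[i, j + 1] += f
          return W

      def reconstruct_row(recorded, keystone):
          W = mixing_matrix(recorded.size, keystone)
          x, *_ = np.linalg.lstsq(W, recorded, rcond=None)
          return x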

  8. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even though they were characterized as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  9. Hologram synthesis of three-dimensional real objects using portable integral imaging camera.

    PubMed

    Lee, Sung-Keun; Hong, Sung-In; Kim, Yong-Soo; Lim, Hong-Gi; Jo, Na-Young; Park, Jae-Hyeung

    2013-10-01

    We propose a portable hologram capture system based on integral imaging. An integral imaging camera with an integrated micro lens array captures the spatio-angular light ray distribution of the three-dimensional scene under incoherent illumination. The captured light ray distribution is then processed to synthesize the corresponding hologram. Experimental results show that the synthesized hologram is optically reconstructed successfully, demonstrating accommodation and motion parallax of the reconstructed three-dimensional scene.

  10. A testbed for wide-field, high-resolution, gigapixel-class cameras.

    PubMed

    Kittle, David S; Marks, Daniel L; Son, Hui S; Kim, Jungsang; Brady, David J

    2013-05-01

    The high resolution and wide field of view (FOV) of the AWARE (Advanced Wide FOV Architectures for Image Reconstruction and Exploitation) gigapixel class cameras present new challenges in calibration, mechanical testing, and optical performance evaluation. The AWARE system integrates an array of micro-cameras in a multiscale design to achieve gigapixel sampling at video rates. Alignment and optical testing of the micro-cameras is vital in compositing engines, which require pixel-level accurate mappings over the entire array of cameras. A testbed has been developed to automatically calibrate and measure the optical performance of the entire camera array. This testbed utilizes translation and rotation stages to project a ray into any micro-camera of the AWARE system. A spatial light modulator is projected through a telescope to form an arbitrary object space pattern at infinity. This collimated source is then reflected by an elevation stage mirror for pointing through the aperture of the objective into the micro-optics and eventually the detector of the micro-camera. Different targets can be projected with the spatial light modulator for measuring the modulation transfer function (MTF) of the system, fiducials in the overlap regions for registration and compositing, distortion mapping, illumination profiles, thermal stability, and focus calibration. The mathematics of the testbed mechanics are derived for finding the positions of the stages to achieve a particular incident angle into the camera, along with calibration steps for alignment of the camera and testbed coordinate axes. Measurement results for the AWARE-2 gigapixel camera are presented for MTF, focus calibration, illumination profile, fiducial mapping across the micro-camera for registration and distortion correction, thermal stability, and alignment of the camera on the testbed.

  11. A testbed for wide-field, high-resolution, gigapixel-class cameras

    NASA Astrophysics Data System (ADS)

    Kittle, David S.; Marks, Daniel L.; Son, Hui S.; Kim, Jungsang; Brady, David J.

    2013-05-01

    The high resolution and wide field of view (FOV) of the AWARE (Advanced Wide FOV Architectures for Image Reconstruction and Exploitation) gigapixel class cameras present new challenges in calibration, mechanical testing, and optical performance evaluation. The AWARE system integrates an array of micro-cameras in a multiscale design to achieve gigapixel sampling at video rates. Alignment and optical testing of the micro-cameras is vital in compositing engines, which require pixel-level accurate mappings over the entire array of cameras. A testbed has been developed to automatically calibrate and measure the optical performance of the entire camera array. This testbed utilizes translation and rotation stages to project a ray into any micro-camera of the AWARE system. A spatial light modulator is projected through a telescope to form an arbitrary object space pattern at infinity. This collimated source is then reflected by an elevation stage mirror for pointing through the aperture of the objective into the micro-optics and eventually the detector of the micro-camera. Different targets can be projected with the spatial light modulator for measuring the modulation transfer function (MTF) of the system, fiducials in the overlap regions for registration and compositing, distortion mapping, illumination profiles, thermal stability, and focus calibration. The mathematics of the testbed mechanics are derived for finding the positions of the stages to achieve a particular incident angle into the camera, along with calibration steps for alignment of the camera and testbed coordinate axes. Measurement results for the AWARE-2 gigapixel camera are presented for MTF, focus calibration, illumination profile, fiducial mapping across the micro-camera for registration and distortion correction, thermal stability, and alignment of the camera on the testbed.

  12. Trajectory Generation and Path Planning for Autonomous Aerobots

    NASA Technical Reports Server (NTRS)

    Sharma, Shivanjli; Kulczycki, Eric A.; Elfes, Alberto

    2007-01-01

    This paper presents global path planning algorithms for the Titan aerobot based on user defined waypoints in 2D and 3D space. The algorithms were implemented using information obtained through a planner user interface. The trajectory planning algorithms were designed to accurately represent the aerobot's characteristics, such as minimum turning radius. Additionally, trajectory planning techniques were implemented to allow for surveying of a planar area based solely on camera fields of view, airship altitude, and the location of the planar area's perimeter. The developed paths allow for planar navigation and three-dimensional path planning. These calculated trajectories are optimized to produce the shortest possible path while still remaining within realistic bounds of airship dynamics.

  13. Tortuous path chemical preconcentrator

    DOEpatents

    Manginell, Ronald P.; Lewis, Patrick R.; Adkins, Douglas R.; Wheeler, David R.; Simonson, Robert J.

    2010-09-21

    A non-planar, tortuous path chemical preconcentrator has a high internal surface area with a heatable sorptive coating that can be used to selectively collect and concentrate one or more chemical species of interest from a fluid stream; the collected species can then be rapidly released as a concentrated plug into an analytical or microanalytical chain for separation and detection. The non-planar chemical preconcentrator comprises a sorptive support structure having a tortuous flow path. The tortuosity provides repeated twists, turns, and bends to the flow, thereby increasing the interfacial contact between the sample fluid stream and the sorptive material. The tortuous path also provides more opportunities for desorption and readsorption of volatile species. Further, the thermal efficiency of the tortuous path chemical preconcentrator is comparable or superior to that of the prior non-planar chemical preconcentrator. Finally, the tortuosity can be varied in different directions to optimize flow rates during the adsorption and desorption phases of operation of the preconcentrator.

  14. A Path to Discovery

    ERIC Educational Resources Information Center

    Stegemoller, William; Stegemoller, Rebecca

    2004-01-01

    The path taken and the turns made as a turtle traces a polygon are examined to discover an important theorem in geometry. A unique tool, the Angle Adder, is implemented in the investigation. (Contains 9 figures.)

  15. CARTOGAM: a portable gamma camera

    NASA Astrophysics Data System (ADS)

    Gal, O.; Izac, C.; Lainé, F.; Nguyen, A.

    1997-02-01

    The gamma camera is designed to map radioactive sources against a visible background in quasi real time. The device is intended to locate sources from a distance during the preparation of interventions in active areas of nuclear installations, making it possible to optimize interventions, especially at the dosimetric level. The camera consists of a double-cone collimator, a scintillator and an intensified CCD camera. This detection chain forms both gamma images and visible images. Even though it is wrapped in a denal shield, the camera is still portable (mass < 15 kg) and compact (external diameter = 8 cm). The angular resolution is of the order of one degree for gamma rays of 1 MeV. In a few minutes, the device is able to measure a dose rate of 10 μGy/h delivered, for instance, by a 90 mCi source of 60Co located 10 m from the detector. The first images recorded in the laboratory will be presented and will illustrate the performance obtained with this camera.

  16. The Clementine longwave infrared camera

    SciTech Connect

    Priest, R.E.; Lewis, I.T.; Sewall, N.R.; Park, H.S.; Shannon, M.J.; Ledebuhr, A.G.; Pleasance, L.D.; Massie, M.A.; Metschuleit, K.

    1995-04-01

    The Clementine mission provided the first-ever complete, systematic surface mapping of the moon from the ultraviolet to the near-infrared regions. More than 1.7 million images of the moon, earth and space were returned from this mission. The longwave-infrared (LWIR) camera supplemented the UV/visible and near-infrared mapping cameras, providing limited strip coverage of the moon and giving insight into the thermal properties of the soils. This camera provided ~100 m spatial resolution at 400 km periselene, and a 7 km across-track swath. This 2.1 kg camera, using a 128 x 128 Mercury-Cadmium-Telluride (MCT) FPA, viewed thermal emission of the lunar surface and lunar horizon in the 8.0 to 9.5 μm wavelength region. A description of this light-weight, low-power LWIR camera along with a summary of lessons learned is presented. Design goals and preliminary on-orbit performance estimates are addressed in terms of meeting the mission's primary objective of flight qualifying the sensors for future Department of Defense flights.

  17. Strategic options towards an affordable high-performance infrared camera

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining costs as low as those of successful CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel uncooled InGaAs system with high sensitivity, low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption, not only by demonstrating the high-performance IR imaging demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for adoption in consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal plane arrays, and dense or ultra-small pixel pitch devices.

  18. Study of laser reflection of infrared cameras with germanium optics

    NASA Astrophysics Data System (ADS)

    Chiu, Patrio; Shih, Ishiang; Shi, S.; Laou, Philips

    2003-09-01

    Infrared cameras are widely used in today's battlefield for surveillance purposes. Because of retroreflection, an incident laser beam entering the camera optics results in a beam reflecting back towards the laser source. An IR detector positioned close to the laser source can then detect the reflected beam. This effect can reveal the location of the cameras and thus increases the risk to covert operations. In the present work, the characteristics of this retroreflection are studied. It is found that the reflection intensity is high when the incident beam enters through the middle part of the lenses, while it is low and the beam is diverged when it enters through the outer part of the lenses. The reflection is symmetric when the incident beam is normal to the lenses and asymmetric when it is incident at an angle to the lenses. In order to study the potential effects of modified camera optics on retroreflection, low-index IR slides (ZnSe and KCl, with refractive indices of 2.49 and 1.54, respectively) of different thicknesses (2 mm, 4 mm and 6 mm) are placed in the optical system. The results show that the focal point of the lenses is changed by the addition of the slide, but the optical paths of the reflection remain unchanged. The relationship between the different slides and the beam intensity is also studied.

  19. Medium format cameras used by NASA astronauts

    NASA Technical Reports Server (NTRS)

    Amsbury, David; Bremer, Jeff

    1989-01-01

    The medium format cameras and other hardware used for photographing the earth from the Space Shuttle are discussed. Illustrations and descriptions are given for the two types of cameras used for most earth photography, the NASA-modified Hasselblad 500 EL/M 70-mm cameras and the Linhof AeroTechnika 45 camera. Also, the data recording modules used on Space Shuttle missions and a mounting device to produce simultaneous photography using two cameras are examined.

  20. System Architecture of the Dark Energy Survey Camera Readout Electronics

    SciTech Connect

    Shaw, Theresa; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Chappa, Steve; de Vicente, Juan; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; Martinez, Gustavo; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed on the Blanco 4 m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will use 62 2Kx4K fully depleted charge-coupled devices (CCDs) for imaging and 12 2Kx2K CCDs for guiding, alignment and focus. This paper describes design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  1. The GISMO-2 Bolometer Camera

    NASA Technical Reports Server (NTRS)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  2. Perceptual Color Characterization of Cameras

    PubMed Central

    Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo

    2014-01-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
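
    The baseline this abstract improves on is easy to state in code: fit the 3 × 3 matrix by ordinary least squares over a set of training patches. The sketch below (Python/NumPy, with placeholder data) shows only that baseline; the paper's spherical-sampling search over perceptual errors is not reproduced here.

```python
import numpy as np

# Placeholder training data: camera responses and measured XYZ for 24 patches.
rgb = np.random.rand(24, 3)
xyz = np.random.rand(24, 3)

# Least-squares characterization: solve min_M ||rgb @ M - xyz||_F.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)

predicted_xyz = rgb @ M
print("RMS fit error:", np.sqrt(np.mean((predicted_xyz - xyz) ** 2)))
```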

  3. Dark Energy Camera for Blanco

    SciTech Connect

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  4. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor Active Pixel Sensor (CMOS APS), establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  5. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Data acquisition and processing for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes 1) for gamma cameras: a rotation center control (from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems: three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (from a Point Spread Function, PSF, acquisition) makes it possible to compute the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox, a free ImageJ plugin that will soon be downloadable from the Internet. Besides, the program can save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control program. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters, is free, run on
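
    The MTF computation mentioned above is standard enough to sketch: project the measured PSF to a line spread function and Fourier-transform it. The snippet below (Python/NumPy) is an illustrative reimplementation, not code from the ImageJ plugin.

```python
import numpy as np

def mtf_from_psf(psf, pixel_size_mm):
    """1-D MTF from a 2-D point spread function image.

    Collapses the PSF along one axis into a line spread function (LSF),
    then normalises the magnitude of its Fourier transform so MTF(0) = 1.
    Returns spatial frequencies [lp/mm] and the corresponding MTF values.
    """
    lsf = psf.sum(axis=0).astype(float)   # project PSF -> LSF
    lsf -= lsf.min()                      # crude background removal
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                         # normalise at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_size_mm)
    return freqs, mtf
```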

  6. a Method for Self-Calibration in Satellite with High Precision of Space Linear Array Camera

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Qian, Fangming; Miao, Yuzhe; Wang, Rongjian

    2016-06-01

    At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually performed with data from a ground calibration field after the images are captured. The entire process is complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. From a large number of on-orbit calibrations, we found that, owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field, so regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite, based on the optical autocollimation principle. A collimating light source and small matrix-array CCD devices are installed inside the satellite's payload system and share the same light path as the linear array camera. We extract the location changes of the cross marks on the matrix-array CCD to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained with this method. On the one hand, the camera's variations can be tracked accurately and its attitude adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be realized quickly, which improves the efficiency and reliability of photogrammetric processing.
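
    A minimal sketch of how the cross-mark motion could be turned into calibration updates, assuming (as the abstract suggests but does not spell out) that a scale change of the mark pattern indicates a focal-length change while a common shift indicates a pointing change; the names and the similarity-transform model are assumptions.

```python
import numpy as np

def estimate_drift(ref_pts, cur_pts, focal_mm):
    """Estimate focal-length and pointing drift from cross-mark motion.

    ref_pts, cur_pts : (N, 2) arrays of mark centroids on the matrix CCD
    [mm], at calibration time and now. Fits cur ~ s * ref + t: the scale
    s maps to a focal-length change, the shift t to a boresight tilt.
    """
    ref_c = ref_pts - ref_pts.mean(axis=0)
    cur_c = cur_pts - cur_pts.mean(axis=0)
    s = np.sum(ref_c * cur_c) / np.sum(ref_c ** 2)        # least-squares scale
    t = cur_pts.mean(axis=0) - s * ref_pts.mean(axis=0)   # residual shift [mm]
    d_focal = (s - 1.0) * focal_mm                        # focal change [mm]
    tilt_deg = np.degrees(np.arctan2(np.hypot(t[0], t[1]), focal_mm))
    return d_focal, tilt_deg
```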

  7. [New medical imaging based on electron tracking Compton camera (ETCC)].

    PubMed

    Tanimori, Toru; Kubo, Hidetoshi; Kabuki, Shigeto; Kimura, Hiroyuki

    2012-01-01

    We have developed an Electron-Tracking Compton Camera (ETCC) for medical imaging, exploiting its wide energy dynamic range (200-1,500 keV) and wide field of view (FOV, 3 sr). This camera has the potential to aid the development of new reagents. We have carried out several imaging reagent studies as examples: (1) simultaneous 18F-FDG and 131I-MIBG imaging for double clinical tracer imaging; (2) imaging of some minerals (Mn-54, Zn-65, Fe-59) in mice and plants. In addition, the ETCC has the potential for real-time monitoring of the Bragg peak location by imaging prompt gamma rays during beam therapy. We carried out a water phantom experiment using a 140 MeV proton beam and obtained images of both 511 keV and high-energy gamma rays (800-2,000 keV); the latter image showed the better correlation with the Bragg peak. Another potential of the ETCC is to reconstruct 3D images using only a one-head camera, without rotation of either the target or the camera. Good 3D images of a thyroid gland phantom and of a mouse with a tumor were obtained. In order to advance these features to practical use, we are improving all components and will then construct a multi-head ETCC system.
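
    The per-event direction reconstruction that distinguishes an ETCC from a conventional Compton camera is plain momentum conservation, which can be sketched directly (Python/NumPy; illustrative, not the authors' code):

```python
import numpy as np

ME_KEV = 511.0  # electron rest energy [keV]

def incident_gamma_direction(e_scat_kev, dir_scat, e_kin_kev, dir_elec):
    """Incident gamma direction from a fully measured Compton event.

    Momentum conservation: p_in = p_scattered + p_electron, with photon
    momentum |p| = E (in keV/c units) and the relativistic electron
    momentum computed from its measured kinetic energy and tracked
    direction.
    """
    p_scat = e_scat_kev
    p_elec = np.sqrt(e_kin_kev**2 + 2.0 * e_kin_kev * ME_KEV)
    p_in = (p_scat * np.asarray(dir_scat, float)
            + p_elec * np.asarray(dir_elec, float))
    return p_in / np.linalg.norm(p_in)

# The incident energy is simply E0 = e_scat_kev + e_kin_kev, so each event
# yields a full direction vector -- which is why no rotation of the camera
# or the target is needed for 3D imaging.
```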

  8. Characterization of a PET Camera Optimized for ProstateImaging

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi,Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, RonaldH.; Derenzo, Stephen E.

    2005-11-11

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor axis, 70 cm major axis). The distance between the detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors, for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate, reducing resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated prostate camera has the same sensitivity and resolution, less background (fewer randoms and a lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity for a point source in the center is 946 cps/μCi. Spatial resolution is 4 mm FWHM in the central region.

  9. Bayes Estimators for Phylogenetic Reconstruction

    PubMed Central

    Huggins, P. M.; Li, W.; Haws, D.; Friedrich, T.; Liu, J.; Yoshida, R.

    2011-01-01

    Tree reconstruction methods are often judged by their accuracy, measured by how close they get to the true tree. Yet most reconstruction methods, like maximum likelihood (ML), do not explicitly maximize this accuracy. To address this problem, we propose a Bayesian solution. Given tree samples, we propose finding the tree estimate that is closest on average to the samples. This “median” tree is known as the Bayes estimator (BE). The BE literally maximizes posterior expected accuracy, measured in terms of closeness (distance) to the true tree. We discuss a unified framework of BE trees, focusing especially on tree distances that are expressible as squared Euclidean distances. Notable examples include the Robinson–Foulds (RF) distance, the quartet distance, and the squared path difference. Using both simulated and real data, we show that BEs can be estimated in practice by hill-climbing. In our simulation, we find that BEs tend to be closer to the true tree than ML and neighbor joining. In particular, the BE under squared path difference tends to perform well in terms of both the path difference and RF distances. PMID:21471560
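
    A crude way to see the estimator in code: embed each sampled tree as a vector (e.g. its path-difference vector, so that squared Euclidean distance between vectors matches the squared tree distance) and pick the sample closest on average to all others. Restricting the search to the samples themselves (a medoid) is an assumption made here for brevity; the paper's hill-climbing can move beyond the sampled trees.

```python
import numpy as np

def bayes_estimator_index(tree_vectors):
    """Index of the posterior sample minimising mean squared distance to
    all samples, i.e. a medoid approximation of the Bayes estimator.

    tree_vectors : (n_samples, d) array, one embedded tree per row.
    """
    # Pairwise squared distances via ||a - b||^2 = |a|^2 + |b|^2 - 2 a.b
    sq = np.sum(tree_vectors ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * tree_vectors @ tree_vectors.T
    return int(np.argmin(d2.mean(axis=1)))
```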

  10. NIR spectrophotometric system based on a conventional CCD camera

    NASA Astrophysics Data System (ADS)

    Vilaseca, Meritxell; Pujol, Jaume; Arjona, Montserrat

    2003-05-01

    The near-infrared (NIR) spectral region is useful in many applications, including agriculture, the food and chemical industries, and textile and medical applications. In this region, spectral reflectance measurements are currently made with conventional spectrophotometers. These instruments are expensive, since they use a diffraction grating to obtain monochromatic light. In this work, we present a multispectral-imaging-based technique for obtaining the reflectance spectra of samples in the NIR region (800 - 1000 nm) from a small number of measurements taken through different channels of a conventional CCD camera. We used methods based on Wiener estimation, non-linear methods and principal component analysis (PCA) to reconstruct the spectral reflectance. We also analyzed, by numerical simulation, the number and shape of the filters needed to obtain good spectral reconstructions. We obtained the reflectance spectra of a set of 30 spectral curves using between 2 and 6 filters, under two different halogen lamps with color temperatures Tc1 = 2852 K and Tc2 = 3371 K. The results show that, using between three and five filters with a large spectral bandwidth (FWHM = 60 nm), the reconstructed spectral reflectance of the samples was very similar to the original spectrum. The small spectral reconstruction errors show the potential of this method for reconstructing spectral reflectances in the NIR range.
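
    Wiener estimation, the first reconstruction method named above, is compact enough to sketch: build the estimation matrix from training correlation matrices and apply it to new camera responses. The snippet below (Python/NumPy) is an illustrative textbook form, not the authors' code.

```python
import numpy as np

def wiener_reconstruct(responses, train_spectra, train_responses):
    """Reflectance spectra from camera responses by Wiener estimation.

    train_spectra   : (n, n_wavelengths) known training reflectances
    train_responses : (n, n_channels) camera responses for those samples
    responses       : (m, n_channels) responses to reconstruct
    The Wiener matrix is W = K_sc K_cc^{-1}, built from the training
    cross-correlation and autocorrelation matrices.
    """
    K_sc = train_spectra.T @ train_responses      # spectra-response corr.
    K_cc = train_responses.T @ train_responses    # response autocorr.
    W = K_sc @ np.linalg.inv(K_cc)
    return responses @ W.T
```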

  11. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    …analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source whose brightness varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by custom software that determines the integrated signal in each video frame and derives the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.

  12. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot

    PubMed Central

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2014-01-01

    Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. To image, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at a high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so the algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of a robot arm, and the error of the reconstructed phantom is within 0.67 mm on average compared to the model design. PMID:26158071

  13. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot.

    PubMed

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J

    2014-10-01

    Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. To image, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at a high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so the algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of a robot arm, and the error of the reconstructed phantom is within 0.67 mm on average compared to the model design.

  14. Combined Intra- and Extra-articular Reconstruction of the Anterior Cruciate Ligament: The Reconstruction of the Knee Anterolateral Ligament

    PubMed Central

    Helito, Camilo Partezani; Bonadio, Marcelo Batista; Gobbi, Riccardo Gomes; da Mota e Albuquerque, Roberto Freire; Pécora, José Ricardo; Camanho, Gilberto Luis; Demange, Marco Kawamura

    2015-01-01

    We present a new technique for the combined intra- and extra-articular reconstruction of the anterior cruciate ligament. Intra-articular reconstruction is performed in an outside-in manner according to the precepts of the anatomic femoral tunnel technique. Extra-articular reconstruction is performed with the gracilis tendon while respecting the anatomic parameters of the origin and insertion points and the path described for the knee anterolateral ligament. PMID:26258037

  15. Oblique Multi-Camera Systems - Orientation and Dense Matching Issues

    NASA Astrophysics Data System (ADS)

    Rupnik, E.; Nex, F.; Remondino, F.

    2014-03-01

    The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing the use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper gives an overview of current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  16. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras.

    PubMed

    Kristoffersen, Miklas S; Dueholm, Jacob V; Gade, Rikke; Moeslund, Thomas B

    2016-01-05

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
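
    The clustering step of the counting pipeline can be illustrated in a few lines; the sketch below (Python with scikit-learn's DBSCAN, which the paper does not necessarily use) counts pedestrian-sized clusters in one frame's reconstructed point cloud. The parameter values are illustrative, and the paper's tracking of clusters across frames is not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_pedestrians(points_3d, eps_m=0.4, min_points=30):
    """Count pedestrians in one frame by clustering 3-D points.

    points_3d : (N, 3) array of points [m] reconstructed from the stereo
    thermal pair. Points within eps_m of each other are grouped; groups
    with at least min_points members count as one pedestrian.
    """
    if len(points_3d) == 0:
        return 0
    labels = DBSCAN(eps=eps_m, min_samples=min_points).fit_predict(points_3d)
    return len(set(labels) - {-1})   # label -1 is DBSCAN's noise bucket
```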

  17. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras.

    PubMed

    Kristoffersen, Miklas S; Dueholm, Jacob V; Gade, Rikke; Moeslund, Thomas B

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences. PMID:26742047

  18. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    PubMed Central

    Kristoffersen, Miklas S.; Dueholm, Jacob V.; Gade, Rikke; Moeslund, Thomas B.

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences. PMID:26742047

  19. Linearisation of RGB Camera Responses for Quantitative Image Analysis of Visible and UV Photography: A Comparison of Two Techniques

    PubMed Central

    Garcia, Jair E.; Dyer, Adrian G.; Greentree, Andrew D.; Spring, Gale; Wilksch, Philip A.

    2013-01-01

    Linear camera responses are required for recovering the total amount of incident irradiance, quantitative image analysis, spectral reconstruction from camera responses and characterisation of spectral sensitivity curves. Two commercially-available digital cameras equipped with Bayer filter arrays and sensitive to visible and near-UV radiation were characterised using biexponential and Bézier curves. Both methods successfully fitted the entire characteristic curve of the tested devices, allowing for an accurate recovery of linear camera responses, particularly those corresponding to the middle of the exposure range. Nevertheless the two methods differ in the nature of the required input parameters and the uncertainty associated with the recovered linear camera responses obtained at the extreme ends of the exposure range. Here we demonstrate the use of both methods for retrieving information about scene irradiance, describing and quantifying the uncertainty involved in the estimation of linear camera responses. PMID:24260244
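
    A sketch of the biexponential route (Python/SciPy): fit an assumed biexponential characteristic curve to a calibration series, then invert it numerically to linearise raw responses. The exact parametrisation used in the paper may differ, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(x, a1, b1, a2, b2):
    """Assumed saturating biexponential response-vs-exposure curve."""
    return a1 * (1.0 - np.exp(-b1 * x)) + a2 * (1.0 - np.exp(-b2 * x))

# Synthetic calibration series: exposures and noisy grey-level responses.
exposure = np.linspace(0.01, 1.0, 20)
response = biexponential(exposure, 150, 3.0, 100, 0.5) + np.random.normal(0, 1, 20)

params, _ = curve_fit(biexponential, exposure, response, p0=(100, 1, 100, 1))

# Numerical inversion: map measured responses back to linear exposure values.
grid = np.linspace(0.0, 1.2, 4096)
linear = np.interp(response, biexponential(grid, *params), grid)
```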

  20. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled from two or three electronic subassemblies at very low labor cost, and permits camera control and interface circuitry to be realized by assemblages of modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be accomplished easily. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for image acquisition, which is performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by metal plates, resulting in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent in a small, lightweight, and rugged image-sensing component. Available modules permit the bus-structured Formatter to be

  1. Compact large FoV gamma camera for breast molecular imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Cinti, M. N.; Pellegrini, R.; Betti, M.; Devincentis, G.; Bennati, P.; Ridolfi, S.; Iurlaro, G.; Montani, L.; Scafè, R.; Marini, M.; Porfiri, L. M.; Giachetti, G.; Baglini, F.; Salvadori, G.; Madesani, M.; Pieracci, M.; Catarsi, F.; Bigongiari, A.

    2006-12-01

    The very low sensitivity of scintimammography for tumours under 1 cm in diameter with current nuclear medicine cameras is the major limitation in recommending this modality for screening purposes. To improve this diagnostic technique, a new concept of scintillation gamma camera, which best fits the requirements of functional breast imaging, has been developed within the "Integrated Mammographic Imaging" (IMI) project. This camera consists of a large detection head (6″×7″), very compact and light in weight so that it can easily be positioned in the same X-ray geometry. The detection head consists of a matrix of 42 Hamamatsu 1-inch-square H8520-C12 PSPMT photodetectors, closely packed and coupled to a NaI(Tl) scintillating array with individual crystal pixels of 2×2×6 mm³. The large-FoV camera shows very good pixel identification in the detection dead zones between tubes, allowing an accurate LUT correction of the final reconstructed image. The electronic read-out was specially designed to optimize the intrinsic spatial resolution and camera compactness. With respect to an Anger camera, the overall spatial resolution is improved by up to 40%, while the overall energy resolution is ~16% at 140 keV. The large-FoV dedicated camera has been characterized and tested in phantom studies, and clinical trials are currently being performed. For all patients, compression views have been acquired for both breasts in craniocaudal projection and are compared with standard gamma camera images.

  2. The European Fireball Network 2010 - Status and Results of Cameras in Germany

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Heinlein, D.; Grau, T.; Flohrer, J.

    2011-10-01

    The European Fireball Network (EN) has been in continuous operation since 1966 (Fig. 1). Since 1995, the observing stations in Germany have been operated by the DLR Institute of Planetary Research. The stations in Germany are of the classical type: cameras on a tripod that look down at, and image, a spherical mirror. Rotating shutters mounted in front of the camera lenses provide velocity information for the fast-moving meteors. The cameras are equipped with film; typically, one long-exposure image is taken every night. In 2010, 15 cameras were in regular operation, and 36 fireballs were recorded on 82 photographs, representing an average "fireball yield". Fireball co-registrations with other EN stations were possible in 20 cases, and with other camera types in 3 cases. Data reduction and orbit reconstruction (carried out at Ondřejov Observatory by P. Spurný and team) were possible for 1 meteor. The brightest meteor recorded in 2010 had a magnitude of -13. Progress has been made in the development of a prototype digital camera. Quite remarkably, 2 meteorite falls were recovered in the area monitored by the cameras, mainly by using eyewitness reports to guide the meteorite search; owing to weather and daylight hours, no camera images of these falls could be obtained. This contribution describes the activities and results of 2010.

  3. Locomotive wheel 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Luo, Zhisheng; Gao, Xiaorong; Wu, Jianle

    2010-08-01

    In this article, a system used to reconstruct locomotive wheels is described, helping workers assess the condition of a wheel through a direct view. The system consists of a line laser, a 2D camera, and a computer. The camera captures the laser line reflected by the object, a wheel, and the coordinates of the structured-light points are then computed. Finally, using the Matlab programming language, we transform the point coordinates into a smooth surface and display a 3D view of the wheel. The article also presents the system structure, the processing steps and methods, and an experimental platform set up to verify the design proposal. We verify the feasibility of the whole process and analyze the results against standard data. The test results show that the system works well and reconstructs with high accuracy. Because no such application is yet deployed in the railway industry, the system has practical value for railway inspection.
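
    The triangulation at the heart of such a line-laser scanner reduces to intersecting a back-projected camera ray with the calibrated laser plane; the sketch below (Python/NumPy, with illustrative names) shows that step for one stripe pixel.

```python
import numpy as np

def triangulate_laser_point(pixel, K, plane_normal, plane_d):
    """3-D point from one laser-stripe pixel (laser triangulation).

    pixel        : (u, v) image coordinates on the detected stripe
    K            : 3x3 camera intrinsic matrix (from prior calibration)
    plane_normal : unit normal n of the laser plane in the camera frame
    plane_d      : offset d, so points X on the plane satisfy n . X + d = 0
    """
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # back-project
    t = -plane_d / (plane_normal @ ray)   # ray parameter where it meets the plane
    return t * ray                        # 3-D point in the camera frame
```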

  4. Angular sensitivity of gated microchannel plate framing cameras

    SciTech Connect

    Landen, O. L.; Lobban, A.; Tutt, T.; Bell, P. M.; Costa, R.; Hargrove, D. R.; Ze, F.

    2001-01-01

    Gated, microchannel-plate-based (MCP) framing cameras have been deployed worldwide for 0.2-9 keV x-ray imaging and spectroscopy of transient plasma phenomena. For a variety of spectroscopic and imaging applications, the angular sensitivity of MCPs must be known to interpret the data correctly. We present systematic measurements of angular sensitivity at discrete relevant photon energies and arbitrary MCP gain. The results can be accurately predicted by using a simple two-dimensional approximation to the three-dimensional MCP geometry and by averaging over all possible photon ray paths.

  5. Angular Sensitivity of Gated Micro-Channel Plate Framing Cameras

    SciTech Connect

    Landen, O L; Lobban, A; Tutt, T; Bell, P M; Costa, R; Ze, F

    2000-07-24

    Gated, microchannel-plate-based (MCP) framing cameras have been deployed worldwide for 0.2 - 9 keV x-ray imaging and spectroscopy of transient plasma phenomena. For a variety of spectroscopic and imaging applications, the angular sensitivity of MCPs must be known to interpret the data correctly. We present systematic measurements of angular sensitivity at discrete relevant photon energies and arbitrary MCP gain. The results can be accurately predicted by using a simple 2D approximation to the 3D MCP geometry and by averaging over all possible photon ray paths.

  6. Three-dimensional temperature field measurement of flame using a single light field camera.

    PubMed

    Sun, Jun; Xu, Chuanlong; Zhang, Biao; Hossain, Md Moinul; Wang, Shimin; Qi, Hong; Tan, Heping

    2016-01-25

    Compared with a conventional camera, a light field camera has the advantage of simultaneously recording the direction and the intensity of each ray projected onto the CCD (charge-coupled device) sensor. In this paper, a novel method is proposed for reconstructing the three-dimensional (3-D) temperature field of a flame using a single light field camera. A radiative imaging model of a single light field camera is also developed for the flame. In this model, the principal ray represents the beam projected onto a pixel of the CCD sensor. The direction of each ray leaving the flame outside the camera is obtained from the thin-lens equation of geometrical optics. The intensities of the principal rays recorded by the pixels of the CCD sensor are modeled mathematically with the radiative transfer equation. The temperature distribution of the flame is then reconstructed by solving this model with the least-squares QR-factorization (LSQR) algorithm. Numerical simulations and experiments are carried out to investigate the validity of the proposed method. The results presented in this study show that the proposed method is capable of reconstructing the 3-D temperature field of a flame.
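
    The final reconstruction step maps to a sparse linear least-squares problem, for which SciPy's LSQR can stand in. In the sketch below (illustrative, with a random matrix in place of the paper's radiative ray-integration model) `A` couples unknown voxel source terms to pixel readings; in the real method the temperature field follows from the recovered source terms.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Placeholder linear imaging model b = A t: rows are principal rays, columns
# are flame voxels; the real A comes from the radiative transfer model.
n_rays, n_voxels = 5000, 1000
A = sparse_random(n_rays, n_voxels, density=0.01, format="csr", random_state=0)
t_true = np.random.rand(n_voxels)
b = A @ t_true

# LSQR solves the sparse least-squares system; `damp` adds Tikhonov-style
# regularisation that ill-conditioned real data usually need.
t_est = lsqr(A, b, damp=1e-3)[0]
print("relative error:", np.linalg.norm(t_est - t_true) / np.linalg.norm(t_true))
```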

  7. Three-dimensional temperature field measurement of flame using a single light field camera.

    PubMed

    Sun, Jun; Xu, Chuanlong; Zhang, Biao; Hossain, Md Moinul; Wang, Shimin; Qi, Hong; Tan, Heping

    2016-01-25

    Compared with a conventional camera, a light field camera has the advantage of simultaneously recording the direction and the intensity of each ray projected onto the CCD (charge-coupled device) sensor. In this paper, a novel method is proposed for reconstructing the three-dimensional (3-D) temperature field of a flame using a single light field camera. A radiative imaging model of a single light field camera is also developed for the flame. In this model, the principal ray represents the beam projected onto a pixel of the CCD sensor. The direction of each ray leaving the flame outside the camera is obtained from the thin-lens equation of geometrical optics. The intensities of the principal rays recorded by the pixels of the CCD sensor are modeled mathematically with the radiative transfer equation. The temperature distribution of the flame is then reconstructed by solving this model with the least-squares QR-factorization (LSQR) algorithm. Numerical simulations and experiments are carried out to investigate the validity of the proposed method. The results presented in this study show that the proposed method is capable of reconstructing the 3-D temperature field of a flame. PMID:26832496

  8. Reliability improvement of low-cost camera for microsatellite

    NASA Astrophysics Data System (ADS)

    Zhou, Jiankang; Chen, Xinhua; Chen, Yuheng; Zhou, Wang; Shen, Weimin

    2009-07-01

    Remote sensing is one of the most effective means for environment monitoring, resource management, national security and so on, but existing conventional satellites are too expensive for common users to afford. Microsatellites can reduce the cost and optimize the image products for specific applications; the space camera is one of their important payloads. The trade-off faced in a cost-driven camera design is how to reduce cost while still achieving the required reliability. This paper introduces our path to developing a reliable, low-cost space camera. The space camera has two main parts: the optical system and the camera circuits. Commercial off-the-shelf (COTS) lenses have difficulty maintaining their imaging performance in the space environment. Our optical system adopts a catadioptric layout, so its temperature sensitivity is low. The material and structure of the camera lens can bear the vibration and shock of launch, and its mechanical reliability has been verified through mechanical testing. A window made of synthetic fused silica protects the lens and CCD sensor from space radiation. The optical system achieves a compact structure, a wide temperature range, a large relative aperture and high imaging quality, and has passed mechanical, thermal-cycling and vacuum-thermal testing. A modular concept is used in the space camera circuitry, which is composed of seven modules: a power supply unit, a microcontroller unit, a waveform generator unit, a CCD unit, a CCD signal processor unit, an LVDS unit, and a current-surge restraint unit. The modular concept and the use of plastic-encapsulated microcircuits (PEMs) simplify the design, improve maintainability, and minimize size, mass and power consumption. Destructive physical analysis (DPA), screening and board-level burn-in are used to select the PEMs that can replace hermetically sealed microcircuits (HSMs). Derating, redundancy, thermal dissipation, software error detection and so on are adopted in the

  9. Reconstructing the temporal progression of HIV-1 immune response pathways

    PubMed Central

    Jain, Siddhartha; Arrais, Joel; Venkatachari, Narasimhan J.; Ayyavoo, Velpandi; Bar-Joseph, Ziv

    2016-01-01

    Motivation: Most methods for reconstructing response networks from high-throughput data generate static models that cannot distinguish between early and late response stages. Results: We present TimePath, a new method that integrates time series and static datasets to reconstruct dynamic models of the host response to a stimulus. TimePath uses an Integer Programming formulation to select a subset of pathways that, together, explain the observed dynamic responses. Applying TimePath to study the human response to HIV-1 led to accurate reconstruction of several known regulatory and signaling pathways and to novel mechanistic insights. We experimentally validated several of TimePath's predictions, highlighting the usefulness of temporal models. Availability and Implementation: Data, Supplementary text and the TimePath software are available from http://sb.cs.cmu.edu/timepath Contact: zivbj@cs.cmu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307624

  10. Sampling diffusive transition paths

    SciTech Connect

    Miller, Thomas F., III; Predescu, Cristian

    2006-10-12

    We address the problem of sampling double-ended diffusive paths. The ensemble of paths is expressed using a symmetric version of the Onsager-Machlup formula, which only requires evaluation of the force field and which, upon direct time discretization, gives rise to a symmetric integrator that is accurate to second order. Efficiently sampling this ensemble requires avoiding the well-known stiffness problem associated with sampling infinitesimal Brownian increments of the path, as well as a different type of stiffness associated with sampling the coarse features of long paths. The fine-features sampling stiffness is eliminated with the use of the fast sampling algorithm (FSA), and the coarse-feature sampling stiffness is avoided by introducing the sliding and sampling (S&S) algorithm. A key feature of the S&S algorithm is that it enables massively parallel computers to sample diffusive trajectories that are long in time. We use the algorithm to sample the transition path ensemble for the structural interconversion of the 38-atom Lennard-Jones cluster at low temperature.

  11. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, stereo vision or 3D reconstruction with multiple cameras, as well as applications requiring pulsed illumination, require multiple cameras to be synchronized. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor, based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented: a single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface, while the remaining cameras are set up in Slave mode and interfaced directly with the Master camera control module. This enables the slave cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
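
    The per-camera frequency regulation described above is, at its core, a clamped proportional loop. A minimal software model of one control step might look like the sketch below; the names, gains and voltage limits are illustrative, since the real core runs in FPGA logic.

```python
def update_voltage(v_now, period_measured, period_target,
                   gain=0.001, v_min=1.6, v_max=2.1):
    """One proportional control step for a slave camera's supply voltage.

    A longer-than-target line period means the camera runs too slowly, so
    the voltage is raised (higher supply -> faster self-timed oscillator);
    the result is clamped to safe sensor limits.
    """
    error = period_measured - period_target      # in sampling-clock ticks
    v_next = v_now + gain * error
    return min(max(v_next, v_min), v_max)
```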

  12. Hyperspectral imaging using a color camera and its application for pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  13. Stratoscope 2 integrating television camera

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The development, construction, testing and delivery of an integrating television camera for use as the primary data sensor on Flight 9 of Stratoscope 2 are described. The system block diagrams are presented along with the performance data and a definition of the interface of the telescope with the power, telemetry, and communication systems.

  14. Making Films without a Camera.

    ERIC Educational Resources Information Center

    Cox, Carole

    1980-01-01

    Describes draw-on filmmaking as an exciting way to introduce children to the plastic, fluid nature of the film medium, to develop their appreciation and understanding of divergent cinematic techniques and themes, and to invite them into the dream world of filmmaking without the need for a camera. (AEA)

  15. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  16. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  17. Measuring Distances Using Digital Cameras

    ERIC Educational Resources Information Center

    Kendal, Dave

    2007-01-01

    This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
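
    In the simplest case, with the object plane parallel to the image plane, such a distance calculation reduces to the pinhole similar-triangle relation; a minimal sketch with assumed parameter names:

        def object_distance(f_mm, object_height_m, image_height_px, pixel_pitch_mm):
            # Pinhole model: distance = f * H / h, with h the object's
            # height on the sensor (valid for parallel object/image planes).
            image_height_mm = image_height_px * pixel_pitch_mm
            return f_mm * object_height_m / image_height_mm  # metres

        # e.g. a 50 mm lens and a 1.8 m object spanning 600 px on a
        # sensor with 0.006 mm pixel pitch:
        print(object_distance(50.0, 1.8, 600, 0.006))  # -> 25.0 m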

  18. Camera assisted multimodal user interaction

    NASA Astrophysics Data System (ADS)

    Hannuksela, Jari; Silvén, Olli; Ronkainen, Sami; Alenius, Sakari; Vehviläinen, Markku

    2010-01-01

    Since more processing power and new sensing and display technologies are already available in mobile devices, there has been increased interest in building systems that communicate via different modalities such as speech, gesture, expression, and touch. In context identification based user interfaces, these independent modalities are combined to create new ways for users to interact with hand-helds. While these are unlikely to completely replace traditional interfaces, they will considerably enrich and improve the user experience and task performance. We demonstrate a set of novel user interface concepts that rely on the built-in sensors of modern mobile devices for recognizing the context and sequences of actions. In particular, we use the camera to detect whether the user is watching the device, for instance, to make the decision to turn on the display backlight. In our approach the motion sensors are first employed for detecting the handling of the device. Then, based on ambient illumination information provided by a light sensor, the cameras are turned on. The front camera is used for face detection, while the back camera provides supplemental contextual information. The subsequent applications triggered by the context can be, for example, image capturing or bar code reading.

  19. Gamma-ray camera flyby

    SciTech Connect

    2010-01-01

    Animation based on an actual classroom demonstration of the prototype CCI-2 gamma-ray camera's ability to image a hidden radioactive source, a cesium-137 line source, in three dimensions. For more information see http://newscenter.lbl.gov/feature-stories/2010/06/02/applied-nuclear-physics/.

  20. The Camera Comes to Court.

    ERIC Educational Resources Information Center

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  1. High-speed pulse camera

    NASA Technical Reports Server (NTRS)

    Lawson, J. R.

    1968-01-01

    Miniaturized, 16 mm high speed pulse camera takes spectral photometric photographs upon instantaneous command. The design includes a low-friction, low-inertia film transport, a very thin beryllium shutter driven by a low-inertia stepper motor for minimum actuation time after a pulse command, and a binary encoder.

  2. Fuzzy Visual Path Following by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Hamissi, A.; Bazoula, A.

    2008-06-01

    We present in this work a variant of a visual navigation method developed for path following by a nonholonomic mobile robot moving in an environment free of obstacles. Only an embedded CCD camera is used for perception. The integration of perception and action leads us to develop, first, a method for extracting the useful information from each acquired image and, second, a control approach using fuzzy logic.

  3. Kinect v2 and RGB Stereo Cameras Integration for Depth Map Enhancement

    NASA Astrophysics Data System (ADS)

    Ravanelli, R.; Nascetti, A.; Crespi, M.

    2016-06-01

    Today range cameras are widespread low-cost sensors based on two different principles of operation: we can distinguish between Structured Light (SL) range cameras (Kinect v1, Structure Sensor, ...) and Time Of Flight (ToF) range cameras (Kinect v2, ...). Both types are easy-to-use 3D scanners, able to reconstruct dense point clouds at high frame rate. However, the depth maps obtained are often noisy and not accurate enough, so it is generally essential to improve their quality. Standard RGB cameras can be a valuable solution to solve this issue. The aim of this paper is therefore to evaluate the feasibility of integrating these two different 3D modelling techniques, characterized by complementary features and based on standard low-cost sensors. For this purpose, a 3D model of a DUPLO bricks construction was reconstructed both with the Kinect v2 range camera and by processing one stereo pair acquired with a Canon EOS 1200D DSLR camera. The scale of the photogrammetric model was retrieved from the coordinates measured by Kinect v2. The preliminary results are encouraging and show that the foreseen integration could lead to a higher metric accuracy and a greater level of completeness than those obtained by using the separate techniques alone.

  4. Transitional Information in Spatial Serial Memory: Path Characteristics Affect Recall Performance

    ERIC Educational Resources Information Center

    Parmentier, Fabrice B. R.; Elford, Greg; Mayberry, Murray

    2005-01-01

    This study examined the role of stimulus characteristics in a visuospatial order reconstruction task in which participants were required to recall the order of sequences of spatial locations. The complexity of the to-be-remembered sequences, as measured by path crossing, path length, and angles, was found to affect serial memory, in terms of both…

  5. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  6. Counting paths in digraphs

    SciTech Connect

    Sullivan, Blair D; Seymour, Dr. Paul Douglas

    2010-01-01

    Say a digraph is k-free if it has no directed cycles of length at most k, for k ∈ Z⁺. Thomasse conjectured that the number of induced 3-vertex directed paths in a simple 2-free digraph on n vertices is at most (n-1)n(n+1)/15. We present an unpublished result of Bondy proving there are at most 2n³/25 such paths, and prove that for the class of circular interval digraphs, an upper bound of n³/16 holds. We also study the problem of bounding the number of (non-induced) 4-vertex paths in 3-free digraphs. We show an upper bound of 4n⁴/75 using Bondy's result for Thomasse's conjecture.
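
    For small digraphs the quantity being bounded can be checked directly. The brute-force counter below (a sketch, O(n³)) counts induced 3-vertex directed paths u->v->w, i.e. triples whose only arcs are u->v and v->w:

        import itertools

        def count_induced_p3(edges, n):
            # Count induced 3-vertex directed paths in a simple digraph
            # on vertices 0..n-1, given as a list of arcs (u, v).
            E = set(edges)
            count = 0
            for u, v, w in itertools.permutations(range(n), 3):
                if (u, v) in E and (v, w) in E:
                    if not ({(u, w), (w, u), (v, u), (w, v)} & E):
                        count += 1
            return count

        # A directed 5-cycle is 2-free; each of its 5 consecutive arc
        # pairs forms one induced path, well below (n-1)n(n+1)/15 = 8.
        edges = [(i, (i + 1) % 5) for i in range(5)]
        print(count_induced_p3(edges, 5))  # -> 5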

  7. Recent advances in digital camera optics

    NASA Astrophysics Data System (ADS)

    Ishiguro, Keizo

    2012-10-01

    The digital camera market has expanded enormously in the last ten years. The zoom lens is the key factor determining digital camera body size and image quality. Its technologies have been based on several analog technological advances, including methods of aspherical lens manufacturing and image stabilization mechanisms. Panasonic is one of the pioneers of both technologies. I will introduce previous trends in zoom lens optics as well as original optical technologies of the Panasonic digital camera "LUMIX", in addition to the optics of a 3D camera system. I will also consider future trends in digital cameras.

  8. An investigation of photosensor aperture shaping in facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Huck, F. O.; Wall, S. D.

    1972-01-01

    Optical-mechanical scanning techniques are generally employed in instruments specifically designed to spectrally or radiometrically characterize variations in scene brightness. The effect of aliasing, which can be caused by line scan sampling, on the spatial detail of the reconstructed image has, therefore, been of little concern. Emphasis of some recent applications of optical-mechanical scanning techniques in facsimile cameras is, however, on the spatial characterization of the scene which, as is shown, can be severely degraded by aliasing. The characteristics of aliasing are analyzed to establish quantitative bounds, and photosensor aperture shaping and line scan spacing are investigated as a means for reducing this degradation.

  9. Mobile transporter path planning

    NASA Technical Reports Server (NTRS)

    Baffes, Paul; Wang, Lui

    1990-01-01

    The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA are also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
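
    To make those ingredients concrete, here is a toy GA in Python that evolves a sequence of 2D waypoints between fixed start and goal points, using tournament selection and one-point crossover as mentioned above. The encoding, fitness and constants are illustrative, not those of the transporter study; greedy crossover and local optimization would slot in as alternative operators.

        import random

        START, GOAL, N = (0.0, 0.0), (10.0, 10.0), 5  # 5 free waypoints

        def random_path():
            return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]

        def length(path):  # fitness: total path length (lower is better)
            pts = [START] + path + [GOAL]
            return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                       for a, b in zip(pts, pts[1:]))

        def tournament(pop, k=3):  # best of k random individuals
            return min(random.sample(pop, k), key=length)

        def crossover(p1, p2):  # one-point crossover of waypoint lists
            cut = random.randrange(1, N)
            return p1[:cut] + p2[cut:]

        def mutate(path, sigma=0.5):  # jitter one waypoint
            i = random.randrange(N)
            x, y = path[i]
            path[i] = (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            return path

        pop = [random_path() for _ in range(50)]
        for _ in range(200):
            pop = [mutate(crossover(tournament(pop), tournament(pop)))
                   for _ in pop]
        print(f"best length: {length(min(pop, key=length)):.2f}")  # ~14.14 ideal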

  10. Slant path range gated imaging of static and moving targets

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Elmqvist, Magnus; Karlsson, Kjell; Gustafsson, Ove; Chevalier, Tomas

    2012-06-01

    This paper will report experiments and analysis of slant path imaging using 1.5 μm and 0.8 μm gated imaging. The investigation is a follow-up on the measurements reported last year at the laser radar conference at SPIE Orlando. The sensor, a SWIR camera, collected both passive and active images along a 2 km long path over an airfield. The sensor was elevated by a lift in steps from 1.6-13.5 meters. Targets were resolution charts and also human targets. The human target was holding various items and also performing certain tasks, some of high relevance in defence and security. One of the main purposes of this investigation was to compare the recognition of these human targets and their activities with the resolution information obtained from conventional resolution charts. The data collection of human targets was also made from our rooftop laboratory at about 13 m height above ground. The turbulence was measured along the path with anemometers and scintillometers. The camera collected both passive and active images in the SWIR region. We also included the Obzerv camera working at 0.8 μm in some tests. The paper will present images for both passive and active modes obtained at different elevations and discuss the results from both technical and system perspectives.

  11. Tree STEM Reconstruction Using Vertical Fisheye Images: a Preliminary Study

    NASA Astrophysics Data System (ADS)

    Berveglieri, A.; Tommaselli, A. M. G.

    2016-06-01

    A preliminary study was conducted to assess a tree stem reconstruction technique with panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral view are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images and with the advantage of using fewer images and taken from one single station.

  12. Paved Path for Opportunity

    NASA Technical Reports Server (NTRS)

    2006-01-01

    As NASA's Mars Exploration Rover Opportunity continues a southward trek from 'Erebus Crater' toward 'Victoria Crater,' the terrain consists of large sand ripples and patches of flat-lying rock outcrops, as shown in this image. Whenever possible, rover planners keep Opportunity on the 'pavement' for best mobility.

    This false-color image mosaic was assembled using images acquired by the panoramic camera on Opportunity's 784th sol (April 8, 2006) at about 11:45 a.m. local solar time. The camera used its 753-nanometer, 535-nanometer and 432-nanometer filters. This view shows a portion of the outcrop named 'Bosque,' including rover wheel tracks, fractured and finely-layered outcrop rocks and smaller, dark cobbles littered across the surface.

  13. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  14. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.

    1990-09-01

    This monthly report summarizes the technical progress and project status for the Hanford Environmental Dose Reconstruction (HEDR) Project being conducted at the Pacific Northwest Laboratory (PNL) under the direction of a Technical Steering Panel (TSP). The TSP is composed of experts in numerous technical fields related to this project and represents the interests of the public. The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms, environmental transport, environmental monitoring data, demographics, agriculture, food habits, environmental pathways and dose estimates. 3 figs.

  15. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  16. Coherence-path duality relations for N paths

    NASA Astrophysics Data System (ADS)

    Hillery, Mark; Bagan, Emilio; Bergou, Janos; Cottrell, Seth

    2016-05-01

    For an interferometer with two paths, there is a relation between the information about which path the particle took and the visibility of the interference pattern at the output. The more path information we have, the smaller the visibility, and vice versa. We generalize this relation to a multi-path interferometer, and we substitute two recently defined measures of quantum coherence for the visibility, which results in two duality relations. The path information is provided by attaching a detector to each path. In the first relation, which uses an l1 measure of coherence, the path information is obtained by applying the minimum-error state discrimination procedure to the detector states. In the second, which employs an entropic measure of coherence, the path information is the mutual information between the detector states and the result of measuring them. Both approaches are quantitative versions of complementarity for N-path interferometers. Support provided by the John Templeton Foundation.
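
    For orientation, the familiar two-path trade-off that these relations generalize can be written, in Englert's distinguishability-visibility form, as follows; the N-path relations of the abstract replace the visibility V with an l1 or entropic coherence measure:

        % Two-path duality: D is the path distinguishability obtained
        % from the detector states, V the output fringe visibility.
        \[
          \mathcal{D}^{2} + \mathcal{V}^{2} \le 1
        \]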

  17. The ITER Radial Neutron Camera Detection System

    SciTech Connect

    Marocco, D.; Belli, F.; Esposito, B.; Petrizzi, L.; Riva, M.; Bonheure, G.; Kaschuck, Y.

    2008-03-12

    A multichannel neutron detection system (Radial Neutron Camera, RNC) will be installed on the ITER equatorial port plug 1 for total neutron source strength, neutron emissivity/ion temperature profiles and n_t/n_d ratio measurements [1]. The system is composed of two fan-shaped collimating structures: an ex-vessel structure, looking at the plasma core, containing three sets of 12 collimators (each set lying on a different toroidal plane), and an in-vessel structure, containing 9 collimators, for plasma edge coverage. The RNC detecting system will work in a harsh environment (neutron flux up to 10^8-10^9 n/cm^2·s, magnetic field >0.5 T for in-vessel detectors), should provide both counting and spectrometric information and should be flexible enough to cover the high neutron flux dynamic range expected during the different ITER operation phases. ENEA has been involved in several activities related to RNC design and optimization [2,3]. In the present paper the up-to-date design and the neutron emissivity reconstruction capabilities of the RNC will be described. Different options for detectors suitable for spectrometry and counting (e.g. scintillators and diamonds), focusing on the implications in terms of overall RNC performance, will be discussed. The increase of the RNC capabilities offered by the use of new digital data acquisition systems will also be addressed.

  18. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing camera parameters, a rigid bar, carrying two markers at known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  19. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing camera parameters, a rigid bar, carrying two markers at known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846

  20. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing camera parameters, a rigid bar, carrying two markers at known distance, was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.
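
    The rigid-bar check used in these studies is easy to reproduce once 3D marker positions are available. A minimal sketch (assumed names; units in mm) that turns reconstructed marker pairs into the reported inter-marker distance errors:

        import numpy as np

        def bar_length_errors(p1, p2, nominal_mm):
            # p1, p2: (N, 3) reconstructed positions of the two bar
            # markers over N acquisitions; returns mean |error| and RMS.
            d = np.linalg.norm(p1 - p2, axis=1)
            err = d - nominal_mm
            return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))

        # e.g. simulated reconstructions of a 250 mm bar with ~1 mm noise
        rng = np.random.default_rng(1)
        p1 = rng.normal(0.0, 1.0, (100, 3))
        p2 = p1 + np.array([250.0, 0.0, 0.0]) + rng.normal(0.0, 1.0, (100, 3))
        print("mean |err| = %.2f mm, RMS = %.2f mm" % bar_length_errors(p1, p2, 250.0))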

  1. Noise evaluation of Compton camera imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

    Compton Cameras emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact in the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented. The complete chain of

  2. Noise evaluation of Compton camera imaging for proton therapy.

    PubMed

    Ortega, P G; Torres-Espallardo, I; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M

    2015-03-01

    Compton Cameras emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact in the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented. The complete chain of
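
    To make the cone-surface backprojection concrete, here is a minimal voxelized sketch. The event geometry, energies and angular tolerance are invented for illustration; production codes use list-mode statistical reconstruction rather than this simple accumulation.

        import numpy as np

        MEC2 = 511.0  # electron rest energy, keV

        def compton_angle(e0, e1):
            # Scatter angle from the initial photon energy e0 and the
            # energy e1 deposited in the first interaction (keV).
            cos_t = 1.0 - MEC2 * (1.0 / (e0 - e1) - 1.0 / e0)
            return np.arccos(np.clip(cos_t, -1.0, 1.0))

        def backproject_cone(volume, grid, apex, axis, theta, tol=0.03):
            # Add 1 to every voxel whose direction from the cone apex
            # lies within tol radians of the cone surface.
            v = grid - apex
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            ang = np.arccos(np.clip(v @ axis, -1.0, 1.0))
            volume[np.abs(ang - theta) < tol] += 1.0

        # Toy grid and one two-interaction event (assumed positions, mm)
        xs = np.linspace(-50, 50, 21)
        grid = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
        volume = np.zeros(len(grid))
        apex = np.array([0.0, 0.0, 100.0])    # first interaction
        second = np.array([0.0, 0.0, 130.0])  # second interaction
        axis = (apex - second) / np.linalg.norm(apex - second)
        backproject_cone(volume, grid, apex, axis,
                         compton_angle(e0=4400.0, e1=1200.0))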

  3. Gas path seal

    NASA Technical Reports Server (NTRS)

    Bill, R. C.; Johnson, R. D. (Inventor)

    1979-01-01

    A gas path seal suitable for use with a turbine engine or compressor is described. A shroud wearable or abradable by the abrasion of the rotor blades of the turbine or compressor shrouds the rotor blades. A compliant backing surrounds the shroud. The backing is a yieldingly deformable porous material covered with a thin ductile layer. A mounting fixture surrounds the backing.

  4. An Unplanned Path

    ERIC Educational Resources Information Center

    McGarvey, Lynn M.; Sterenberg, Gladys Y.; Long, Julie S.

    2013-01-01

    The authors elucidate what they saw as three important challenges to overcome along the path to becoming elementary school mathematics teacher leaders: marginal interest in math, low self-confidence, and teaching in isolation. To illustrate how these challenges were mitigated, they focus on the stories of two elementary school teachers--Laura and…

  5. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
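
    One of the dominant set-up dependencies in such an analysis is the first-order propagation of the stereo matching (disparity) error into depth: from z = f b / d it follows that sigma_z = z^2 sigma_d / (f b). A back-of-the-envelope sketch with assumed numbers:

        def depth_uncertainty(z_m, focal_px, baseline_m, disp_err_px=0.1):
            # First-order stereo depth error: z = f*b/d implies
            # sigma_z = z**2 / (f*b) * sigma_d  (f in pixels; b, z in m).
            return z_m ** 2 / (focal_px * baseline_m) * disp_err_px

        # e.g. f = 2000 px, 25 cm baseline, targets at 5 m, 0.1 px error
        print(depth_uncertainty(5.0, 2000.0, 0.25))  # -> 0.005 m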

  6. Reconstruction of missing cells in fluorescent microscopy.

    PubMed

    Leung, Nat; Wan, Justin W L

    2012-01-01

    Fluorescent microscopy is one of the several types of imaging techniques used by biologists to study cell activities. One challenge of tracking cells from fluorescence microscopy is that cells in fluorescent images frequently disappear and reappear. The situation is further complicated by cell divisions, which also occur frequently in an image sequence. In this paper, we apply a level set method to reconstruct cells that disappear in an image sequence and in particular, cells that are undergoing cell division. The image frames are stacked together to form a 3D image volume. The disappearance of a cell leads to a broken cell path. We reconstruct the incomplete cell paths by a level set segmentation of the 3D image volume. If the disappearance happens during cell division, the level set method segments the visible cell paths before and after cell division, and then joins them together by extending the cell paths into the missing gap. We also propose a simple and cost-efficient method similar to inpainting techniques to capture the cell appearance when it disappears by making use of the level set function obtained from the segmentation. The idea is that the intensities of a visible cell on a level set contour are copied to the corresponding contours of a disappeared cell. We will present results for reconstruction of cells undergoing cell division for C2C12 cells in fluorescent images to illustrate the effectiveness of our method. PMID:23367131

  7. Combustion pinhole-camera system

    DOEpatents

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  8. A 10-microm infrared camera.

    PubMed

    Arens, J F; Jernigan, J G; Peck, M C; Dobson, C A; Kilk, E; Lacy, J; Gaalema, S

    1987-09-15

    An IR camera has been built at the University of California at Berkeley for astronomical observations. The camera has been used primarily for high angular resolution imaging at mid-IR wavelengths. It has been tested at the University of Arizona 61- and 90-in. telescopes near Tucson and the NASA Infrared Telescope Facility on Mauna Kea, HI. In the observations the system has been used as an imager with interference coated and Fabry-Perot filters. These measurements have demonstrated a sensitivity consistent with photon shot noise, showing that the system is limited by the radiation from the telescope and atmosphere. Measurements of read noise, crosstalk, and hysteresis have been made in our laboratory. PMID:20490151

  9. Electronographic cameras for space astronomy.

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically-focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We also are developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  10. New Reconstruction Accuracy Metric for 3D PIV

    NASA Astrophysics Data System (ADS)

    Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    Reconstruction for 3D PIV typically relies on recombining images captured from different viewpoints via multiple cameras/apertures. Ideally, the quality of reconstruction dictates the accuracy of the derived velocity field. A reconstruction quality parameter Q is commonly used as a measure of the accuracy of reconstruction algorithms. By definition, a high Q value requires intensity peak levels and shapes in the reconstructed and reference volumes to be matched. We show that accurate velocity fields rely only on the peak locations in the volumes and not on intensity peak levels and shapes. In synthetic aperture (SA) PIV reconstructions, the intensity peak shapes and heights vary with the number of cameras and due to spatial/temporal particle intensity variation respectively. This lowers Q but not the accuracy of the derived velocity field. We introduce a new velocity vector correlation factor Qv as a metric to assess the accuracy of 3D PIV techniques, which provides a better indication of algorithm accuracy. For SAPIV, the number of cameras required for a high Qv are lower than that for a high Q. We discuss Qv in the context of 3D PIV and also present a preliminary comparison of the performance of TomoPIV and SAPIV based on Qv.
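
    A plausible reading of Qv as a normalized inner product between the reconstructed and reference velocity fields is sketched below; the paper's exact normalization may differ.

        import numpy as np

        def q_v(u_rec, u_ref):
            # Velocity-field correlation: 1.0 for identical fields,
            # lower when vectors differ in direction or magnitude.
            # u_rec, u_ref: (..., 3) arrays of velocity vectors.
            num = np.sum(u_rec * u_ref)
            den = np.sqrt(np.sum(u_rec ** 2) * np.sum(u_ref ** 2))
            return num / den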

  11. ISO camera array development status

    NASA Technical Reports Server (NTRS)

    Sibille, F.; Cesarsky, C.; Agnese, P.; Rouan, D.

    1989-01-01

    A short outline is given of the Infrared Space Observatory Camera (ISOCAM), one of the 4 instruments onboard the Infrared Space Observatory (ISO), with the current status of its two 32x32 arrays, an InSb charge injection device (CID) and a Si:Ga direct read-out (DRO), and the results of the in orbit radiation simulation with gamma ray sources. A tentative technique for the evaluation of the flat fielding accuracy is also proposed.

  12. Graphic design of pinhole cameras

    NASA Technical Reports Server (NTRS)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
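
    The transfer-function analysis can be reproduced numerically: the geometrical transfer function of a circular pinhole of diameter d is the Fourier transform of its uniform blur disk, |2 J1(pi d nu) / (pi d nu)|. A short sketch, with units assumed to be mm and cycles/mm:

        import numpy as np
        from scipy.special import j1

        def pinhole_mtf(freq, diam):
            # |2 J1(x)/x| with x = pi * d * freq: the geometric MTF of
            # a circular pinhole (diffraction is neglected here).
            x = np.pi * diam * np.asarray(freq, dtype=float)
            out = np.ones_like(x)
            nz = x != 0
            out[nz] = np.abs(2.0 * j1(x[nz]) / x[nz])
            return out

        # response of a 0.2 mm pinhole at 1, 2 and 5 cycles/mm
        print(pinhole_mtf([1.0, 2.0, 5.0], 0.2))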

  13. Photogrammetric Reconstruction with Bayesian Information

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Vettore, A.

    2016-06-01

    Nowadays photogrammetry and laser scanning methods are the most widespread surveying techniques. Laser scanning methods usually yield more accurate results than photogrammetry, but their use has some issues, e.g. related to the high cost of the instrumentation and the typical need for highly qualified personnel to acquire experimental data in the field. Differently, photogrammetric reconstruction can be achieved by means of low cost devices and by persons without specific training. Furthermore, the recent diffusion of smart devices (e.g. smartphones) embedded with imaging and positioning sensors (i.e. standard camera, GNSS receiver, inertial measurement unit) is opening the possibility of integrating more information into the photogrammetric reconstruction procedure, in order to increase its computational efficiency, its robustness and accuracy. In accordance with the above observations, this paper examines and validates new possibilities for the integration of information provided by the inertial measurement unit (IMU) into the photogrammetric reconstruction procedure and, to be more specific, into the procedure for solving the feature matching and the bundle adjustment problems.

  14. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  15. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  16. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  17. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 1 2014-01-01 2014-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  18. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  19. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  20. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  1. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
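
    The detection-and-matching stage described above is easy to prototype with standard tools. The sketch below uses OpenCV's ORB detector as a stand-in (the paper does not name a specific detector) and reports the median vertical disparity of the strongest matches:

        import cv2
        import numpy as np

        def vertical_disparity_px(img_left, img_right, max_matches=200):
            # Keypoint detection and matching between the two views.
            orb = cv2.ORB_create(nfeatures=1000)
            k1, d1 = orb.detectAndCompute(img_left, None)
            k2, d2 = orb.detectAndCompute(img_right, None)
            if d1 is None or d2 is None:
                return None  # frame rejected: keypoint constellation too poor
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
            dy = [k1[m.queryIdx].pt[1] - k2[m.trainIdx].pt[1]
                  for m in matches[:max_matches]]
            return float(np.median(dy))  # median is robust to false matches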

  2. Coaxial fundus camera for opthalmology

    NASA Astrophysics Data System (ADS)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A Fundus Camera for ophthalmology is a high definition device which needs to meet low light illumination of the human retina, high resolution in the retina and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic Fundus Camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with autofocus and zoom built in, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  3. A calibration technology for multi-camera system with various focal lengths

    NASA Astrophysics Data System (ADS)

    Yang, Ruihua; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2016-01-01

    Calibration is the basis of three-dimensional (3D) reconstruction in machine vision technology. Nowadays, the most widely used calibration method in computer vision is the technique for binocular stereo measurement. However, binocular stereo vision has a limited field of view, which makes it difficult to measure large-scale mechanical components synchronously. Thus, enlarging the field of view is urgently needed for large scale measurement. With the application of a multi-camera system, calibration for cameras with different focal lengths is required. In this paper, a method aiming at calibration problems for multi-camera systems with different focal lengths is proposed. An imaging model for a multi-camera system with various focal lengths is analyzed. The Harris corner detector is applied to determine the relationship between a single camera and the checkerboard. Finally, the external parameters of the different cameras can be obtained through the link with the checkerboard. The calibration results indicate that the calculation method used in this work can calibrate multi-camera systems with various focal lengths.

  4. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. The system includes the camera assembly, a driver interface assembly, a frame grabber board with integrated decimator, and Windows 2000 compatible software for real time image display. An array size of 320x240 with 16 micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2 micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits operating, with 13.7 bits best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption over CCD. The MOSAD approach also provides a path to radiation hardening for space based applications.

  5. Optical characterization of UV multispectral imaging cameras for SO2 plume measurements

    NASA Astrophysics Data System (ADS)

    Stebel, K.; Prata, F.; Dauge, F.; Durant, A.; Amigo, A.,

    2012-04-01

    Only a few years ago, spectral imaging cameras for SO2 plume monitoring were developed for remote sensing of volcanic plumes. We describe the development from a first camera using a single filter in the absorption band of SO2 to more advanced systems using several filters and an integrated spectrometer. The first system was based on the Hamamatsu C8484 UV camera (1344 x 1024 pixels) with high quantum efficiency in the UV region from 280 nm onward. At the heart of the second UV camera system, EnviCam, is a cooled Alta U47 camera, equipped with two on-band (310 and 315 nm) and two off-band (325 and 330 nm) filters. The third system again utilizes the uncooled Hamamatsu camera for faster sampling (~10 Hz) and a four-position filter-wheel equipped with two 10 nm filters centered at 310 and 330 nm, a UV broadband view and a blackened plate for dark-current measurement. Both cameras have been tested with lenses of different focal lengths. A co-aligned spectrometer provides a ~0.3 nm resolution spectrum within the field-of-view of the camera. We describe the ground-based imaging camera systems developed and utilized at our Institute. Custom-made cylindrical quartz calibration cells with 50 mm diameter, to cover the entire field of view of the camera optics, are filled with various amounts of gaseous SO2 (typically between 100 and 1500 ppm·m). They are used for calibration and characterization of the cameras in the laboratory. We report on the procedures for monitoring and analyzing SO2 path-concentration and fluxes. This includes a comparison of the calibration in the atmosphere using the SO2 cells versus the SO2 retrieval from the integrated spectrometer. The first UV cameras have been used to monitor ship emissions (Ny-Ålesund, Svalbard and Genova, Italy). The second generation of cameras was first tested for industrial stack monitoring during a field campaign close to the Rovinari (Romania) power plant in September 2010, revealing very high SO2 emissions

  6. Generic MSFA mosaicking and demosaicking for multispectral cameras

    NASA Astrophysics Data System (ADS)

    Miao, Lidan; Qi, Hairong; Ramanath, Rajeev

    2006-02-01

    In this paper, we investigate the potential application of the multispectral filter array (MSFA) techniques in multispectral imaging for reasons like low cost, exact registration, and strong robustness. In both human and many animal visual systems, different types of photoreceptors are organized into mosaic patterns. This behavior has been emulated in the industry to develop the so-called color filter array (CFA) in the manufacture of digital color cameras. In this way, only one color component is measured at each pixel, and the sensed image is a mosaic of different color bands. We extend this idea to multispectral imaging by developing generic mosaicking and demosaicking algorithms. The binary tree-driven MSFA design process guarantees that the pixel distributions of different spectral bands are uniform and highly correlated. These spatial features facilitate the design of the generic demosaicking algorithm based on the same binary tree, which considers three interrelated issues: band selection, pixel selection and interpolation. We evaluate the reconstructed images from two aspects: better reconstruction and better target classification. The experimental results demonstrate that the mosaicking and demosaicking process preserves the image quality effectively, which further supports that the MSFA technique is a feasible solution for multispectral cameras.
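
    A toy version of the mosaicking/demosaicking round trip, using a simple 2x2 periodic four-band pattern and nearest-neighbour interpolation, is sketched below; the binary-tree MSFA design and demosaicking of the paper are more general.

        import numpy as np
        from scipy.interpolate import griddata

        def mosaic(cube):
            # cube: (H, W, 4) multispectral image -> single-band mosaic,
            # keeping one band per pixel on a 2x2 periodic pattern.
            h, w, _ = cube.shape
            pattern = np.array([[0, 1], [2, 3]])
            bands = pattern[np.arange(h)[:, None] % 2, np.arange(w)[None, :] % 2]
            raw = np.take_along_axis(cube, bands[..., None], axis=2)[..., 0]
            return raw, bands

        def demosaic(raw, bands, n_bands=4):
            # Fill each band by nearest-neighbour interpolation of its samples.
            h, w = raw.shape
            yy, xx = np.mgrid[0:h, 0:w]
            out = np.zeros((h, w, n_bands))
            for b in range(n_bands):
                m = bands == b
                out[..., b] = griddata((yy[m], xx[m]), raw[m], (yy, xx),
                                       method='nearest')
            return out

        cube = np.random.default_rng(0).random((8, 8, 4))
        raw, bands = mosaic(cube)
        print(np.mean(np.abs(demosaic(raw, bands) - cube)))  # toy error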

  7. Nonadiabatic transition path sampling

    NASA Astrophysics Data System (ADS)

    Sherman, M. C.; Corcelli, S. A.

    2016-07-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  8. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to see through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  9. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to 'see' through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  10. HHEBBES! All sky camera system: status update

    NASA Astrophysics Data System (ADS)

    Bettonvil, F.

    2015-01-01

    A status update is given of the HHEBBES! all sky camera system. HHEBBES!, an automatic camera for capturing bright meteor trails, is based on a DSLR camera and a liquid crystal chopper for measuring the angular velocity. The purposes of the system are a) to recover meteorites and b) to identify origin/parental bodies. In 2015, two new cameras were rolled out: BINGO! (like HHEBBES!, also in The Netherlands) and POgLED, in Serbia. BINGO! is the first camera equipped with a longer focal length fisheye lens, to further increase the accuracy. Several minor improvements have been made, and the data reduction pipeline was used for processing two prominent Dutch fireballs.

  11. CCD video camera and airborne applications

    NASA Astrophysics Data System (ADS)

    Sturz, Richard A.

    2000-11-01

    The human need to see for oneself, and to do so remotely, has given rise to video camera applications never before imagined, and their number grows constantly. The instant understanding and verification offered by video lends its applications to every facet of life. Once an entertainment medium, video is now ever-present in our daily life. The application to the aircraft platform is one aspect of the video camera's versatility. Integrating the video camera into the aircraft platform is yet another story. The typical video camera, when applied to more standard scene imaging, poses less demanding parameters and considerations. This paper explores the video camera as applied to the more complicated airborne environment.

  12. Spectrometry with consumer-quality CMOS cameras.

    PubMed

    Scheeline, Alexander

    2015-01-01

    Many modern spectrometric instruments use diode arrays, charge-coupled arrays, or CMOS cameras for detection and measurement. As portable or point-of-use instruments are desirable, one would expect devices built around the cameras in cellular telephones and tablet computers to be the basis of numerous instruments. However, no mass market for such devices has yet developed. The difficulties in using megapixel CMOS cameras for scientific measurements are discussed, and promising avenues for instrument development reviewed. Inexpensive alternatives to use of the built-in camera are also mentioned, as the long-term question is whether it is better to overcome the constraints of CMOS cameras or to bypass them.

  13. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on pre-trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
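    To make the factorization concrete: with an orthographic camera and pre-trained base poses, the mixing coefficients of a frame follow from a linear least-squares fit, since the projection is linear in the coefficients. The sketch below is our own illustration under those assumptions, not the authors' code:

```python
import numpy as np

def fit_mixing_coefficients(W, M, B):
    """Least-squares mixing coefficients for one frame.

    W : (2, J)    observed 2D joint positions
    M : (2, 3)    orthographic camera matrix (scaled rotation rows)
    B : (K, 3, J) pre-trained base poses
    Solves W ~= M @ sum_k a_k * B_k, which is linear in the a_k.
    """
    A = np.stack([(M @ Bk).ravel() for Bk in B], axis=1)  # (2J, K) design matrix
    a, *_ = np.linalg.lstsq(A, W.ravel(), rcond=None)
    return a
```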

  15. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially, and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.

  16. Four paths of competition

    SciTech Connect

    Studness, C.M.

    1995-05-01

    The financial community's focus on utility competition has been riveted on the proceedings now in progress at state regulatory commissions. The fear that something immediately damaging will come out of these proceedings seems to have diminished in recent months, and the stock market has reacted favorably. However, regulatory developments are only one of four paths leading to competition; the others are the marketplace, the legislatures, and the courts. Each could play a critical role in the emergence of competition.

  17. Simple method of modelling of digital holograms registering and their optical reconstruction

    NASA Astrophysics Data System (ADS)

    Evtikhiev, N. N.; Cheremkhin, P. A.; Krasnov, V. V.; Kurbatova, E. A.; Molodtsov, D. Yu; Porshneva, L. A.; Rodin, V. G.

    2016-08-01

    A technique for modeling digital hologram recording and optical reconstruction of images from these holograms is described. The method takes into account the characteristics of the object, the digital camera's photosensor, and the spatial light modulator used for displaying digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted.

  18. Three-dimensional reconstruction of live embryos using robotic macroscope images.

    PubMed

    Brodland, G W; Veldhuis, J H

    1998-09-01

    To determine the three-dimensional (3-D) shape of a live embryo is a technically challenging task. We show that reconstructions of live embryos can be done by collecting images from different viewing angles using a robotic macroscope, establishing point correspondences between these views by block matching, and using a new 3-D reconstruction algorithm that accommodates camera positioning errors. The algorithm assumes that the images are orthographic projections of the object and that the camera scaling factors are known. Point positions and camera errors are found simultaneously. Reconstructions of test objects and embryos show that meaningful reconstructions are possible only when camera positioning and alignment errors are accommodated since these errors can be substantial. Reconstructions of early-stage axolotl embryos were made from sets of 33 images. In a typical reconstruction, 781 points, each visible in at least three different views, were used to form 1511 triangles to represent the embryo surface. The resulting reconstruction had a mean radius of error of 0.27 pixels (1.1 microns). Mathematical properties of the reconstruction algorithm are identified and discussed. PMID:9735567

  19. A high-resolution SWIR camera via compressed sensing

    NASA Astrophysics Data System (ADS)

    McMackin, Lenore; Herman, Matthew A.; Chatterjee, Bill; Weldon, Matt

    2012-06-01

    Images from a novel shortwave infrared (SWIR, 900 nm to 1.7 μm) camera system are presented. Custom electronics and software are combined with a digital micromirror device (DMD) and a single-element sensor; the latter are commercial off-the-shelf devices, which together create a lower-cost imaging system than is otherwise available in this wavelength regime. A compressive sensing (CS) encoding schema is applied to the DMD to modulate the light that has entered the camera. This modulated light is directed to a single-element sensor and an ensemble of measurements is collected. With the data ensemble and knowledge of the CS encoding, images are computationally reconstructed. The hardware and software combination makes it possible to create images with the resolution of the DMD while employing a substantially lower-cost sensor subsystem than would otherwise be required by the use of traditional focal plane arrays (FPAs). In addition to the basic camera architecture, we also discuss a technique that uses the adaptive functionality of the DMD to search and identify regions of interest. We demonstrate adaptive CS in solar exclusion experiments where bright pixels, which would otherwise reduce dynamic range in the images, are automatically removed.
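    As an illustration of the single-sensor principle, the sketch below simulates DMD-style ±1 measurement patterns and recovers a signal that is sparse in the DCT domain with ISTA. All names and parameter choices are hypothetical; the abstract does not describe the authors' actual reconstruction code:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis as columns of an n x n matrix."""
    i = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * k / n)
    Psi[:, 0] /= np.sqrt(2.0)
    return Psi

def ista(y, Phi, Psi, lam=0.01, n_iter=2000):
    """Solve min 0.5*||y - Phi Psi s||^2 + lam*||s||_1 by ISTA."""
    A = Phi @ Psi
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = s - A.T @ (A @ s - y) / L      # gradient step on the data term
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return Psi @ s                          # back to the signal domain

# Toy experiment: m modulated measurements of an n-sample scene, m < n.
rng = np.random.default_rng(0)
n, m = 256, 96
Psi = dct_basis(n)
s_true = np.zeros(n)
s_true[[3, 17, 40]] = [1.0, -0.7, 0.4]      # scene is 3-sparse in the DCT domain
x = Psi @ s_true
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # measurement patterns
x_hat = ista(Phi @ x, Phi, Psi)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

    In practice, ±1 patterns could be realized on a DMD as pairs of complementary binary masks whose measurements are subtracted.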

  20. PATHS groundwater hydrologic model

    SciTech Connect

    Nelson, R.W.; Schur, J.A.

    1980-04-01

    A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model, covering the preliminary evaluation capability prepared for WISAP, including the enhancements made as a result of the authors' experience using the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS; it is written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, along with program listings and test case listings. Appendix D is a definition of terms.

  1. Scale-unambiguous relative pose estimation of space uncooperative targets based on the fusion of three-dimensional time-of-flight camera and monocular camera

    NASA Astrophysics Data System (ADS)

    Hao, Gangtao; Du, Xiaoping; Chen, Hang; Song, Jianjun; Gao, Tengfei

    2015-05-01

    An approach to scale-unambiguous relative pose estimation for space uncooperative targets, based on the fusion of a low-resolution three-dimensional time-of-flight camera and a monocular camera, is proposed. No a priori knowledge about the targets is assumed. First, a modified range-intensity Markov random field model is presented to quickly reconstruct the range value for each feature point. Second, a scale-ambiguous relative pose estimation algorithm based on an extended Kalman filter-unscented Kalman filter-particle filter combination is designed in a visual simultaneous localization and mapping framework. Third, an overall scale factor estimation approach based on the range-intensity fusion image, which takes the feature points' range reconstruction uncertainty as measurement noise, is proposed for the final scale-unambiguous pose estimation. Finally, simulations demonstrate the validity and capability of the proposed approach.

  2. Hanford Environmental Dose Reconstruction Project: Monthly Report

    SciTech Connect

    Finch, S.M.

    1990-07-01

    This monthly report summarizes the technical progress and project status for the Hanford Environmental Dose Reconstruction (HEDR) Project being conducted at the Pacific Northwest Laboratory (PNL) under the direction of a Technical Steering Panel (TSP). The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demographics, Agriculture, Food Habits, and Environmental Pathways and Dose Estimates. 3 figs.

  3. Initial laboratory evaluation of color video cameras

    SciTech Connect

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND-91-2579). It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  4. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extraction of key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras to monitoring time-dependent phenomena. We also present comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.

  5. In vitro evaluation of stereoscopic liver surface reconstruction

    PubMed Central

    Karwan, Adam; Zylkowski, Jaroslaw; Wróblewski, Tadeusz

    2013-01-01

    Introduction: Tracking the motion of abdominal organs is an important factor in image-guided navigation systems. Aim: To evaluate a practical methodology for measuring liver motion, both respiratory and laparoscopically induced, with a tool guided in the operating room. Material and methods: The presented evaluation method is based on standard operating room equipment, i.e. laparoscopic cameras. We decided to use two rigidly mounted cameras to obtain a stereo view in order to reconstruct characteristic points by triangulation. Our research aim was to survey the impact of three parameters on reconstruction accuracy: the number of calibration points, the imprecision of camera assembly, and the difference in image resolution. Results: Three calibration chessboard configurations were tested. The reconstructed landmark positions and residual mean square errors are presented for three phantom poses: the reference position, a translated position and a rotated position. Conclusions: The presented approach is a development of previous work. Our research proved the importance of a rigid stereo camera system and of high-definition image resolution for both stages, namely calibration and reconstruction. PMID:23630559
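    The triangulation step used here is typically the standard linear (DLT) two-view method; the following generic sketch (ours, not the paper's code) reconstructs one landmark from its two pixel observations and the calibrated projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.

    P1, P2 : (3, 4) projection matrices of the two calibrated cameras
    x1, x2 : (2,)   matching pixel coordinates in each image
    Returns the 3D point in the calibration frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]
```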

  6. Characterization of the Series 1000 Camera System

    SciTech Connect

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact, network-addressable, scientific-grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and a power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with a read noise of less than 14 electrons at a 1 MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and its performance characterization is reported.

  7. Digital hologram recording and stereo reconstruction from a single hologram

    NASA Astrophysics Data System (ADS)

    Arrifano, Angelo; Antonini, Marc; Fiadeiro, Paulo T.; Pereira, Manuela

    2013-09-01

    In this paper we present novel results on the reconstruction of stereoscopic information from a single phase-shift hologram captured using a 2.2 μm pixel-pitch CMOS camera in a holographic interferometer configuration. The low pixel-pitch camera allows the digitizing of holograms with a higher spatial frequency than has been reported in the literature, allowing the recording of macroscopic objects closer to the camera sensor. The reconstructed information can be visualized using 3D stereo glasses. From the perceived 3D we could identify several depth cues, including the occlusion effect, which has not been easy to produce from single-aperture holography. The occlusion effect is also known to be difficult to produce from stereoscopic sources.

  8. Fringe projection profilometry for panoramic 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Almaraz-Cabral, César-Cruz; Gonzalez-Barbosa, José-Joel; Villa, Jesús; Hurtado-Ramos, Juan-Bautista; Ornelas-Rodriguez, Francisco-Javier; Córdova-Esparza, Diana-Margarita

    2016-03-01

    In this paper, we introduce a panoramic profilometric system to reconstruct inner cylindrical environments. The system projects circular fringes and uses a temporal phase unwrapping technique. The recovered phase map is used to reconstruct objects placed on the inner cylindrical surface. We derive a phase-to-depth conversion formula for this system. The use of fringe projection allows dense reconstructions. The panoramic system is composed of a digital projector, two parabolic mirrors and a CCD camera. All these components share a common axis with a reference cylinder. This paper presents results for distinct objects.
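    Temporal phase unwrapping of the kind mentioned here is commonly implemented with a fringe-frequency hierarchy: a coarse, unambiguous phase predicts the fine phase, which then keeps its own precision. A minimal sketch under that assumption (the paper's exact scheme is not given in the abstract):

```python
import numpy as np

def temporal_unwrap(phi_coarse, phi_fine, freq_ratio):
    """Unwrap a high-frequency phase map using a coarse reference.

    phi_coarse : phase of a unit-frequency pattern, unambiguous over the field
    phi_fine   : wrapped phase (in [-pi, pi)) of the high-frequency pattern
    freq_ratio : f_fine / f_coarse
    """
    predicted = phi_coarse * freq_ratio                   # prediction from coarse map
    k = np.round((predicted - phi_fine) / (2.0 * np.pi))  # integer fringe order
    return phi_fine + 2.0 * np.pi * k
```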

  9. Indoor Calibration for Stereoscopic Camera STC, A New Method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2014-10-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the Mercury surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: a stereo validation setup giving an indoor reproduction of the in-flight observing conditions of the instrument lends much greater confidence to the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This model made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor setup has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with a source/target essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other hand, it allows replicating different viewing angles for the considered targets. Neglecting the curvature of Mercury for the sake of simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir.

  10. PDX infrared TV camera system

    SciTech Connect

    Jacobsen, R.A.

    1981-08-01

    An infrared TV camera system has been developed for use on PDX. This system is capable of measuring the temporal and spatial energy deposition on the limiters and divertor neutralizer plates; time resolutions of 1 ms are achievable. The system has been used to measure the energy deposition on the PDX neutralizer plates and the temperature jump of limiter surfaces during a pulse. The energy scrapeoff layer is found to have characteristic dimensions of the order of a cm. The measurement of profiles is very sensitive to variations in the thermal emissivity of the surfaces.

  11. Cryogenic mechanism for ISO camera

    NASA Astrophysics Data System (ADS)

    Luciano, G.

    1987-12-01

    The Infrared Space Observatory (ISO) camera configuration, architecture, materials, tribology, motorization, and development status are outlined. The operating temperature is 2 to 3 K, at 2.5 to 18 microns. Selected material is a titanium alloy, with MoS2/TiC lubrication. A stepping motor drives the ball-bearing mounted wheels to which the optical elements are fixed. Model test results are satisfactory, and also confirm the validity of the test facilities, particularly for vibration tests at 4K.

  12. Three-view stereoscopy in dusty plasmas under microgravity: A calibration and reconstruction approach

    SciTech Connect

    Himpel, Michael; Buttenschoen, Birger; Melzer, Andre

    2011-05-15

    A three-camera stereoscopy setup is presented that allows reconstruction of the trajectories of particles in dusty plasmas under microgravity. The calibration procedure for the three-camera setup takes into account the special circumstances that occur in close-range imaging of small particles. Additionally, a reconstruction algorithm is presented that is based on epipolar geometry and delivers the essential particle correspondences. Further improvements are achieved by analyzing the dynamic particle behavior. Two applications of our calibration and reconstruction procedure are presented: a two-dimensional dust structure in the laboratory with a large percentage of hidden particles, and particles inside the void of a dust cloud under microgravity.
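    The epipolar-geometry correspondence search can be sketched as follows: for each particle in one view, candidates in another view are ranked by their distance to the induced epipolar line. This is a generic illustration (the fundamental matrix F and the tolerance are assumed given), not the authors' implementation:

```python
import numpy as np

def epipolar_matches(F, pts1, pts2, tol=2.0):
    """Pair detections across two views via distance to epipolar lines.

    F    : (3, 3) fundamental matrix mapping view-1 points to view-2 lines
    pts1 : (N, 2), pts2 : (M, 2) particle centroids in each image
    Returns (i, j) pairs whose point-to-line distance is below tol pixels.
    """
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = h1 @ F.T                 # epipolar line in image 2 for each point
    d = np.abs(lines @ h2.T) / np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    return [(i, int(np.argmin(d[i]))) for i in range(len(pts1)) if d[i].min() < tol]
```

    With three cameras, ambiguous candidates can be pruned further by requiring consistency with the epipolar lines of the third view.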

  13. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.

  14. Optimising camera traps for monitoring small mammals.

    PubMed

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  15. Environment reconstruction for robot navigation

    SciTech Connect

    Bohn, S.; Thornton, E.

    1994-04-01

    The United States Department of Energy (DOE) is facing a large task in characterizing and remediating waste tanks and their contents. Because of the hazardous materials inside the waste tanks, all of the work must be done remotely. The purpose of this paper is to show how to reconstruct an enclosed environment from various scans of a Laser Range Finder (LRF). The reconstructed environment can then be used by a robot for path planning, and by an operator to monitor the progress of the waste remediation process. Environment reconstruction consists of two tasks: image processing and laser sculpting. The image processing task focuses first on reducing the quantity of low-confidence data and on smoothing random fluctuations in the data. The processed range data must then be converted into an XYZ Cartesian coordinate space, a process for which we examined two methods. The first method is a geometrical transform of the LRF data. The second uses an artificial neural network to transform the data to XYZ coordinates. Once an XYZ data set is computed, laser sculpting can be performed. Laser sculpting employs a hierarchical tree structure formally known as an octree. The octree structure allows efficient storage of volumetric data and the ability to fuse multiple data sets. Our research has allowed us to examine the difficulties of fusing multiple LRF scans into an octree and to develop algorithms for converting an octree structure into a representation of polygon surfaces.
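    A minimal occupancy octree of the kind described can be sketched as follows (a toy illustration with our own class and parameter names, not the paper's code):

```python
import numpy as np

class OctreeNode:
    """Minimal occupancy octree over a cubic region of half-width `half`."""

    def __init__(self, center, half, depth=0):
        self.center = np.asarray(center, dtype=float)
        self.half, self.depth = float(half), depth
        self.children = None          # list of 8 children once subdivided
        self.occupied = False

    def insert(self, p, max_depth=6):
        """Mark the leaf cell containing point p as occupied."""
        if self.depth == max_depth:
            self.occupied = True
            return
        if self.children is None:
            self.children = [
                OctreeNode(self.center + 0.5 * self.half * np.array(
                    [(i >> 2 & 1) * 2 - 1, (i >> 1 & 1) * 2 - 1, (i & 1) * 2 - 1]),
                    0.5 * self.half, self.depth + 1)
                for i in range(8)]
        off = np.asarray(p) >= self.center      # octant from coordinate signs
        idx = (int(off[0]) << 2) | (int(off[1]) << 1) | int(off[2])
        self.children[idx].insert(p, max_depth)

# Fusing two toy "scans" (XYZ point sets) into one octree.
root = OctreeNode(center=(0.0, 0.0, 0.0), half=1.0)
rng = np.random.default_rng(1)
for scan in (rng.random((200, 3)) - 0.5, rng.random((200, 3)) - 0.5):
    for p in scan:
        root.insert(p)
```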

  16. Kinect Fusion improvement using depth camera calibration

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development, driven by the growing demand for interactivity in the video-game and e-entertainment market. The Microsoft Kinect device grew out of the idea of creating a sensor that allows users to play without having to hold any remote controller. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for Kinect in order to use it not only as a game device but as a measurement system. Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing polygonal meshes of a static scene just by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and the low repeatability. For this reason the authors carried out investigations to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.

  17. Critical Path Web Site

    NASA Technical Reports Server (NTRS)

    Robinson, Judith L.; Charles, John B.; Rummel, John A. (Technical Monitor)

    2000-01-01

    Approximately three years ago, the Agency's lead center for the human elements of spaceflight (the Johnson Space Center), along with the National Biomedical Research Institute (NSBRI) (which has the lead role in developing countermeasures) initiated an activity to identify the most critical risks confronting extended human spaceflight. Two salient factors influenced this activity: first, what information is needed to enable a "go/no go" decision to embark on extended human spaceflight missions; and second, what knowledge and capabilities are needed to address known and potential health, safety and performance risks associated with such missions. A unique approach was used to first define and assess those risks, and then to prioritize them. This activity was called the Critical Path Roadmap (CPR) and it represents an opportunity to develop and implement a focused and evolving program of research and technology designed from a "risk reduction" perspective to prevent or minimize the risks to humans exposed to the space environment. The Critical Path Roadmap provides the foundation needed to ensure that human spaceflight, now and in the future, is as safe, productive and healthy as possible (within the constraints imposed on any particular mission) regardless of mission duration or destination. As a tool, the Critical Path Roadmap enables the decision maker to select from among the demonstrated or potential risks those that are to be mitigated, and the completeness of that mitigation. The primary audience for the CPR Web Site is the members of the scientific community who are interested in the research and technology efforts required for ensuring safe and productive human spaceflight. They may already be informed about the various space life sciences research programs or they may be newcomers. Providing the CPR content to potential investigators increases the probability of their delivering effective risk mitigations. Others who will use the CPR Web Site and its

  19. Hanford Environmental Dose Reconstruction Project. Monthly report

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-04-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source terms, environmental transport, environmental monitoring data, demography, food consumption, and agriculture, and environmental pathways and dose estimates.

  20. Hanford Environmental Dose Reconstruction Project Monthly Report

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-03-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  1. Hanford Environmental Dose Reconstruction Project. Monthly report

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  2. Hanford Environmental Dose Reconstruction Project monthly report

    SciTech Connect

    Finch, S.M.

    1991-10-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; environmental pathways and dose estimates.

  3. Hanford Environmental Dose Reconstruction Project. Monthly report

    SciTech Connect

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  4. Volcano surveillance using infrared cameras

    NASA Astrophysics Data System (ADS)

    Spampinato, Letizia; Calvari, Sonia; Oppenheimer, Clive; Boschi, Enzo

    2011-05-01

    Volcanic eruptions are commonly preceded, accompanied, and followed by variations of a number of detectable geophysical and geochemical manifestations. Many remote sensing techniques have been applied to tracking anomalies and eruptive precursors, and monitoring ongoing volcanic eruptions, offering obvious advantages over in situ techniques especially during hazardous activity. Whilst spaceborne instruments provide a distinct advantage for collecting data remotely in this regard, they still cannot match the spatial detail or time resolution achievable using portable imagers on the ground or aircraft. Hand-held infrared camera technology has advanced significantly over the last decade, resulting in a proliferation of commercially available instruments, such that volcano observatories are increasingly implementing them in monitoring efforts. Improved thermal surveillance of active volcanoes has not only enhanced hazard assessment but it has contributed substantially to understanding a variety of volcanic processes. Drawing on over a decade of operational volcano surveillance in Italy, we provide here a critical review of the application of infrared thermal cameras to volcano monitoring. Following a summary of key physical principles, instrument capabilities, and the practicalities and methods of data collection, we discuss the types of information that can be retrieved from thermal imagery and what they have contributed to hazard assessment and risk management, and to physical volcanology. With continued developments in thermal imager technology and lower instrument costs, there will be increasing opportunity to gather valuable observations of volcanoes. It is thus timely to review the state of the art and we hope thereby to stimulate further research and innovation in this area.

  5. Toward the camera rain gauge

    NASA Astrophysics Data System (ADS)

    Allamano, P.; Croci, A.; Laio, F.

    2015-03-01

    We propose a novel technique based on the quantitative detection of rain intensity from images, i.e., from pictures taken in rainy conditions. The method is fully analytical and based on the fundamentals of camera optics. A rigorous statistical framing of the technique allows one to obtain the rain rate estimates in terms of expected values and associated uncertainty. We show that the method can be profitably applied to real rain events, and we obtain promising results with errors of the order of ±25%. A precise quantification of the method's accuracy will require a more systematic and long-term comparison with benchmark measures. The significant step forward with respect to standard rain gauges lies in the possibility of retrieving measurements at very high temporal resolution (e.g., 30 measurements per minute) at very low cost. Prospective applications include the possibility to dramatically increase the spatial density of rain observations by exporting the technique to crowdsourced pictures of rain acquired with cameras and smartphones.

  6. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    SciTech Connect

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.; Satogata, Todd J.; Williams, David C.; Schulte, Reinhard W.

    2006-03-15

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
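    The ART update itself is the classical Kaczmarz sweep, applied row by row over the path integrals; a generic sketch (our own, with the path model entering only through the system matrix):

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=10, lam=0.5):
    """ART/Kaczmarz reconstruction.

    A : (M, N) system matrix; row i holds the intersection lengths of the
        i-th estimated proton path (SLP, CSP or MLP) with the N pixels
    b : (M,)   measured integrals along those paths
    lam : relaxation factor, 0 < lam < 2
    """
    x = np.zeros(A.shape[1])
    row_sq = np.einsum('ij,ij->i', A, A)          # squared row norms
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_sq[i] > 0.0:
                x += lam * (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x
```

    The three path estimates differ only in how the rows of A are traced: straight lines for SLP, cubic splines for CSP, and the most-likely path for MLP.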

  7. Portage and Path Dependence

    PubMed Central

    Bleakley, Hoyt; Lin, Jeffrey

    2012-01-01

    We examine portage sites in the U.S. South, Mid-Atlantic, and Midwest, including those on the fall line, a geomorphological feature in the southeastern U.S. marking the final rapids on rivers before the ocean. Historically, waterborne transport of goods required portage around the falls at these points, while some falls provided water power during early industrialization. These factors attracted commerce and manufacturing. Although these original advantages have long since been made obsolete, we document the continuing importance of these portage sites over time. We interpret these results as path dependence and contrast explanations based on sunk costs interacting with decreasing versus increasing returns to scale. PMID:23935217

  8. JAVA PathFinder

    NASA Technical Reports Server (NTRS)

    Mehhtz, Peter

    2005-01-01

    JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a Swiss Army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations such as deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.

  9. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since the apse is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  10. Guidewire path tracking and segmentation in 2D fluoroscopic time series using device paths from previous frames

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Mistretta, Charles A.

    2016-03-01

    Recent efforts to perform 3D reconstruction of interventional devices such as guidewires from monoplane and biplane fluoroscopic images require segmentation of the exact device path in the respective 2D projection images. Segmentation of the device in low-dose fluoroscopy images can be challenging, since noise and motion artifacts degrade the image quality. Additionally, extracting the device path from the segmented region may yield ambiguous results due to overlapping device parts or discontinuities in the device segmentation. The purpose of this work is to present a novel guidewire tracking and segmentation algorithm, which segments the device region based on three features: a ridge detection filter, noise reduction for curvilinear structures, and an a priori probability map. The features are calculated from background-subtracted as well as unsubtracted fluoroscopic images. The device path extraction is based on a topology-preserving thinning algorithm followed by a path search, which minimizes a cost function based on the distance and directional difference between adjacent segments as well as the similarity to the device path extracted from the previous frame. The quantitative evaluation was performed using 7 data sets acquired from a canine study. Device shapes with different complexities are compared to semi-automatic segmentations. An average segmentation accuracy of 0.50 ± 0.41 mm was achieved, where each point along the device was compared to the point on the reference device centerline at the same distance from the device tip. In all cases the device path could be resolved correctly, which would allow a more accurate and reliable reconstruction of the 3D path of the device.
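    The path search described here can be illustrated with a Dijkstra-style search over (node, predecessor) states, so that the directional term between adjacent segments is well defined; the weights and helper names below are our own assumptions, not the authors' implementation:

```python
import heapq
import itertools
import numpy as np

def extract_device_path(nodes, edges, start, end, prev_path=None,
                        w_dir=1.0, w_prev=1.0):
    """Minimal-cost path through a thinned-skeleton graph.

    nodes : (N, 2) pixel coordinates of skeleton points
    edges : dict mapping node index -> list of neighbour indices
    Step cost = segment length + w_dir * turning angle
                + w_prev * distance to the previous frame's path.
    """
    def prev_dist(v):
        if prev_path is None:
            return 0.0
        return float(np.min(np.linalg.norm(prev_path - nodes[v], axis=1)))

    tie = itertools.count()                       # breaks ties in the heap
    heap = [(0.0, next(tie), start, -1, None)]
    best, parent = {}, {}
    while heap:
        c, _, u, p, par = heapq.heappop(heap)
        if (u, p) in best:
            continue
        best[(u, p)], parent[(u, p)] = c, par
        for v in edges.get(u, []):
            if v == p:
                continue                          # no immediate backtracking
            seg = float(np.linalg.norm(nodes[v] - nodes[u]))
            turn = 0.0
            if p >= 0:
                d1, d2 = nodes[u] - nodes[p], nodes[v] - nodes[u]
                cosang = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12)
                turn = float(np.arccos(np.clip(cosang, -1.0, 1.0)))
            step = seg + w_dir * turn + w_prev * prev_dist(v)
            heapq.heappush(heap, (c + step, next(tie), v, u, (u, p)))
    ends = [s for s in best if s[0] == end]
    if not ends:
        return None
    state, path = min(ends, key=best.get), []
    while state is not None:                      # backtrack to the start
        path.append(state[0])
        state = parent[state]
    return path[::-1]
```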

  11. Validation of a 2D multispectral camera: application to dermatology/cosmetology on a population covering five skin phototypes

    NASA Astrophysics Data System (ADS)

    Jolivot, Romuald; Nugroho, Hermawan; Vabres, Pierre; Ahmad Fadzil, M. H.; Marzani, Franck

    2011-07-01

    This paper presents the validation of a new multispectral camera specifically developed for dermatological applications, based on healthy participants from five different skin phototypes (SPT). The multispectral system provides images of the skin reflectance at different spectral bands, coupled with a neural network-based algorithm that reconstructs a hyperspectral cube of cutaneous data from a multispectral image. The flexibility of the neural network-based algorithm allows reconstruction over different wavelength ranges. The hyperspectral cube provides both high spectral and high spatial information. The study population involves 150 healthy participants. The participants are classified based on their skin phototype according to the Fitzpatrick scale, and the population covers five of the six types. Each participant is imaged at three body locations: two skin areas exposed to the sun (hand, face) and one area not exposed to the sun (lower back), and each is reconstructed over 3 different wavelength ranges. The validation is performed by comparing data acquired from a commercial spectrophotometer with the reconstructed spectrum obtained by averaging the hyperspectral cube. The comparison is calculated between 430 and 740 nm due to the limits of the spectrophotometer used. The results reveal that the multispectral camera is able to reconstruct hyperspectral cubes with a goodness-of-fit coefficient greater than 0.997 for the average of all SPT at each location. The study reveals that the multispectral camera provides accurate reconstruction of hyperspectral cubes, which can be used for analysis of skin reflectance spectra.
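    If the goodness-of-fit coefficient reported here is the GFC commonly used in spectral-estimation work (an assumption on our part), it is the normalized inner product of the two spectra sampled on a common wavelength grid:

```python
import numpy as np

def gfc(measured, reconstructed):
    """Goodness-of-fit coefficient between two spectra (1.0 = identical shape)."""
    s = np.asarray(measured, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    return abs(s @ r) / (np.linalg.norm(s) * np.linalg.norm(r))
```

    Here `measured` would be the spectrophotometer curve and `reconstructed` the mean spectrum of the hyperspectral cube over the same patch, both restricted to 430-740 nm.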

  12. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  13. Laboratory calibration and characterization of video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1990-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.
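    Lens distortion of the kind determined here is usually modelled with a radial polynomial; undoing it then requires inverting that model, commonly by fixed-point iteration. A generic sketch under those standard assumptions (not this paper's specific procedure):

```python
import numpy as np

def undistort_points(pts, c, f, k1, k2, n_iter=10):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4).

    pts : (N, 2) distorted pixel coordinates
    c   : (2,)   principal point, f : principal distance in pixels
    """
    x = (np.asarray(pts, dtype=float) - c) / f   # normalized distorted coords
    xu = x.copy()
    for _ in range(n_iter):                       # fixed-point iteration
        r2 = (xu ** 2).sum(axis=1, keepdims=True)
        xu = x / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xu * f + c
```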

  14. Laboratory Calibration and Characterization of Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1989-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of non-perpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

  15. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.

  16. The European Fireball Network 2011 - Status of Cameras and Observation Results in Germany

    NASA Astrophysics Data System (ADS)

    Flohrer, J.; Oberst, J.; Heinlein, D.; Grau, T.

    2012-09-01

    The European Fireball Network (EN) has been continuously operating since 1966 (Figure 1). Beginning in 1995, observing stations in Germany have been managed and operated by the DLR Institute of Planetary Research, Berlin. The stations in Germany are of the classical type, consisting of cameras on a tripod, looking down and taking images of a paraboloidal mirror. Rotating shutters mounted in front of the camera lens provide velocity information for the fast-moving meteors. The cameras are equipped with film. Typically, one long-exposure image is taken every night, covering the whole sky (Figure 1). In 2011, 14 cameras were in regular operation. 59 fireballs on 81 photographs could be recorded, representing an extraordinary "fireball yield". The number of 78 fireball co-registrations with other central-European camera systems was extraordinary as well. Data reduction and orbit reconstruction (carried out at Ondřejov Observatory by P. Spurný and team) were possible for 6 meteors. The brightest meteor, registered on May 4, had a magnitude of -10. In the area monitored by the cameras, one fireball was recorded (Figure 1) that was, with high probability, followed by a meteorite fall. Unfortunately, due to terrain conditions within the urban area of Berlin, no meteorites could be recovered.

  17. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  18. Multiplex imaging with multiple-pinhole cameras

    NASA Technical Reports Server (NTRS)

    Brown, C.

    1974-01-01

    When making photographs in X rays or gamma rays with a multiple-pinhole camera, the individual images of an extended object such as the sun may be allowed to overlap. Then the situation is in many ways analogous to that in a multiplexing device such as a Fourier spectroscope. Some advantages and problems arising with such use of the camera are discussed, and expressions are derived to describe the relative efficacy of three exposure/postprocessing schemes using multiple-pinhole cameras.
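
    The multiplex trade-off can be made concrete with a simple photon-noise argument: N overlapping pinhole images superpose N copies of the signal while the shot noise adds in quadrature. This is an illustrative scaling only, not the specific expressions derived in the paper:

```latex
% Illustrative multiplex scaling, not the paper's derived expressions:
% N superposed pinhole images add N copies of the source signal S,
% while the photon (shot) noise of the summed exposure adds in
% quadrature, so for a background-free detector
\[
  \mathrm{SNR}_N \;=\; \frac{N S}{\sqrt{N S}} \;=\; \sqrt{N S}
  \;=\; \sqrt{N}\,\mathrm{SNR}_1 ,
\]
% an advantage that is eroded once the overlapping images contribute
% background counts B to each resolution element:
\[
  \mathrm{SNR}_N \;=\; \frac{N S}{\sqrt{N (S + B)}} .
\]
```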

  19. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  20. The CTIO CCD-TV acquisition camera

    NASA Astrophysics Data System (ADS)

    Walker, Alistair R.; Schmidt, Ricardo

    A prototype CCD-TV camera has been built at CTIO, conceptually similar to the cameras in use at Lick Observatory. A GEC CCD is used as the detector, cooled thermoelectrically to -45 °C. Pictures are displayed via an IBM PC clone computer and an ITI image display board. Results of tests at the CTIO telescopes are discussed, including comparisons with the RCA ISIT cameras used at present for acquisition and guiding.

  1. Recent advances in MPEG-7 cameras

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ebrahimi, Touradj

    2006-08-01

    We propose a smart camera which performs video analysis and generates an MPEG-7 compliant stream. By producing a content-based metadata description of the scene, the MPEG-7 camera extends the capabilities of conventional cameras. The metadata is then directly interpretable by a machine. This is especially helpful in a number of applications such as video surveillance, augmented reality and quality control. As a use case, we describe an algorithm to identify moving objects and produce the corresponding MPEG-7 description. The algorithm runs in real-time on a Matrox Iris P300C camera.
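
    A hedged sketch of such a pipeline is shown below: background subtraction isolates moving objects, and a simplified MPEG-7-style XML fragment is emitted per frame. OpenCV stands in for the camera's onboard analysis, and the element names are abbreviated placeholders rather than the MPEG-7 schema's actual tags.

```python
import cv2  # assumes OpenCV 4; stands in for the camera's onboard pipeline

def describe_frame(frame, subtractor, min_area=500):
    """Detect moving objects and emit a simplified MPEG-7-style
    XML fragment (element names abbreviated for illustration)."""
    mask = subtractor.apply(frame)
    # Drop shadow pixels (value 127 in MOG2 masks), keep foreground.
    mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            regions.append(f'    <MovingRegion x="{x}" y="{y}" '
                           f'width="{w}" height="{h}"/>')
    return "<Mpeg7Description>\n" + "\n".join(regions) + "\n</Mpeg7Description>"

cap = cv2.VideoCapture(0)                  # any video source
bg = cv2.createBackgroundSubtractorMOG2()
ok, frame = cap.read()
if ok:
    print(describe_frame(frame, bg))
cap.release()
```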

  2. True-color night vision cameras

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Gat, Nahum

    2007-04-01

    This paper describes True-Color Night Vision cameras that are sensitive to the visible to near-infrared (V-NIR) portion of the spectrum, allowing the "true color" of scenes and objects to be displayed and recorded under low-light-level conditions. As compared to traditional monochrome (gray or green) night vision imagery, color imagery has increased information content and has proven to enable better situational awareness, faster response time, and more accurate target identification. Urban combat environments, where rapid situational awareness is vital, and marine operations, where there is inherent information in the color of markings and lights, are example applications that can benefit from True-Color Night Vision technology. Two different prototype cameras, employing two different true-color night vision technological approaches, are described and compared in this paper. One camera uses a fast-switching liquid crystal filter in front of a custom Gen-III image intensified camera, and the second camera is based around an EMCCD sensor with a mosaic filter applied directly to the sensor. In addition to visible light, both cameras utilize NIR to (1) increase the signal and (2) enable the viewing of laser aiming devices. The performance of the true-color cameras, along with that of standard (monochrome) night vision cameras, is reported and compared under various operating conditions in the lab and the field. In addition to subjective criteria, figures of merit designed specifically for the objective assessment of such cameras are used in this analysis.

  3. Omnidirectional underwater camera design and calibration.

    PubMed

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
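
    The core operation behind such a ray-tracing FOV simulator is refraction at each housing interface. Below is a minimal sketch assuming a single flat port and textbook index values; the vector form of Snell's law is standard, but the geometry and numbers are illustrative, not the paper's housing model.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at an interface with unit normal n
    (pointing toward the incoming ray), using Snell's law in vector
    form; returns None on total internal reflection."""
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# A camera ray leaving the housing: air -> acrylic port -> water.
d = np.array([0.3, 0.0, 1.0]); d /= np.linalg.norm(d)
normal = np.array([0.0, 0.0, -1.0])      # flat port, facing the camera
d = refract(d, normal, 1.000, 1.49)      # air into acrylic
d = refract(d, normal, 1.49, 1.333)      # acrylic into water
print(d)  # bent toward the port normal -> effective FOV shrinks underwater
```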

  4. Omnidirectional underwater camera design and calibration.

    PubMed

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-03-12

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.

  5. High-speed cameras at Los Alamos

    NASA Astrophysics Data System (ADS)

    Brixner, Berlyn

    1997-05-01

    In 1943, there was no camera with the microsecond resolution needed for research in Atomic Bomb development. We had the Mitchell camera (100 fps), the Fastax (10 000), the Marley (100 000), the drum streak (moving slit image) with 10^-5 s resolution, and electro-optical shutters for 10^-6 s. Julian Mack invented a rotating-mirror camera for 10^-7 s, which was in use by 1944. Small rotating mirror changes secured a resolution of 10^-8 s. Photography of oscilloscope traces soon recorded 10^-6 s resolution, which was later improved to 10^-8 s. Mack also invented two time resolving spectrographs for studying the radiation of the first atomic explosion. Much later, he made a large aperture spectrograph for shock wave spectra. An image dissecting drum camera running at 10^7 frames per second (fps) was used for studying high velocity jets. Brixner invented a simple streak camera which gave 10^-8 s resolution. Using a moving film camera, an interferometer pressure gauge was developed for measuring shock-front pressures up to 100 000 psi. An existing Bowen 76-lens frame camera was speeded up by our turbine driven mirror to make 1 500 000 fps. Several streak cameras were made with writing arms from 4 1/2 to 40 in. and apertures from f/2.5 to f/20. We made framing cameras with top speeds of 50 000, 1 000 000, 3 500 000, and 14 000 000 fps.

  6. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707

  7. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  8. Depth estimation using a lightfield camera

    NASA Astrophysics Data System (ADS)

    Roper, Carissa

    The latest innovation in camera design is the lightfield, or plenoptic, camera, which uses microlens arrays to capture 4-D radiance data rather than just a 2-D scene image. With the spatial and angular light-ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth in different portions of a given scene. The precision is limited by the hardware structure and by the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on a commercially available plenoptic camera.
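
    The depth cue in such data is commonly exploited through synthetic refocusing. The relation below is the textbook formulation for a 4-D lightfield, given here for orientation; it is not necessarily the specific algorithm developed in the thesis.

```latex
% Standard synthetic-refocusing relation for a 4-D lightfield
% L(u, v, s, t) (aperture plane (u,v), sensor plane (s,t)).
% Integrating over the aperture at a depth parameter \alpha gives a
% refocused image; depth can then be estimated per region by choosing
% the \alpha that maximizes a local sharpness measure.
\[
  E_{\alpha}(s, t) \;=\; \iint
      L\!\left(u,\, v,\;
               u + \frac{s - u}{\alpha},\;
               v + \frac{t - v}{\alpha}\right)
      \mathrm{d}u\,\mathrm{d}v
\]
```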

  9. Neuromagnetic source reconstruction

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.; Leahy, R.M.

    1994-12-31

    In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum norm based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
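
    To make the Gaussian connection explicit, consider the standard minimum-norm formulation (a textbook form, assumed here for illustration rather than quoted from the paper): for measurements b = Lj + n with lead field L, the Tikhonov-regularized estimate below is exactly the Bayesian MAP estimate under a zero-mean Gaussian prior on the source amplitudes j.

```latex
% Minimum-norm estimate for measurements b = Lj + n with lead field L:
% the regularized solution below is the Bayesian MAP estimate under an
% (implicitly nonphysical) Gaussian prior on the source vector j.
\[
  \hat{\jmath} \;=\; \arg\min_{j} \;\|b - Lj\|^{2} + \lambda \|j\|^{2}
  \;=\; L^{\mathsf{T}}\left(L L^{\mathsf{T}} + \lambda I\right)^{-1} b
\]
```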

  10. Surface reconstruction for 3D remote sensing

    NASA Astrophysics Data System (ADS)

    Baran, Matthew S.; Tutwiler, Richard L.; Natale, Donald J.

    2012-05-01

    This paper examines the performance of the local level set method on the surface reconstruction problem for unorganized point clouds in three dimensions. Many laser-ranging, stereo, and structured light devices produce three-dimensional information in the form of unorganized point clouds. The point clouds are sampled from surfaces embedded in R^3 from the viewpoint of a camera focal plane or laser receiver. The reconstruction of these objects in the form of a triangulated geometric surface is an important step in computer vision and image processing. The local level set method uses a Hamilton-Jacobi partial differential equation to describe the motion of an implicit surface in three-space. An initial surface which encloses the data is allowed to move until it becomes a smooth fit of the unorganized point data. A 3D point cloud test suite was assembled from publicly available laser-scanned object databases. The test suite exhibits nonuniform sampling rates and various noise characteristics to challenge the surface reconstruction algorithm. Quantitative metrics are introduced to capture the accuracy and efficiency of surface reconstruction on the degraded data. The results characterize the robustness of the level set method for surface reconstruction as applied to 3D remote sensing.
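
    The evolution equation in question has the standard Hamilton-Jacobi form shown below (the generic level-set formulation, with a speed function F that in this setting is driven by proximity to the point cloud; the specific F used in the paper is not reproduced here):

```latex
% Level-set evolution: the surface is the zero set of \phi(x, t), moved
% with normal speed F until it fits the unorganized point data.
\[
  \frac{\partial \phi}{\partial t} + F\,\lvert \nabla \phi \rvert = 0,
  \qquad
  \Gamma(t) = \{\, x \in \mathbb{R}^{3} : \phi(x, t) = 0 \,\}
\]
```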

  11. Iterative reconstruction of volumetric particle distribution

    NASA Astrophysics Data System (ADS)

    Wieneke, Bernhard

    2013-02-01

    For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART), followed by cross-correlation of sub-volumes to compute instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, like MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But as in 3D-PTV, particles are represented by 3D positions instead of the voxel-based intensity blobs of MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, and these may differ for different positions in the volume and for each camera. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy to Tomo-PIV. Finally, the method is validated with experimental data.
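
    For reference, the MART update that Tomo-PIV applies per recorded pixel has the following standard multiplicative form (textbook notation, assumed here; the paper's hybrid method replaces the voxel field E with discrete particle positions):

```latex
% Multiplicative algebraic reconstruction technique (MART): each voxel
% intensity E_j is corrected by the ratio of the recorded pixel I_i to
% its current projection, with weights w_{ij} and relaxation \mu.
\[
  E_j \;\leftarrow\; E_j \left(
      \frac{I_i}{\sum_{k} w_{ik} E_k}
  \right)^{\mu\, w_{ij}}
\]
```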

  12. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube (CRT), all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve, which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  13. Gesture recognition on smart cameras

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Chevobbe, Stephane; Darouich, Mehdi

    2013-02-01

    Gesture recognition is a feature of human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several methods of gesture recognition have been developed in recent years. However, most real-time methods are designed to operate on a personal computer with high computing resources and memory. In this paper, we analyze relevant methods found in the literature in order to investigate the ability of smart cameras to execute gesture recognition algorithms. We elaborate two hand gesture recognition pipelines. The first method is based on invariant-moment extraction and the second on fingertip detection. The hand detection method used for both pipelines is based on skin color segmentation, as sketched below. The results obtained show that the unoptimized versions of the invariant-moment method and the fingertip-detection method can reach 10 fps on an embedded processor and use about 200 kB of memory.
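
    A minimal sketch of such a skin-color segmentation stage, using OpenCV and illustrative HSV threshold bounds (the paper's tuned values and exact color space are not given here):

```python
import cv2
import numpy as np

def skin_mask(bgr):
    """Segment skin-colored pixels in HSV space. The threshold bounds
    are illustrative defaults, not the paper's tuned values."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening/closing to suppress speckle noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

frame = cv2.imread("hand.png")             # any test image
if frame is not None:
    hand = cv2.bitwise_and(frame, frame, mask=skin_mask(frame))
    cv2.imwrite("hand_segmented.png", hand)
```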

  14. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  15. Explosive Transient Camera (ETC) Program

    NASA Technical Reports Server (NTRS)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  16. LROC - Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. Of particular note is the variety of impact-induced morphologies preserved in a near pristine state in and around kilometer-scale and larger young Copernican age impact craters that include: abundant evidence of impact melt of a variety of rheological properties, including coherent flows with surface textures and planimetric properties reflecting supersolidus (e.g., liquid melt) emplacement, blocks delicately perched on

  17. Camera processing with chromatic aberration.

    PubMed

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on an image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic-aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected. PMID:25163060
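
    The magnification-error component alone can be illustrated compactly: lateral chromatic aberration magnifies the color channels by slightly different factors, so radially rescaling the red and blue channels about the image center reduces the fringing. The sketch below is a simplified stand-in for the paper's joint demosaicking/correction scheme, with purely illustrative scale factors.

```python
import numpy as np
import cv2

def correct_lateral_ca(img, scale_r=1.002, scale_b=0.998):
    """Reduce lateral chromatic aberration by radially rescaling the
    red and blue channels about the image center. The scale factors
    are illustrative; real values come from calibration. This handles
    only magnification error, not defocus blur."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    def rescale(channel, s):
        # Affine map: scale by s about the image center (cx, cy).
        M = np.array([[s, 0, (1 - s) * cx],
                      [0, s, (1 - s) * cy]], dtype=np.float32)
        return cv2.warpAffine(channel, M, (w, h), flags=cv2.INTER_LINEAR)
    b, g, r = cv2.split(img)
    return cv2.merge([rescale(b, scale_b), g, rescale(r, scale_r)])

img = cv2.imread("photo.png")
if img is not None:
    cv2.imwrite("photo_ca_corrected.png", correct_lateral_ca(img))
```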

  18. Study on the key technology of spectral reflectance reconstruction based on the weighted measurement matrix

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Bei, Li; Dong, Liang; Xiuhua, Ma

    2016-07-01

    In order to reconstruct spectral reflectance accurately, a new method of spectral reflectance reconstruction based on a weighted measurement matrix is proposed in this paper. By optimizing the measurement matrix between the spectral reflectance and the response of a camera, the method improves reconstruction accuracy. The new method is a combination of three common reflectance reconstruction methods: the pseudo-inverse method, the Wiener estimation method, and the principal component analysis method. The new measurement matrix is obtained by weighting the measurement matrices of these three methods, and the weights are chosen by minimizing the color difference. Results show that the CIE1976 color difference and RMSE values of the weighted reconstructed spectra are lower than those of the three individual methods. The spectral matching accuracy (GFC) of the method exceeds 0.99, indicating high reconstruction accuracy.
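
    A hedged numpy sketch of the combination step follows. The matrix names and shapes are assumptions, and a mean-squared spectral error stands in for the CIE color difference the paper actually minimizes:

```python
import numpy as np

def weighted_reconstruction(M_pinv, M_wiener, M_pca, w, response):
    """Combine three reconstruction matrices (pseudo-inverse, Wiener,
    PCA) with weights w = (w1, w2, w3) summing to 1, then recover a
    spectral reflectance vector from a camera response vector."""
    M = w[0] * M_pinv + w[1] * M_wiener + w[2] * M_pca
    return M @ response

def grid_search_weights(matrices, responses, targets, steps=11):
    """Coarse grid search over the weight simplex. A stand-in for the
    paper's color-difference minimization, using mean squared spectral
    error as the objective."""
    best, best_err = None, np.inf
    for w1 in np.linspace(0, 1, steps):
        for w2 in np.linspace(0, 1 - w1, steps):
            w = (w1, w2, 1 - w1 - w2)
            M = sum(wi * Mi for wi, Mi in zip(w, matrices))
            err = np.mean((M @ responses - targets) ** 2)
            if err < best_err:
                best, best_err = w, err
    return best
```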

  19. Computational Techniques in Radio Neutrino Event Reconstruction

    NASA Astrophysics Data System (ADS)

    Beydler, M.; ARA Collaboration

    2016-03-01

    The Askaryan Radio Array (ARA) is a high-energy cosmic neutrino detector constructed from stations of radio antennas buried in the ice at the South Pole. Event reconstruction relies on the analysis of the arrival times of the transient radio signals generated by neutrinos interacting within a few kilometers of the detector. Because of its depth dependence, the index of refraction of the ice complicates the interferometric directional reconstruction of candidate neutrino events. Work is ongoing to speed up the programs used for the time-consuming computation of the curved paths of the transient wave signals in the ice, as well as the interferometric beamforming. We have implemented a fast, multi-dimensional spline table lookup of the wave arrival times in order to enable raytrace-based directional reconstructions. Additionally, we have applied parallel computing across multiple graphics processing units (GPUs) in order to perform the beamforming calculations quickly.
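
    A minimal sketch of the interferometric step, assuming the ray-traced arrival times are already available (e.g. from the spline lookup). All array shapes and names are illustrative; a real analysis would scan many trial directions and sum in a GPU kernel rather than a Python loop:

```python
import numpy as np

def delay_and_sum(waveforms, fs, arrival_times):
    """Coherently sum antenna waveforms for one trial source direction.
    waveforms: (n_antennas, n_samples) array sampled at fs (Hz);
    arrival_times[i]: predicted propagation time (s) to antenna i,
    as returned by the spline lookup of ray-traced times in the ice."""
    n_ant, n_samp = waveforms.shape
    delays = arrival_times - arrival_times.min()
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros(n_samp)
    for i in range(n_ant):
        # Advance each trace by its predicted delay before summing.
        out[:n_samp - shifts[i]] += waveforms[i, shifts[i]:]
    return out

def beam_power(waveforms, fs, arrival_times):
    """Summed power for one trial direction; the direction maximizing
    this over the scan is the reconstructed arrival direction."""
    s = delay_and_sum(waveforms, fs, arrival_times)
    return np.sum(s ** 2)
```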

  20. HRSC: High resolution stereo camera

    USGS Publications Warehouse

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.