Science.gov

Sample records for camera path reconstruction

  1. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622

  2. Robust Video Stabilization Using Particle Keypoint Update and l₁-Optimized Camera Path.

    PubMed

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-02-10

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
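
    As a rough illustration of the l₁-optimized path idea, the following sketch (Python with NumPy/SciPy; not the authors' code, and the smoothing weight lam is an assumed parameter) smooths a simulated one-dimensional camera trajectory by trading off fidelity to the shaky path against its temporal total variation:

      import numpy as np
      from scipy.optimize import minimize

      def smooth_path(raw_path, lam=10.0):
          """Smooth a 1D camera path by penalizing temporal total variation."""
          def objective(p):
              fidelity = np.sum((p - raw_path) ** 2)  # stay close to the measured path
              tv = np.sum(np.abs(np.diff(p)))         # temporal total variation
              return fidelity + lam * tv
          # L-BFGS-B tolerates the non-smooth TV term well enough for a demo.
          return minimize(objective, raw_path, method="L-BFGS-B").x

      raw = np.cumsum(np.random.randn(120))  # simulated shaky 1D trajectory
      smoothed = smooth_path(raw)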

  3. Nonholonomic catheter path reconstruction using electromagnetic tracking

    NASA Astrophysics Data System (ADS)

    Lugez, Elodie; Sadjadi, Hossein; Akl, Selim G.; Fichtinger, Gabor

    2015-03-01

    Catheter path reconstruction is a necessary step in many clinical procedures, such as cardiovascular interventions and high-dose-rate brachytherapy. To overcome limitations of standard imaging modalities, electromagnetic tracking has been employed to reconstruct catheter paths. However, tracking errors pose a challenge in accurate path reconstructions. We address this challenge by means of a filtering technique incorporating the electromagnetic measurements with the nonholonomic motion constraints of the sensor inside a catheter. The nonholonomic motion model of the sensor within the catheter and the electromagnetic measurement data were integrated using an extended Kalman filter. The performance of our proposed approach was experimentally evaluated using the Ascension's 3D Guidance trakStar electromagnetic tracker. Sensor measurements were recorded during insertions of an electromagnetic sensor (model 55) along ten predefined ground truth paths. Our method was implemented in MATLAB and applied to the measurement data. Our reconstruction results were compared to raw measurements as well as filtered measurements provided by the manufacturer. The mean of the root-mean-square (RMS) errors along the ten paths was 3.7 mm for the raw measurements, and 3.3 mm with manufacturer's filters. Our approach effectively reduced the mean RMS error to 2.7 mm. Compared to other filtering methods, our approach successfully improved the path reconstruction accuracy by exploiting the sensor's nonholonomic motion constraints in its formulation. Our approach seems promising for a variety of clinical procedures involving reconstruction of a catheter path.
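
    A minimal sketch of the filtering step, assuming a planar unicycle-style nonholonomic motion model and direct position measurements (the paper's actual 3D catheter model, noise covariances Q and R, and tracker interface differ):

      import numpy as np

      def ekf_step(x, P, z, dt, Q, R):
          """One EKF predict/update cycle for state x = (px, py, theta, v)."""
          px, py, theta, v = x
          # Predict: the sensor can only advance along its heading (nonholonomic).
          x_pred = np.array([px + v * dt * np.cos(theta),
                             py + v * dt * np.sin(theta),
                             theta, v])
          F = np.array([[1, 0, -v * dt * np.sin(theta), dt * np.cos(theta)],
                        [0, 1,  v * dt * np.cos(theta), dt * np.sin(theta)],
                        [0, 0, 1, 0],
                        [0, 0, 0, 1]])
          P_pred = F @ P @ F.T + Q
          # Update with an electromagnetic position measurement z = (px, py).
          H = np.array([[1., 0, 0, 0], [0, 1., 0, 0]])
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(4) - K @ H) @ P_pred
          return x_new, P_new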

  4. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    This viewgraph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  5. Reconstructing spectral reflectance from digital camera through samples selection

    NASA Astrophysics Data System (ADS)

    Cao, Bin; Liao, Ningfang; Yang, Wenming; Chen, Haobo

    2016-10-01

    Spectral reflectance provides the most fundamental information about objects and is recognized as their "fingerprint", since reflectance is independent of illumination and viewing conditions. However, reconstructing high-dimensional spectral reflectance from relatively low-dimensional camera outputs is an ill-posed problem, and most methods require the camera's spectral responsivity. We propose a method to reconstruct spectral reflectance from digital camera outputs without prior knowledge of the camera's spectral responsivity. This method averages the reflectances of a subset selected from the main training samples by prescribing a limit on the tolerable color difference between the training samples and the camera outputs. Different tolerable color differences for the training samples were investigated with Munsell chips under a D65 light source. Experimental results show that the proposed method outperforms the classic PI method in terms of multiple evaluation criteria between the actual and the reconstructed reflectances. Besides, the reconstructed spectral reflectances lie between 0 and 1, which gives them actual physical meaning, an advantage over traditional methods.
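
    The sample-selection step can be sketched as follows (hypothetical Python; the paper applies a proper color-difference formula, whereas a plain Euclidean distance in camera RGB space is assumed here):

      import numpy as np

      def reconstruct_reflectance(camera_rgb, train_rgb, train_refl, tol=5.0):
          """Average reflectances of training samples within a color tolerance."""
          d = np.linalg.norm(train_rgb - camera_rgb, axis=1)  # stand-in color difference
          subset = train_refl[d < tol]
          if subset.size == 0:               # fall back to the nearest training sample
              subset = train_refl[[np.argmin(d)]]
          return subset.mean(axis=0)         # reconstructed spectral reflectance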

  6. Anisotropic path searching for automatic neuron reconstruction.

    PubMed

    Xie, Jun; Zhao, Ting; Lee, Tzumin; Myers, Eugene; Peng, Hanchuan

    2011-10-01

    Full reconstruction of neuron morphology is of fundamental interest for the analysis and understanding of neuronal function. We have developed a novel method capable of automatically tracing neurons in three-dimensional microscopy data. In contrast to template-based methods, the proposed approach makes no assumptions about the shape or appearance of neurite structures. Instead, an efficient seeding approach is applied to capture complex neuronal structures, and the tracing problem is solved by computing the optimal reconstruction with a weighted graph. The optimality is determined by the cost function designed for the path between each pair of seeds and by topological constraints defining the component interrelations and completeness. In addition, an automated neuron comparison method is introduced for performance evaluation and structure analysis. The proposed algorithm is computationally efficient and has been validated using different types of microscopy data sets, including Drosophila projection neurons and fly neurons with presynaptic sites. In all cases, the approach yielded promising results.
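
    The graph step can be illustrated with plain Dijkstra between two seeds (a sketch only; the paper's anisotropic cost function and topological constraints are domain-specific and not reproduced here):

      import heapq

      def dijkstra(graph, start, goal):
          """graph: dict mapping node -> list of (neighbor, cost) pairs."""
          queue, seen = [(0.0, start, [start])], set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == goal:
                  return cost, path          # minimum-cost path between seeds
              if node in seen:
                  continue
              seen.add(node)
              for nxt, w in graph.get(node, []):
                  if nxt not in seen:
                      heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
          return float("inf"), []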

  7. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets for the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, corresponding pixel determination between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on the flare trajectory estimation performance by simulations. First, the 3D trajectories of a flare and of the aircraft that dispenses the flare are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image plane coordinates of the flare in both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we have used two sources of error. One models the uncertainties in the determination of the camera view vectors, i.e. the orientations of the cameras are measured with noise. The second noise source models the imperfections in determining corresponding flare pixels between the two cameras. Finally, the 3D position of the flare is estimated from the corresponding pixel indices, the view vectors and the FOV of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
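
    For reference, the triangulation step for two tracked cameras is often the midpoint method; a sketch assuming known camera origins o1, o2 and view directions d1, d2:

      import numpy as np

      def triangulate_midpoint(o1, d1, o2, d2):
          """Midpoint of the shortest segment between two viewing rays."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          A = np.stack([d1, -d2], axis=1)        # 3x2 system in ray parameters (t, s)
          t, s = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
          return 0.5 * ((o1 + t * d1) + (o2 + s * d2))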

  8. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    PubMed

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.

  9. Real-Time Camera Guidance for 3D Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Schindler, F.; Förstner, W.

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.

  10. Robust 3D reconstruction with an RGB-D camera.

    PubMed

    Wang, Kangkan; Zhang, Guofeng; Bao, Hujun

    2014-11-01

    We present a novel 3D reconstruction approach using a low-cost RGB-D camera such as the Microsoft Kinect. Compared with previous methods, our scanning system can work well in challenging cases where there are large repeated textures and significant missing depth data. For robust registration, we propose to utilize both visual and geometry features and to combine the structure-from-motion (SfM) technique to enhance the robustness of feature matching and camera pose estimation. In addition, a novel prior-based multi-candidate RANSAC is introduced to efficiently estimate the model parameters and significantly speed up camera pose estimation under multiple correspondence candidates. Even when serious depth data loss occurs, our method can still successfully register all frames together. Loop closures can also be robustly detected and handled to eliminate the drift problem. The missing geometry can be completed by combining multiview stereo and mesh deformation techniques. A variety of challenging examples demonstrate the effectiveness of the proposed approach.
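
    For orientation, a plain RANSAC loop is sketched below (the authors' prior-based multi-candidate variant scores several candidate models per iteration; fit and residual are user-supplied stand-ins):

      import numpy as np

      def ransac(data, fit, residual, n_min, n_iter=500, thresh=1.0, seed=0):
          """Generic RANSAC: fit a model to data containing outliers."""
          rng = np.random.default_rng(seed)
          best_model, best_inliers = None, np.array([], dtype=int)
          for _ in range(n_iter):
              sample = rng.choice(len(data), n_min, replace=False)
              model = fit(data[sample])
              inliers = np.flatnonzero(residual(model, data) < thresh)
              if len(inliers) > len(best_inliers):
                  best_model, best_inliers = fit(data[inliers]), inliers
          return best_model, best_inliers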

  11. Plasma tomographic reconstruction from tangentially viewing camera with background subtraction

    SciTech Connect

    Odstrčil, M.; Mlynář, J.; Weinzettl, V.; Háček, P.; Verdoolaege, G.; Berta, M.

    2014-01-15

    Light reflections are one of the main and often underestimated issues in plasma emissivity reconstruction in the visible light spectral range. Metallic and other specular components of a tokamak generate systematic errors in the optical measurements that could lead to misinterpretation of the data. Our analysis is performed on data from the tokamak COMPASS. It is a D-shaped tokamak with a specular metallic vessel and the possibility of H-mode plasma. Data from a fast visible-light camera were used for tomographic reconstruction with background reflection subtraction to study the plasma boundary. In this article, we show that despite the highly specular tokamak wall, it is possible to obtain a realistic reconstruction. The developed algorithm shows robust results despite systematic errors in the optical measurements and calibration. The motivation is to obtain an independent estimate of the plasma boundary shape.

  12. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  13. Image reconstruction methods for the PBX-M pinhole camera.

    PubMed

    Holland, A; Powell, E T; Fonck, R J

    1991-09-10

    We describe two methods that have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera [Proc. Soc. Photo-Opt. Instrum. Eng. 691, 111 (1986)]. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least-squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape that can be obtained from equilibrium fits to the external magnetic measurements. Additionally, the reconstruction is guaranteed to be positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster for an overdetermined system than the usual Lagrange multiplier approach to finding the maximum entropy solution [J. Opt. Soc. Am. 62, 511 (1972); Rev. Sci. Instrum. 57, 1557 (1986)].
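
    As a toy contrast of the two ingredients mentioned above, the sketch below compares an unconstrained least-squares inversion of a projection model A x = b with a positivity-guaranteed fit (synthetic stand-in data; the maximum entropy machinery itself is beyond a short example):

      import numpy as np
      from scipy.optimize import nnls

      A = np.random.rand(64, 32)           # hypothetical projection geometry
      x_true = np.random.rand(32)          # hypothetical emission profile
      b = A @ x_true + 0.01 * np.random.randn(64)

      x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # simple least-squares fit
      x_pos, _ = nnls(A, b)                          # fit constrained to be positive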

  14. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  15. Mini gamma cameras for intra-operative nuclear tomographic reconstruction.

    PubMed

    Matthies, Philipp; Gardiazabal, José; Okur, Aslı; Vogel, Jakob; Lasser, Tobias; Navab, Nassir

    2014-12-01

    Nuclear imaging modalities like PET or SPECT are in extensive use in medical diagnostics. In a move towards personalized therapy, we present a flexible nuclear tomographic imaging system to enable intra-operative SPECT-like 3D imaging. The system consists of a miniaturized gamma camera mounted on a robot arm for flexible positioning, while spatio-temporal localization is provided by an optical tracking system. To facilitate statistical tomographic reconstruction of the radiotracer distribution using a maximum likelihood approach, a precise model of the mini gamma camera is generated by measurements. The entire system is evaluated in a series of experiments using a hot spot phantom, with a focus on criteria relevant for the intra-operative workflow, namely the number of required imaging positions as well as the required imaging time. The results show that high quality reconstructed images of simple hot spot configurations with positional errors of less than one millimeter are possible within acquisition times as short as 15 s.
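
    The maximum likelihood reconstruction referred to above is commonly realized as an MLEM iteration; a generic emission-tomography sketch, assuming a system matrix A (detector bins x voxels) and measured counts y:

      import numpy as np

      def mlem(A, y, n_iter=50):
          """Maximum-likelihood expectation-maximization for emission data."""
          x = np.ones(A.shape[1])                  # flat initial activity estimate
          sens = np.maximum(A.sum(axis=0), 1e-12)  # per-voxel sensitivity
          for _ in range(n_iter):
              proj = A @ x                         # expected counts
              x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
          return x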

  16. Superficial vessel reconstruction with a multiview camera system

    PubMed Central

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three monochrome complementary metal-oxide semiconductor (CMOS) cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering where the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation in virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are ∼1 mm. PMID:26759814

  17. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-efficient way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and computing the relative orientation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation shows flexibility for dynamic motion analysis, being easier and more efficient.

  18. Stereo Reconstruction of Atmospheric Cloud Surfaces from Fish-Eye Camera Images

    NASA Astrophysics Data System (ADS)

    Katai-Urban, G.; Otte, V.; Kees, N.; Megyesi, Z.; Bixel, P. S.

    2016-06-01

    In this article a method for reconstructing atmospheric cloud surfaces using a stereo camera system is presented. The proposed camera system utilizes fish-eye lenses in a flexible wide baseline camera setup. The entire workflow from the camera calibration to the creation of the 3D point set is discussed, but the focus is mainly on cloud segmentation and on the image processing steps of stereo reconstruction. Speed requirements, geometric limitations, and possible extensions of the presented method are also covered. After evaluating the proposed method on artificial cloud images, this paper concludes with results and discussion of possible applications for such systems.

  19. Responses of blowfly motion-sensitive neurons to reconstructed optic flow along outdoor flight paths.

    PubMed

    Boeddeker, N; Lindemann, J P; Egelhaaf, M; Zeil, J

    2005-12-01

    The retinal image flow a blowfly experiences in its daily life on the wing is determined by both the structure of the environment and the animal's own movements. To understand the design of visual processing mechanisms, there is thus a need to analyse the performance of neurons under natural operating conditions. To this end, we recorded flight paths of flies outdoors and reconstructed what they had seen, by moving a panoramic camera along exactly the same paths. The reconstructed image sequences were later replayed on a fast, panoramic flight simulator to identified motion-sensitive neurons of the so-called horizontal system (HS) in the lobula plate of the blowfly, which are assumed to extract self-motion parameters from optic flow. We show that under real life conditions HS-cells not only encode information about self-rotation, but are also sensitive to translational optic flow and, thus, indirectly signal information about the depth structure of the environment. These properties do not require an elaboration of the known model of these neurons, because the natural optic flow sequences generate, at least qualitatively, the same depth-related response properties when used as input to a computational HS-cell model and to real neurons.

  20. Minimising back reflections from the common path objective in a fundus camera

    NASA Astrophysics Data System (ADS)

    Swat, A.

    2016-11-01

    Eliminating back reflections is critical in the design of a fundus camera with an internal illuminating system. As there is very little light reflected from the retina, even excellent antireflective coatings do not sufficiently suppress ghost reflections, therefore the number of surfaces in the optics common to the illuminating and imaging paths shall be minimised. Typically a single aspheric objective is used. In this paper an alternative approach, an objective with all spherical surfaces, is presented. As more surfaces are required, a more sophisticated method is needed to get rid of back reflections. Typically, back-reflection analysis comprises treating subsequent objective surfaces as mirrors, and reflections from the objective surfaces are traced back through the imaging path. This approach can be applied in both sequential and nonsequential ray tracing. It is good enough for a system check but not very suitable for the early optimisation process in the optical system design phase. There are also standard ghost-control merit function operands available in sequential ray tracing, for example in the Zemax system, but these do not allow back ray tracing in an alternative optical path (illumination vs. imaging). What is proposed in this paper is a complete method to incorporate ghost-reflected energy into the ray-tracing system merit function for sequential mode, which is more efficient in the optimisation process. Although developed for the specific case of a fundus camera, the method might be utilised in a wider range of applications where ghost control is critical.

  1. 3D reconstruction from images taken with a coaxial camera rig

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    A coaxial camera rig consists of a pair of cameras which acquire images along the same optical axis but at different distances from the scene, using different focal length optics. The coaxial geometry permits the acquisition of image pairs through a substantially smaller opening than would be required by a traditional binocular stereo camera rig. This is advantageous in applications where physical space is limited, such as in an endoscope. 3D images acquired through an endoscope are desirable, but the lack of physical space for a traditional stereo baseline is problematic. While image acquisition along a common optical axis has been known for many years, 3D reconstruction from such image pairs has not been possible in the center region due to the very small disparity between corresponding points. This characteristic of coaxial image pairs has been called the unrecoverable point problem. We introduce a novel method to overcome the unrecoverable point problem in coaxial camera rigs, using a variational optimization algorithm to map pairs of optical flow fields from the different focal length cameras of a coaxial rig. Our method uses the ratio of the optical flow fields for 3D reconstruction. This results in accurate image pair alignment and produces accurate dense depth maps. We test our method on synthetic optical flow fields and on real images. We demonstrate our method's accuracy by evaluating against ground truth. Accuracy is comparable to a traditional binocular stereo camera rig, but without the traditional stereo baseline and with substantially smaller occlusions.

  2. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  3. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
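
    The underlying deformation is governed by Snell's law at the water-air interface; in vector form (a sketch, assuming refractive indices of about 1.33 for water and 1.0 for air):

      import numpy as np

      def refract(d, n, n1=1.33, n2=1.0):
          """Refract unit direction d at an interface with unit normal n."""
          d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
          cos_i = -np.dot(n, d)
          r = n1 / n2
          k = 1.0 - r**2 * (1.0 - cos_i**2)
          if k < 0:
              return None                    # total internal reflection
          return r * d + (r * cos_i - np.sqrt(k)) * n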

  4. A fast 3D reconstruction system with a low-cost camera accessory

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-06-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
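
    The core photometric-stereo computation is a per-pixel least squares under a Lambertian assumption; a compact sketch with assumed synthetic inputs (not the authors' acquisition code):

      import numpy as np

      def photometric_stereo(images, light_dirs):
          """images: (K, H, W) grayscale stack; light_dirs: (K, 3) unit vectors."""
          K, H, W = images.shape
          I = images.reshape(K, -1)                           # (K, H*W) intensities
          G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # solve L @ G = I
          albedo = np.linalg.norm(G, axis=0)
          normals = G / np.maximum(albedo, 1e-8)              # unit surface normals
          return normals.reshape(3, H, W), albedo.reshape(H, W)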

  5. A fast 3D reconstruction system with a low-cost camera accessory

    PubMed Central

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-01-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407

  6. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved a large consensus for recreational purposes due to ongoing cost decrease, image resolution and frame rate increase, along with plug-and-play usability. Consequently, they have been recently considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests were focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the

  7. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed, using a human-operated mobile cart equipped with a laser scanner, a CCD camera, an omnidirectional camera and a computer. By using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons, and so forth.

  8. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye, the so-called camera-eye system, is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting the camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
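
    For reference, the normalized eight-point algorithm mentioned above can be sketched as follows (fundamental-matrix form, from which projection matrices are derived; x1 and x2 are (N, 2) arrays of corresponding points):

      import numpy as np

      def eight_point(x1, x2):
          """Normalized eight-point estimate of the fundamental matrix."""
          def normalize(x):
              c = x.mean(axis=0)
              s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
              T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
              return np.column_stack([x, np.ones(len(x))]) @ T.T, T
          p1, T1 = normalize(x1)
          p2, T2 = normalize(x2)
          # Each row encodes the epipolar constraint p2' F p1 = 0, linear in F.
          A = np.column_stack([p2[:, [0]] * p1, p2[:, [1]] * p1, p1])
          F = np.linalg.svd(A)[2][-1].reshape(3, 3)
          U, S, Vt = np.linalg.svd(F)
          F = U @ np.diag([S[0], S[1], 0]) @ Vt    # enforce rank 2
          return T2.T @ F @ T1                     # undo the normalization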

  9. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    NASA Astrophysics Data System (ADS)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction with metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest to archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, low-cost (off-the-shelf) digital cameras. Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD and GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the models derived from the different cameras were compared. The different capabilities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  10. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background The ability to discriminate between two similar or progressively dissimilar colours is important for many animals as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have been recently developed to produce image spectrophotometers to recover reflectance spectra at individual pixel locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently from their chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision like the perception of certain types of camouflage and colour constancy where multiple, narrow-band sensors increase resolution. PMID:25965264

  11. First use of mini gamma cameras for intra-operative robotic SPECT reconstruction.

    PubMed

    Matthies, Philipp; Sharma, Kanishka; Okur, Aslı; Gardiazabal, José; Vogel, Jakob; Lasser, Tobias; Navab, Nassir

    2013-01-01

    Different types of nuclear imaging systems have been used in the past, starting with pre-operative gantry-based SPECT systems and gamma cameras for 2D imaging of radioactive distributions. The main applications are concentrated on diagnostic imaging, since traditional SPECT systems and gamma cameras are bulky and heavy. With the development of compact gamma cameras with good resolution and high sensitivity, it is now possible to use them without a fixed imaging gantry. Mounting the camera onto a robot arm solves the weight issue, while also providing a highly repeatable and reliable acquisition platform. In this work we introduce a novel robotic setup performing scans with a mini gamma camera, along with the required calibration steps, and show the first SPECT reconstructions. The results are extremely promising, both in terms of image quality as well as reproducibility. In our experiments, the novel setup outperformed a commercial fhSPECT system, reaching accuracies comparable to state-of-the-art SPECT systems.

  12. Semantically Documenting Virtual Reconstruction: Building a Path to Knowledge Provenance

    NASA Astrophysics Data System (ADS)

    Bruseker, G.; Guillem, A.; Carboni, N.

    2015-08-01

    The outcomes of virtual reconstructions of archaeological monuments are not just images for aesthetic consumption but rather present a scholarly argument and decision making process. They are based on complex chains of reasoning grounded in primary and secondary evidence that enable a historically probable whole to be reconstructed from the partial remains left in the archaeological record. This paper will explore the possibilities for documenting and storing in an information system the phases of the reasoning, decision and procedures that a modeler, with the support of an archaeologist, uses during the virtual reconstruction process and how they can be linked to the reconstruction output. The goal is to present a documentation model such that the foundations of evidence for the reconstructed elements, and the reasoning around them, are made not only explicit and interrogable but also can be updated, extended and reused by other researchers in future work. Using as a case-study the reconstruction of a kitchen in a Roman domus in Grand, we will examine the necessary documentation requirements, and the capacity to express it using semantic technologies. For our study we adopt the CIDOC-CRM ontological model, and its extensions CRMinf, CRMBa and CRMgeo as a starting point for modelling the arguments and relations.

  13. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle avoidance stages are essential for a safe soft landing of CE-3, so a precise spacecraft trajectory in these stages is of great significance for verifying the orbital control strategy, optimizing the orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last just 25 s, it is difficult to capture the spacecraft's subtle movements with the Measurement and Control System or with radio observations. Under this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3 because of its technical advantages, such as being unaffected by the lunar gravity field and the spacecraft kinetic model, high resolution, high frame rate, and so on. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, spanning about 9 s, using Single Image Space Resection (SISR). The results show that CE-3's subtle movements during the hovering stage emerge clearly in the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.
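
    Single image space resection can be illustrated with OpenCV's PnP solver (illustrative coordinates and intrinsics only; not the CE-3 processing code):

      import numpy as np
      import cv2

      object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                             [0.5, 0.5, 1.0]], dtype=np.float64)   # known 3D points
      image_pts = np.array([[320, 240], [420, 242], [318, 140], [421, 138],
                            [370, 190]], dtype=np.float64)         # their projections
      K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

      ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
      R, _ = cv2.Rodrigues(rvec)          # camera rotation matrix
      camera_position = -R.T @ tvec       # camera position in object coordinates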

  14. A novel three-dimensional imaging method by means of coded cameras array recording and computational reconstruction

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Liu, Liren; Yang, Qingguo

    2007-04-01

    In this paper, we propose a novel three-dimensional imaging method in which the object is captured by a coded cameras array (CCA) and computationally reconstructed as a series of longitudinal layered surface images of the object. The distribution of cameras in the array, named the code pattern, is crucial for reconstructed image fidelity when correlation decoding is used. We use the DIRECT global optimization algorithm to design code patterns that possess proper imaging properties. We have conducted preliminary experiments to verify and test the performance of the proposed method with a simple discontinuous object and a small-scale CCA including nine cameras. After certain procedures such as capturing, photograph integration, computational reconstruction and filtering, etc., we obtain reconstructed longitudinal layered surface images of the object with a higher signal-to-noise ratio. The results of the experiments show that the proposed method is feasible. It is a promising method for use in fields such as remote sensing, machine vision, etc.

  15. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
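
    The skeletal-network idea can be illustrated with a plain maximum spanning tree over a weighted image-connectivity graph (using networkx; the paper's hierarchical, degree-bounded construction adds constraints on top of this and guarantees 3-view configurations):

      import networkx as nx

      G = nx.Graph()                          # toy topologically connected network
      G.add_weighted_edges_from([
          ("img0", "img1", 0.9), ("img1", "img2", 0.8),
          ("img0", "img2", 0.3), ("img2", "img3", 0.7),
      ])
      scn = nx.maximum_spanning_tree(G)       # sparse subset of pairs to match
      print(sorted(scn.edges(data="weight")))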

  16. Calibration of time-of-flight cameras for accurate intraoperative surface reconstruction

    SciTech Connect

    Mersmann, Sven; Seitel, Alexander; Maier-Hein, Lena; Erz, Michael; Jähne, Bernd; Nickel, Felix; Mieth, Markus; Mehrabi, Arianeb

    2013-08-15

    Purpose: In image-guided surgery (IGS) intraoperative image acquisition of tissue shape, motion, and morphology is one of the main challenges. Recently, time-of-flight (ToF) cameras have emerged as a new means for fast range image acquisition that can be used for multimodal registration of the patient anatomy during surgery. The major drawbacks of ToF cameras are systematic errors in the image acquisition technique that compromise the quality of the measured range images. In this paper, we propose a calibration concept that, for the first time, accounts for all known systematic errors affecting the quality of ToF range images. Laboratory and in vitro experiments assess its performance in the context of IGS. Methods: For calibration, the camera-related error sources depending on the sensor, the sensor temperature and the set integration time are corrected first, followed by the scene-specific errors, which are modeled as functions of the measured distance, the amplitude and the radial distance to the principal point of the camera. Accounting for the high accuracy demands in IGS, we use a custom-made calibration device to provide the reference distance data with which the cameras are calibrated. To evaluate the mitigation of the error, the residual error remaining after ToF depth calibration was compared with that arising from using the manufacturer routines for several state-of-the-art ToF cameras. The accuracy of reconstructed ToF surfaces was investigated after multimodal registration with computed tomography (CT) data of liver models by assessment of the target registration error (TRE) of markers introduced in the livers. Results: For the inspected distance range of up to 2 m, our calibration approach yielded a mean residual error to reference data ranging from 1.5 ± 4.3 mm for the best camera to 7.2 ± 11.0 mm. When compared to the data obtained from the manufacturer routines, the residual error was reduced by at least 78% from worst calibration result to most accurate

  17. Reconstruction of long horizontal-path images under anisoplanatic conditions using multiframe blind deconvolution

    NASA Astrophysics Data System (ADS)

    Archer, Glen E.; Bos, Jeremy P.; Roggemann, Michael C.

    2013-08-01

    All optical systems that operate in or through the atmosphere suffer from turbulence induced image blur. Both military and civilian surveillance, gun sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. This work explores the mean square error (MSE) performance of a multiframe blind deconvolution (MFBD) technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence degraded imagery representing low, moderate, and severe turbulence conditions. Each set consisted of 1000 simulated turbulence degraded images. The MSE performance of the estimator is evaluated as a function of the number of images, and the number of Zernike polynomial terms used to characterize the point spread function. A Gaussian noise model-based MFBD algorithm reconstructs objects that showed as much as 40% improvement in MSE with as few as 14 frames and 30 Zernike coefficients used in the reconstruction, despite the presence of anisoplanatism in the data. An MFBD algorithm based on the Poisson noise model required a minimum of 50 frames to achieve significant improvement over the average MSE for the data set. Reconstructed objects show as much as 38% improvement in MSE using 175 frames and 30 Zernike coefficients in the reconstruction.

  18. A Trajectory and Orientation Reconstruction Method for Moving Objects Based on a Moving Monocular Camera

    PubMed Central

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-01-01

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition for when this method has a unique solution is provided. An extended application of the method is to not only achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which would not be obtained by PnP problem methods due to a lack of features. It is a breakthrough improvement that develops the intersection measurement from the traditional “point intersection” to “trajectory intersection” in videometrics. The trajectory of the object point can be obtained by using only linear equations without any initial value or iteration; the orientation of the object under poor conditions can also be calculated. The required condition for the existence of a definite solution of this method is derived from equivalence relations of the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, or a conic and other simple trajectories, but also provides good results for more complicated trajectories, making it widely applicable. PMID:25760053

  19. 3D Reconstruction of Static Human Body with a Digital Camera

    NASA Astrophysics Data System (ADS)

    Remondino, Fabio

    2003-01-01

    The 3D reconstruction and modeling of real humans is currently one of the most challenging problems and a topic of great interest. Human models are used for movies, video games and ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally the 3D coordinates of the matched points are computed with a mean accuracy of ca. 2 mm by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
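
    The forward ray intersection step can be illustrated with the classic linear least-squares point-from-rays computation below; this is a generic textbook formulation, not the author's implementation.

    ```python
    # Least-squares 3-D point minimizing the summed squared distance to all
    # rays (C_i, d_i) from the oriented cameras to one matched image point.
    import numpy as np

    def intersect_rays(centers, dirs):
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for C, d in zip(centers, dirs):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P
            b += P @ C
        return np.linalg.solve(A, b)         # needs >= 2 non-parallel rays
    ```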

  20. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    PubMed

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition of when this method has the unique solution is provided. An extended application of the method is to not only achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which would not be obtained by PnP problem methods due to lack of features. It is a breakthrough improvement that develops the intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained by using only linear equations without any initial value or iteration; the orientation of the object with poor conditions can also be calculated. The required condition for the existence of definite solution of this method is derived from equivalence relations of the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, or a conic and another simple trajectory, but also provides good result for more complicated trajectories, making it widely applicable.

  1. Iterative reconstruction of SiPM light response functions in a square-shaped compact gamma camera.

    PubMed

    Morozov, Andrey; Alves, Francisco; Marcos, Joao; Martins, Raimundo; Pereira, Luis; Solovov, Vladimir; Chepel, Vitaly

    2017-02-13

    Compact gamma cameras with a square-shaped monolithic scintillator crystal and an array of silicon photomultipliers (SiPMs) are actively being developed for applications in areas such as small animal imaging, cancer diagnostics and radiotracer-guided surgery. Statistical methods of position reconstruction, which are potentially superior to the traditional centroid method, require accurate knowledge of the spatial response of each photomultiplier. Using both Monte Carlo simulations and experimental data obtained with a camera prototype, we show that the spatial response of all photomultipliers (light response functions) can be parameterized with axially symmetric functions obtained iteratively from flood-field irradiation data. The study was performed with a camera prototype equipped with a 30 × 30 × 2 mm³ LYSO crystal and an 8 × 8 array of SiPMs for 140 keV gamma rays. The simulations demonstrate that the images, reconstructed with the maximum likelihood method using the response obtained with the iterative approach, exhibit only minor distortions: the average difference between the reconstructed and true positions in the X and Y directions does not exceed 0.2 mm in the central area of 22 × 22 mm² and 0.4 mm at the periphery of the camera. A similar level of image distortion is shown experimentally with the camera prototype.
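
    A minimal sketch of one round of the iterative light-response-function fit: event positions from the previous pass (centroids on the first pass) are used to average each SiPM's signal against radial distance, yielding an axially symmetric profile; positions are then re-estimated against the profiles and the cycle repeats. Binned averaging here stands in for the smooth parameterization used in practice, and the function name and bin settings are illustrative.

    ```python
    # One SiPM's axially symmetric response from flood-field data, given the
    # current event-position estimates.  Re-fitting positions against these
    # profiles (e.g. by maximum likelihood) closes the iteration loop.
    import numpy as np

    def radial_lrf(positions, signals, sipm_xy, n_bins=30, r_max=40.0):
        """positions: (N, 2) current event estimates; signals: (N,) one SiPM;
        sipm_xy: (2,) SiPM center.  Returns bin centers and mean signal."""
        r = np.linalg.norm(positions - sipm_xy, axis=1)
        edges = np.linspace(0.0, r_max, n_bins + 1)
        idx = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
        sums = np.bincount(idx, weights=signals, minlength=n_bins)
        counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, sums / counts        # mean signal vs. radius
    ```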

  2. Shared path protection through reconstructing sharable bandwidth based on spectrum segmentation for elastic optical networks

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Zhang, Mingjia; Yi, Pengfei; Chen, Yong

    2016-12-01

    In order to address the problems of spectrum fragmentation and the low sharing degree of spectrum resources in survivable elastic optical networks, an improved algorithm, called shared path protection by reconstructing sharable bandwidth based on spectrum segmentation (SPP-RSB-SS), is proposed in this paper. In the SPP-RSB-SS algorithm, to reduce spectrum fragmentation and improve the success rate of spectrum allocation, the whole spectrum resource is partitioned into several spectrum segments, and each segment is preferentially allocated to requests with the same bandwidth requirement. Meanwhile, the protection path with the higher spectrum sharing degree is selected by optimizing the link cost function and reconstructing sharable bandwidth, so that the protection path maximizes the sharable spectrum usage among multiple protection paths. The simulation results indicate that the SPP-RSB-SS algorithm can effectively increase the sharing degree of the protection spectrum. Furthermore, it enhances spectrum utilization and significantly reduces the bandwidth blocking probability.
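
    The segment-based allocation idea can be illustrated with a toy first-fit search restricted to the segment reserved for a request's bandwidth class. This sketch is illustrative only and is not the SPP-RSB-SS algorithm itself.

    ```python
    # Serve a request inside the spectrum segment matching its bandwidth
    # class: scan the segment for the first run of 'demand' free slots.
    def first_fit(occupied, seg_start, seg_end, demand):
        """occupied: set of busy slot indices; returns start slot or None."""
        run = 0
        for s in range(seg_start, seg_end):
            run = run + 1 if s not in occupied else 0
            if run == demand:
                return s - demand + 1
        return None
    ```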

  3. Reconstruction of an effective magnon mean free path distribution from spin Seebeck measurements in thin films

    NASA Astrophysics Data System (ADS)

    Chavez-Angel, E.; Zarate, R. A.; Fuentes, S.; Guo, E. J.; Kläui, M.; Jakob, G.

    2017-01-01

    A thorough understanding of the mean-free-path (MFP) distribution of the energy carriers is crucial to engineer and tune the transport properties of materials. In this context, a significant body of work has investigated the phonon and electron MFP distributions; however, similar studies of the magnon MFP distribution have not been carried out so far. In this work, we used thickness-dependent measurements of the longitudinal spin Seebeck effect (LSSE) of yttrium iron garnet films to reconstruct the cumulative distribution of an SSE-related effective magnon MFP. Using the experimental data reported by Guo et al (2016 Phys. Rev. X 6 031012), we adapted the phonon MFP reconstruction algorithm proposed by Minnich (2012 Phys. Rev. Lett. 109 205901) and applied it to magnons. The reconstruction showed that magnons with different MFPs contribute in different ways to the total LSSE and that the effective magnon MFP distribution spreads far beyond its typical averaged values.

  4. Sunyaev-Zel'dovich cluster reconstruction in multiband bolometer camera surveys

    NASA Astrophysics Data System (ADS)

    Pires, S.; Juin, J. B.; Yvon, D.; Moudden, Y.; Anthoine, S.; Pierpaoli, E.

    2006-08-01

    We present a new method for the reconstruction of Sunyaev-Zel'dovich (SZ) galaxy clusters in future SZ-survey experiments using multiband bolometer cameras such as Olimpo, APEX, or Planck. Our goal is to optimise SZ-cluster extraction from our observed noisy maps. None of the algorithms used in the detection chain is tuned using prior knowledge of the SZ-cluster signal or of other astrophysical sources (optical spectrum, noise covariance matrix, or covariance of SZ-cluster wavelet coefficients). First, a blind separation of the different astrophysical components that contribute to the observations is conducted using an Independent Component Analysis (ICA) method. This is a new application of ICA to multichannel astrophysical data analysis. Then, a recent nonlinear filtering technique in the wavelet domain, based on multiscale entropy and the False Discovery Rate (FDR) method, is used to detect and reconstruct the galaxy clusters. We use the Source Extractor software to identify the detected clusters. The proposed method was applied to realistic simulations of observations that we produced as mixtures of synthetic maps of the four brightest light sources in the range 143 GHz to 600 GHz, namely the Sunyaev-Zel'dovich effect, the Cosmic Microwave Background (CMB) anisotropies, the extragalactic infrared point sources and the Galactic dust emission. We also implemented a simple model of optics and noise to account for instrumental effects. Assuming nominal performance for the near-future SZ survey Olimpo, our detection chain recovers 25% of the clusters with mass larger than 10¹⁴ M⊙, with 90% purity. Our results are compared with those obtained with published algorithms. This new method has a high global detection efficiency in the high-purity/low-completeness region, while being a blind algorithm (i.e., it uses no prior assumptions on the data to be extracted).
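
    The blind component-separation step can be sketched with FastICA from scikit-learn, treating pixels as samples and frequency channels as mixed signals. The map shapes, component count, and function name are illustrative assumptions.

    ```python
    # Unmix multiband maps into statistically independent source maps
    # (e.g. SZ, CMB, dust, point sources) with FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    def separate(channel_maps, n_components=4):
        """channel_maps: (n_channels, ny, nx) observed multiband maps."""
        n_ch, ny, nx = channel_maps.shape
        X = channel_maps.reshape(n_ch, -1).T          # pixels as samples
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(X)                # (n_pixels, n_components)
        return sources.T.reshape(n_components, ny, nx)
    ```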

  5. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
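
    A hedged sketch of the slice-wise back-projection: each pixel of an image slice is scored by how far its direction from the cone apex deviates from the cone's half-angle, and a simple tolerance extracts the binary intersection curve. The scoring metric and fixed tolerance below are simplified stand-ins for the paper's solution matrix and threshold function.

    ```python
    # Back-project one Compton event cone (apex, axis u, half-angle) onto
    # the image slice at height z0 over the pixel grids xs, ys.
    import numpy as np

    def cone_slice(apex, u, half_angle, xs, ys, z0, tol=0.01):
        u = np.asarray(u, float) / np.linalg.norm(u)
        X, Y = np.meshgrid(xs, ys)
        V = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z0 - apex[2])])
        cosang = np.tensordot(u, V, axes=1) / np.linalg.norm(V, axis=0)
        score = np.abs(cosang - np.cos(half_angle))  # distance-like "solution matrix"
        return score < tol                           # binary intersection curve
    ```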

  6. Optimal UAV Path Planning for Tracking a Moving Ground Vehicle with a Gimbaled Camera

    DTIC Science & Technology

    2014-03-27

    optimization. The next step is to find the optimal path for a given ground profile by using nonlinear programming (NLP) to solve the optimal control problem...that the choice of the NLP solver can have an effect on the solution. For more information about both the IPM and ASM solvers see reference [15]. The

  7. The Effect of Tissue Inhomogeneities on the Accuracy of Proton Path Reconstruction for Proton Computed Tomography

    NASA Astrophysics Data System (ADS)

    Wong, Kent; Erdelyi, Bela; Schulte, Reinhard; Bashkirov, Vladimir; Coutrakon, George; Sadrozinski, Hartmut; Penfold, Scott; Rosenfeld, Anatoly

    2009-03-01

    Maintaining a high degree of spatial resolution in proton computed tomography (pCT) is a challenge due to the statistical nature of the proton path through the object. Recent work has focused on the formulation of the most likely path (MLP) of protons through a homogeneous water object, and the accuracy of this approach has been tested experimentally with a homogeneous PMMA phantom. Inhomogeneities inside the phantom, consisting of, for example, air and bone, will lead to unavoidable inaccuracies of this approach. The purpose of this ongoing work is to characterize the systematic errors that are introduced by regions of bone and air density and how they affect the accuracy of proton CT in surrounding voxels, both in terms of spatial and density reconstruction accuracy. Phantoms containing tissue-equivalent inhomogeneities have been designed and proton transport through them has been simulated with the GEANT 4.9.0 Monte Carlo toolkit. Various iterative reconstruction techniques, including the classical fully sequential algebraic reconstruction technique (ART) and block-iterative techniques, are currently being tested, and we will select the most accurate method for this study.

  8. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study ranged from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall about 6 m high and 20 m long. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate their applicability to geotechnical problems.
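
    The cloud-to-cloud accuracy check can be sketched as a nearest-neighbour distance query against the TLS reference, in the spirit of what CloudCompare computes; the function name is an illustrative assumption.

    ```python
    # Per-point deviation of a camera-derived cloud from the TLS cloud,
    # taken as the distance to the nearest TLS point.
    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_deviation(camera_pts, tls_pts):
        """Both inputs are (N, 3) arrays of georeferenced points."""
        dist, _ = cKDTree(tls_pts).query(camera_pts)
        return dist                      # colour-code this to map the error
    ```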

  9. [The technology of fast spectral reconstruction in the longer optical path difference PEM-FTS].

    PubMed

    Zhang, Min-Juan; Wang, Zhao-Ba; Wang, Zhi-Bin; Li, Xiao; Li, Shi-Wei; Li, Jin-Hua

    2014-07-01

    The optical path difference of photoelastic modulator Fourier transform spectrometers (PEM-FTS) changes rapidly and nonlinearly, while the instrument sustains a rate as high as about 10⁵ interferograms per second, so the interferograms of the PEM-FTS are sampled at equal time intervals. In order to reconstruct these spectra rapidly and accurately, the principle of the PEM-FTS and an accelerated NUFFT algorithm are studied in the present article. The accelerated NUFFT algorithm combines convolution-kernel-based interpolation with the fast Fourier transform (FFT). The speed and precision of the algorithm are affected by the type and parameter τ of the kernel function, the single-side spreading distance q, the oversampling ratio μ, and so on. In this paper these parameters were analyzed; under the conditions N = 1024, q = 10, μ = 2 and τ = 1 × 10⁻⁶ for the Gaussian scaling factor, the accelerated NUFFT algorithm was applied to the longer-optical-path-difference PEM-FTS to rebuild the spectra of a 632.8 nm laser and a xenon lamp. The frequency error of the rebuilt 632.8 nm laser spectrum is less than 0.01352, and the interpolation time is less than 0.267 s; the method is fast and its error is small. The accelerated nonuniform fast Fourier transform is well suited to the longer-optical-path-difference PEM-FTS.

  10. Sensing and reconstruction of arbitrary light-in-flight paths by a relativistic imaging approach

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Klein, Jonathan; Bacher, Emmanuel; Metzger, Nicolas; Christnacher, Frank

    2016-10-01

    Transient light imaging is an emerging technology and an interesting sensing approach for fundamental multidisciplinary research ranging from computer science to remote sensing. Recent developments in sensor technologies and computational imaging have made this emerging sensing approach a candidate for next-generation sensor systems with rapidly increasing maturity, but it still relies on laboratory technology demonstrations. At ISL, transient light sensing is investigated by time-correlated single photon counting (TCSPC). An eye-safe shortwave infrared (SWIR) TCSPC setup, consisting of an avalanche photodiode array and a pulsed fiber laser source, is used to investigate sparsely scattered light while it propagates through air. Fundamental investigations of light in flight are carried out with the aim of reconstructing arbitrary light propagation paths. Light pulses are observed in flight at various propagation angles and distances. As demonstrated, arbitrary light paths can be distinguished due to a relativistic effect leading to a distortion of temporal signatures. A novel method analyzing the time difference of arrival (TDOA) is used to determine the propagation angle and distance with respect to this relativistic effect. Based on our results, the performance of future laser warning receivers can be improved by the use of single-photon-counting imaging devices. They can detect laser light even when the laser does not directly hit the sensor or passes at a certain distance.
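
    A toy forward model of the relativistic timing distortion exploited above: light scattered at arc length s along the pulse path is timestamped only after the additional scatter-to-camera travel time, so the apparent timing depends on the propagation angle and distance. All geometry here is illustrative.

    ```python
    # Apparent (camera-observed) times of points along a propagating pulse:
    # true passage time s/c plus the scatter-to-camera delay.
    import numpy as np

    C = 0.299792458  # speed of light in m/ns

    def apparent_times(p0, direction, s, cam):
        """p0: path start point; direction: propagation direction;
        s: (N,) arc lengths along the path; cam: camera position."""
        d = np.asarray(direction, float)
        d /= np.linalg.norm(d)
        pts = np.asarray(p0, float) + np.outer(s, d)
        return s / C + np.linalg.norm(pts - np.asarray(cam, float), axis=1) / C
    ```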

  11. Three-dimensional reconstruction of helicopter blade-tip vortices using a multi-camera BOS system

    NASA Astrophysics Data System (ADS)

    Bauknecht, André; Ewers, Benjamin; Wolf, Christian; Leopold, Friedrich; Yin, Jianping; Raffel, Markus

    2015-01-01

    Noise and structural vibrations in rotorcraft are strongly influenced by interactions between blade-tip vortices and the structural components of a helicopter. As a result, knowing the three-dimensional location of vortices is highly desirable, especially for the case of full-scale helicopters under realistic flight conditions. In the current study, we present results from a flight test with a full-scale BO 105 in an open-pit mine. A background-oriented schlieren measurement system consisting of ten cameras with a natural background was used to visualize the vortices of the helicopter during maneuvering flight. Vortex filaments could be visualized and extracted up to a vortex age of 360°. Vortex instability effects were found for several flight conditions. For the camera calibration, an iterative approach using points on the helicopter fuselage was applied. Point correspondence between vortex curves in the evaluated images was established by means of epipolar geometry. A three-dimensional reconstruction of the main part of the vortex system was carried out for the first time using stereophotogrammetry. The reconstructed vortex system had good qualitative agreement with the result of an unsteady free-wake panel method simulation. A quantitative evaluation of the 3D vortex system was carried out, demonstrating the potential of the multi-camera background-oriented schlieren measurement technique for the analysis of blade-vortex interaction effects on rotorcraft.
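
    The epipolar correspondence step can be sketched as follows: given the fundamental matrix F between two views, a point x in view 1 maps to the epipolar line l' = F x in view 2, and the matching vortex-curve point is taken as the one closest to that line. This is a generic formulation, not the authors' code.

    ```python
    # Match a vortex-curve point between two calibrated views via the
    # epipolar constraint.
    import numpy as np

    def match_on_curve(F, x1, curve2):
        """F: (3, 3) fundamental matrix; x1: (2,) point in view 1;
        curve2: (M, 2) sampled curve points in view 2."""
        l = F @ np.append(x1, 1.0)                   # epipolar line in view 2
        h = np.column_stack([curve2, np.ones(len(curve2))])
        d = np.abs(h @ l) / np.hypot(l[0], l[1])     # point-to-line distances
        return curve2[np.argmin(d)]
    ```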

  12. DIC image reconstruction using an energy minimization framework to visualize optical path length distribution.

    PubMed

    Koos, Krisztian; Molnár, József; Kelemen, Lóránd; Tamás, Gábor; Horvath, Peter

    2016-07-25

    Label-free microscopy techniques have numerous advantages such as low phototoxicity, simple setup and no need for fluorophores or other contrast materials. Despite their advantages, most label-free techniques cannot visualize specific cellular compartments or the location of proteins and the image formation limits quantitative evaluation. Differential interference contrast (DIC) is a qualitative microscopy technique that shows the optical path length differences within a specimen. We propose a variational framework for DIC image reconstruction. The proposed method largely outperforms state-of-the-art methods on synthetic, artificial and real tests and turns DIC microscopy into an automated high-content imaging tool. Image sets and the source code of the examined algorithms are made publicly available.
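
    A minimal sketch of the variational idea (not the paper's exact energy or optimizer): model the DIC image g as the directional derivative of the optical path length u along the shear direction, and recover u by gradient descent on a quadratic data term plus a smoothness term. Shear angle, weights, and step size are illustrative assumptions.

    ```python
    # Minimize E(u) = ||D_theta u - g||^2 + lam * ||grad u||^2 with periodic
    # finite differences, where D_theta is the shear-direction derivative.
    import numpy as np

    def dx(u):  return np.roll(u, -1, 1) - u
    def dy(u):  return np.roll(u, -1, 0) - u
    def dxT(p): return np.roll(p, 1, 1) - p          # adjoint of dx
    def dyT(p): return np.roll(p, 1, 0) - p          # adjoint of dy

    def reconstruct(g, theta=np.pi / 4, lam=0.1, lr=0.1, n_iter=500):
        cx, cy = np.cos(theta), np.sin(theta)
        u = np.zeros_like(g, dtype=float)
        for _ in range(n_iter):
            r = cx * dx(u) + cy * dy(u) - g           # data residual
            grad = cx * dxT(r) + cy * dyT(r) \
                 + lam * (dxT(dx(u)) + dyT(dy(u)))    # smoothness term
            u -= lr * grad
        return u
    ```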

  13. DIC image reconstruction using an energy minimization framework to visualize optical path length distribution

    PubMed Central

    Koos, Krisztian; Molnár, József; Kelemen, Lóránd; Tamás, Gábor; Horvath, Peter

    2016-01-01

    Label-free microscopy techniques have numerous advantages such as low phototoxicity, simple setup and no need for fluorophores or other contrast materials. Despite their advantages, most label-free techniques cannot visualize specific cellular compartments or the location of proteins and the image formation limits quantitative evaluation. Differential interference contrast (DIC) is a qualitative microscopy technique that shows the optical path length differences within a specimen. We propose a variational framework for DIC image reconstruction. The proposed method largely outperforms state-of-the-art methods on synthetic, artificial and real tests and turns DIC microscopy into an automated high-content imaging tool. Image sets and the source code of the examined algorithms are made publicly available. PMID:27453091

  14. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for the simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMTs) or silicon photomultipliers (SiPMs) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithms. A particular strength of the new module is its ability to reconstruct the light response functions and relative gains of the photomultipliers from flood-field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/

  15. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  16. Reconstruction of Indoor Models Using Point Clouds Generated from Single-Lens Reflex Cameras and Depth Images

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Wu, T.-S.; Lee, I.-C.; Chang, H.; Su, A. Y. S.

    2015-05-01

    This paper presents a data acquisition system consisting of multiple RGB-D sensors and digital single-lens reflex (DSLR) cameras. A systematic data processing procedure for integrating these two kinds of devices to generate three-dimensional point clouds of indoor environments is also developed and described. In the developed system, DSLR cameras are used to bridge the Kinects and provide a more accurate ray intersection condition, taking advantage of the higher resolution and image quality of the DSLR cameras. Structure from Motion (SFM) reconstruction is used to link and merge multiple Kinect point clouds and dense point clouds (from DSLR color images) to generate initial integrated point clouds. Then, bundle adjustment is used to resolve the exterior orientation (EO) of all images. Those exterior orientations are used as the initial values to combine the point clouds at each frame into the same coordinate system using a Helmert (seven-parameter) transformation. Experimental results demonstrate that the design of the data acquisition system and the data processing procedure can generate dense and fully colored point clouds of indoor environments successfully, even in featureless areas. The accuracy of the generated point clouds was evaluated by comparing the widths and heights of identified objects as well as coordinates of pre-set independent check points against in situ measurements. Based on the generated point clouds, complete and accurate three-dimensional models of indoor environments can be constructed effectively.
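
    The Helmert (seven-parameter) merge step can be sketched with the standard SVD-based similarity-transform estimate from matched points (the Horn/Umeyama construction); this is a generic formulation rather than the authors' code.

    ```python
    # Estimate scale s, rotation R and translation t such that
    # dst ~= s * R @ src + t, from matched 3-D point sets.
    import numpy as np

    def helmert(src, dst):
        ms, md = src.mean(0), dst.mean(0)
        A, B = src - ms, dst - md
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:      # guard against reflections
            D[2, 2] = -1.0
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / (A ** 2).sum() * len(src)
        t = md - s * R @ ms
        return s, R, t
    ```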

  17. Filtered back-projection reconstruction for attenuation proton CT along most likely paths.

    PubMed

    Quiñones, C T; Létang, J M; Rit, S

    2016-05-07

    This work investigates the attenuation of a proton beam to reconstruct the map of the linear attenuation coefficient of a material which is mainly caused by the inelastic interactions of protons with matter. Attenuation proton computed tomography (pCT) suffers from a poor spatial resolution due to multiple Coulomb scattering (MCS) of protons in matter, similarly to the conventional energy-loss pCT. We therefore adapted a recent filtered back-projection algorithm along the most likely path (MLP) of protons for energy-loss pCT (Rit et al 2013) to attenuation pCT assuming a pCT scanner that can track the position and the direction of protons before and after the scanned object. Monte Carlo simulations of pCT acquisitions of density and spatial resolution phantoms were performed to characterize the new algorithm using Geant4 (via Gate). Attenuation pCT assumes an energy-independent inelastic cross-section, and the impact of the energy dependence of the inelastic cross-section below 100 MeV showed a capping artifact when the residual energy was below 100 MeV behind the object. The statistical limitation has been determined analytically and it was found that the noise in attenuation pCT images is 411 times and 278 times higher than the noise in energy-loss pCT images for the same imaging dose at 200 MeV and 300 MeV, respectively. Comparison of the spatial resolution of attenuation pCT images with a conventional straight-line path binning showed that incorporating the MLP estimates during reconstruction improves the spatial resolution of attenuation pCT. Moreover, regardless of the significant noise in attenuation pCT images, the spatial resolution of attenuation pCT was better than that of conventional energy-loss pCT in some studied situations thanks to the interplay of MCS and attenuation known as the West-Sherwood effect.

  18. Filtered back-projection reconstruction for attenuation proton CT along most likely paths

    NASA Astrophysics Data System (ADS)

    Quiñones, C. T.; Létang, J. M.; Rit, S.

    2016-05-01

    This work investigates the attenuation of a proton beam to reconstruct the map of the linear attenuation coefficient of a material which is mainly caused by the inelastic interactions of protons with matter. Attenuation proton computed tomography (pCT) suffers from a poor spatial resolution due to multiple Coulomb scattering (MCS) of protons in matter, similarly to the conventional energy-loss pCT. We therefore adapted a recent filtered back-projection algorithm along the most likely path (MLP) of protons for energy-loss pCT (Rit et al 2013) to attenuation pCT assuming a pCT scanner that can track the position and the direction of protons before and after the scanned object. Monte Carlo simulations of pCT acquisitions of density and spatial resolution phantoms were performed to characterize the new algorithm using Geant4 (via Gate). Attenuation pCT assumes an energy-independent inelastic cross-section, and the impact of the energy dependence of the inelastic cross-section below 100 MeV showed a capping artifact when the residual energy was below 100 MeV behind the object. The statistical limitation has been determined analytically and it was found that the noise in attenuation pCT images is 411 times and 278 times higher than the noise in energy-loss pCT images for the same imaging dose at 200 MeV and 300 MeV, respectively. Comparison of the spatial resolution of attenuation pCT images with a conventional straight-line path binning showed that incorporating the MLP estimates during reconstruction improves the spatial resolution of attenuation pCT. Moreover, regardless of the significant noise in attenuation pCT images, the spatial resolution of attenuation pCT was better than that of conventional energy-loss pCT in some studied situations thanks to the interplay of MCS and attenuation known as the West-Sherwood effect.

  19. Data Acquisition and Image Reconstruction Systems from the miniPET Scanners to the CARDIOTOM Camera

    NASA Astrophysics Data System (ADS)

    Valastván, I.; Imrek, J.; Hegyesi, G.; Molnár, J.; Novák, D.; Bone, D.; Kerek, A.

    2007-11-01

    Nuclear imaging devices play an important role in medical diagnosis as well as drug research. The first and second generation data acquisition systems and the image reconstruction library developed provide a unified hardware and software platform for the miniPET-I, miniPET-II small animal PET scanners and for the CARDIOTOM™.

  20. Comparative analysis of iterative reconstruction algorithms with resolution recovery and new solid state cameras dedicated to myocardial perfusion imaging.

    PubMed

    Brambilla, Marco; Lecchi, Michela; Matheoud, Roberta; Leva, Lucia; Lucignani, Giovanni; Marcassa, Claudio; Zoccarato, Orazio

    2017-03-23

    New technologies are available in myocardial perfusion imaging. They include new software that recovers image resolution and limits image noise, multifocal collimators, and dedicated cardiac cameras in which solid-state detectors are used and all available detectors are constrained to imaging just the cardiac field of view. These innovations allow shortened study times or reduced administered activity to patients, while preserving image quality. Many single-center and some multicenter studies have been published during the introduction of these innovations into clinical practice. Most of these studies were conducted in the framework of "agreement studies" between different methods of clinical measurement. They aimed to demonstrate that these new software/hardware solutions allow the acquisition of images with reduced acquisition time or administered activity, with results comparable (in terms of image quality, image interpretation, perfusion defect quantification, left ventricular volumes and ejection fraction) to standard-time or standard-dose SPECT acquired with a conventional gamma camera and reconstructed with the traditional FBP method, considered the gold standard. The purpose of this review is to provide the reader with a comprehensive understanding of the pros and cons of the different approaches, summarizing the achievements reached so far and the issues that need further investigation.

  1. Temporal resolved x-ray penumbral imaging technique using heuristic image reconstruction procedure and wide dynamic range x-ray streak camera

    SciTech Connect

    Fujioka, Shinsuke; Shiraga, Hiroyuki; Azechi, Hiroshi; Nishimura, Hiroaki; Izawa, Yasukazu; Nozaki, Shinya; Chen, Yen-wei

    2004-10-01

    Temporally resolved x-ray penumbral imaging has been developed using an image reconstruction procedure based on the heuristic method and a wide-dynamic-range x-ray streak camera (XSC). The reconstruction procedure of penumbral imaging is inherently intolerant to noise: a reconstructed image is strongly distorted by artifacts caused by noise in the penumbral image. Statistical fluctuation in the number of detected photons is the dominant source of noise in an x-ray image; however, the acceptable brightness of an image is limited by the dynamic range of an XSC. The wide-dynamic-range XSC was used to obtain penumbral images bright enough to be reconstructed. Additionally, the heuristic method was introduced into the penumbral image reconstruction procedure. Distortion of the reconstructed images is sufficiently suppressed by these improvements. Density profiles of laser-driven brominated plastic and tin plasmas were measured with this technique.

  2. Automatic camera-based identification and 3-D reconstruction of electrode positions in electrocardiographic imaging.

    PubMed

    Schulze, Walther H W; Mackens, Patrick; Potyagaylo, Danila; Rhode, Kawal; Tülümen, Erol; Schimpf, Rainer; Papavassiliu, Theano; Borggrefe, Martin; Dössel, Olaf

    2014-12-01

    Electrocardiographic imaging (ECG imaging) is a method to depict electrophysiological processes in the heart. It is an emerging technology with the potential of making the therapy of cardiac arrhythmia less invasive, less expensive, and more precise. A major challenge for integrating the method into the clinical workflow is the seamless and correct identification and localization of electrodes on the thorax and their assignment to recorded channels. This work proposes a camera-based system, which can localize all electrode positions at once and to an accuracy of approximately 1 ± 1 mm. A system for the automatic identification of individual electrodes is implemented that overcomes the need for manual annotation. For this purpose, a system of markers is suggested, which facilitates precise localization to subpixel accuracy and robust identification using an error-correcting code. The accuracy of the presented system in identifying and localizing electrodes is validated in a phantom study. Its overall capability is demonstrated in a clinical scenario.

  3. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. A 3D model should in fact contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, the 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  4. A particle filter to reconstruct a free-surface flow from a depth camera

    NASA Astrophysics Data System (ADS)

    Combés, Benoit; Heitz, Dominique; Guibert, Anthony; Mémin, Etienne

    2015-10-01

    We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation (DA) method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only. This particle filter accounts for model and observation errors. The DA scheme is enhanced by using two observations instead of the single one used classically. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example and the flow in a suddenly expanding flume as a more realistic flow. The robustness of the method to depth data errors and to initial and inflow conditions is considered. We illustrate the interest of using two observations instead of one in the correction step, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor in capturing temporal sequences of depth observations is investigated. Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat-bottomed tank. It is shown that for basic initial conditions, the particle filter rapidly and remarkably reconstructs the velocity and height of the free-surface flow based on noisy measurements of the elevation alone.
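
    The analysis step at the core of such a scheme can be sketched with a basic stochastic ensemble Kalman update: ensemble anomalies supply the covariances that pull each member toward perturbed depth observations. The weighted variant used in the paper adds importance weights on top of this; the sketch below shows only the generic update, with illustrative names.

    ```python
    # Stochastic EnKF analysis step: correct the state ensemble X toward
    # observation y through a linear observation operator H.
    import numpy as np

    def enkf_update(X, y, H, obs_std, rng=np.random.default_rng(0)):
        """X: (n_state, N) ensemble; y: (n_obs,) observation; H: (n_obs, n_state)."""
        N = X.shape[1]
        Xm = X - X.mean(axis=1, keepdims=True)       # state anomalies
        S = H @ Xm                                   # observation-space anomalies
        C_yy = S @ S.T / (N - 1) + obs_std ** 2 * np.eye(len(y))
        K = (Xm @ S.T / (N - 1)) @ np.linalg.inv(C_yy)   # Kalman gain
        Y = y[:, None] + obs_std * rng.standard_normal((len(y), N))
        return X + K @ (Y - H @ X)                   # corrected ensemble
    ```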

  5. Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction.

    PubMed

    Albiol, Francisco; Corbi, Alberto; Albiol, Alberto

    2016-08-01

    We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials that are present in the scene. After calibration, equivalent points of interest can be easily identified with the help of the epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and obtains accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: a spatially shifting X-ray anode around the patient/object, and a moving patient who moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom and a real brachytherapy session have been carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths and shapes with a millimetric level of precision. The presented approach is simple and compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and it can represent a good and inexpensive alternative to other radiological modalities like CT.

  6. Multi-Camera Reconstruction of Fine Scale High Speed Auroral Dynamics

    NASA Astrophysics Data System (ADS)

    Hirsch, M.; Semeter, J. L.; Zettergren, M. D.; Dahlgren, H.; Goenka, C.; Akbari, H.

    2014-12-01

    The fine spatial structure of dispersive aurora is known to have ground-observable scales of less than 100 meters. The lifetime of prompt emissions is much less than 1 millisecond, and high-speed cameras have observed auroral forms with millisecond-scale morphology. Satellite observations have corroborated these spatial and temporal findings. Satellite observation platforms give a very valuable yet passing glance at the auroral region and the precipitation driving the aurora. To gain further insight into the fine structure of accelerated particles driven into the ionosphere, ground-based optical instruments staring at the same region of sky can capture the evolution of processes evolving on time scales from milliseconds to many hours, with continuous sample rates of 100 Hz or more. Legacy auroral tomography systems have used baselines of hundreds of kilometers, capturing a "side view" of the field-aligned auroral structure. We show that short-baseline (less than 10 km), high-speed optical observations fill a measurement gap between legacy long-baseline optical observations and incoherent scatter radar. The ill-conditioned inverse problem typical of auroral tomography, accentuated by short-baseline optical ground stations, is tackled with contemporary data inversion algorithms. We leverage the disruptive electron-multiplying charge-coupled device (EMCCD) imaging technology and solve the inverse problem via eigenfunctions obtained from a first-principles 1-D electron penetration ionospheric model. We present the latest analysis of observed auroral events from the Poker Flat Research Range near Fairbanks, Alaska. We discuss the system-level design and performance verification measures needed to ensure consistent performance for nightly multi-terabyte data acquisition synchronized between stations to better than 1 millisecond.

  7. Reconstruction of the biomechanical transfer path of femoral head necrosis: a subject-specific finite element investigation.

    PubMed

    Zhou, Guang-Quan; Pang, Zhi-Hui; Chen, Qin-Qun; He, Wei; Chen, Zhen-Qiu; Chen, Lei-Lei; Li, Zi-Qi

    2014-09-01

    According to Wolff's law, the structure and function of bone are interdependent. The disruption of trabeculae in the necrotic femoral head destroys the biomechanical transfer path, increasing the risk of a collapse of the cortical bone. Hence, biomaterials are needed that promote osteogenesis to aid in the reconstruction of a similar biomechanical transfer path, providing structural and biomechanical support to prevent and delay bone deterioration. Fibular allograft combined with impaction bone grafting (FAIBG) is a hip preservation method that provides both biological repair materials and biomechanical support. This method has been used successfully in the clinical setting, but it still lacks biomechanical insight. In this paper, we aim to provide a biomechanical basis for treatment using FAIBG; we used subject-specific finite element (FE) methods to analyse the biomechanical transfer characteristics of three hip models: physiological, pathological and postoperative. The physiological model provided insight into the biomechanical transfer characteristics of the proximal femur. The pathological model showed an abnormal stress distribution that destroyed the stress transfer capability. The postoperative model showed that FAIBG can reconstruct the biomechanical transfer path of the femoral head and reduce the risk of a collapse of the cortical bone. In conclusion, FAIBG appears to be a viable treatment for necrosis of the femoral head.

  8. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
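
    A hedged sketch of the list-mode OSEM update: each detected event contributes a row of voxel sensitivities (for a Compton camera, its back-projected cone), and voxel intensities are updated multiplicatively over event subsets. The sensitivity handling and subset scheme below are simplified assumptions, not the VIP implementation.

    ```python
    # LM-OSEM: lam_j <- (lam_j / sens_j) * sum_{i in subset} a_ij / (a_i . lam)
    import numpy as np

    def lm_osem(event_rows, sens, n_voxels, n_subsets=10, n_iter=5):
        """event_rows: list of (n_voxels,) sensitivity rows, one per event;
        sens: (n_voxels,) voxel sensitivity image (assumed known)."""
        lam = np.ones(n_voxels)
        subsets = np.array_split(np.arange(len(event_rows)), n_subsets)
        for _ in range(n_iter):
            for sub in subsets:
                back = np.zeros(n_voxels)
                for i in sub:
                    a = event_rows[i]                # back-projected cone weights
                    back += a / max(a @ lam, 1e-12)
                lam *= back / np.maximum(sens, 1e-12)
        return lam
    ```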

  9. Numerical analysis of the crack growth path in the cement mantle of the reconstructed acetabulum.

    PubMed

    Benbarek, Smaïl; Bachir Bouiadjra, Bel Abbes; El Mokhtar, Bouziane Mohamed; Achour, Tarik; Serier, Boualem

    2013-01-01

    In this study, we use the finite element method to analyze the crack propagation path in the orthopedic cement of a total hip replacement. A small Python script was incorporated into the Abaqus software to perform the following operations in a loop: extracting the crack propagation direction from the previous step using the maximum circumferential stress criterion, drawing the new path, meshing, and recomputing (stresses and fracture parameters). The loop is terminated when the user's desired crack length (number of propagations) is reached or the value of the mode I stress intensity factor becomes negative. Results show that the crack propagation path can be influenced by human body posture. The existence of a cavity in the vicinity of the crack can change its propagation path or attract it strongly enough to meet it. The crack can propagate in the outward direction (toward the acetabulum bone) but not in the opposite direction; the mode I stress intensity factor increases with the crack length while that of mode II vanishes.

  10. Terminal area automatic navigation, guidance, and control research using the Microwave Landing System (MLS). Part 4: Transition path reconstruction along a straight line path containing a glideslope change waypoint

    NASA Technical Reports Server (NTRS)

    Pines, S.

    1982-01-01

    The necessary algorithms to reconstruct the glideslope change waypoint along a straight line in the event the aircraft encounters a valid MLS update and transition in the terminal approach area are presented. Results of a simulation of the Langley B737 aircraft utilizing these algorithms are presented. The method is shown to reconstruct the necessary flight path during MLS transition resulting in zero cross track error, zero track angle error, and zero altitude error, thus requiring minimal aircraft response.

  11. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.

  12. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: disturbance effects and species recognition

    PubMed Central

    Hall, Graham P.; McDonald, Paul G.

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed in two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60 m above the water level (fixed wing models) or 40 m above individuals (multirotor models). Disturbance in the form of swimming away from the UAV through to leaving the water surface and flying away from the UAV was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys. PMID:27020132

  13. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: disturbance effects and species recognition.

    PubMed

    McEvoy, John F; Hall, Graham P; McDonald, Paul G

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed in two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60 m above the water level (fixed wing models) or 40 m above individuals (multirotor models). Disturbance in the form of swimming away from the UAV through to leaving the water surface and flying away from the UAV was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys.

  14. Contrast enhanced computed tomography and reconstruction of hepatic vascular system for transjugular intrahepatic portal systemic shunt puncture path planning

    PubMed Central

    Qin, Jian-Ping; Tang, Shan-Hong; Jiang, Ming-De; He, Qian-Wen; Chen, Hong-Bin; Yao, Xin; Zeng, Wei-Zheng; Gu, Ming

    2015-01-01

    AIM: To describe a method for transjugular intrahepatic portal systemic shunt (TIPS) placement performed with the aid of contrast-enhanced computed tomography (CECT) and three-dimensional reconstructed vascular images (3D RVIs), and to assess its safety and effectiveness. METHODS: Four hundred and ninety patients were treated with TIPS between January 2005 and December 2012. All patients underwent liver CECT and reconstruction of 3D RVIs of the right hepatic vein to portal vein (PV) prior to the operation. The 3D RVIs were carefully reviewed to plan the puncture path, from start point to target point, for the needle to pass into the PV during the TIPS procedure. RESULTS: The improved TIPS procedure was successful in 483 (98.6%) of the 490 patients. The number of punctures attempted was one in 294 (60%) patients, 2 to 3 in 147 (30%) patients, 4 to 6 in 25 (5.1%) patients and more than 6 in 17 (3.5%) patients. Seven patients failed. Of the 490 patients, 12 had punctures into the artery, 15 into the bile duct, eight into the gallbladder, and 18 through the liver capsule. Analysis of the portograms from the 483 successful cases indicated that the puncture points were all located distally to the PV bifurcation on anteroposterior images, while the points were located proximally to the bifurcation in the three cases with intraabdominal bleeding. The complications included three cases of bleeding; one patient died and two needed surgery. CONCLUSION: Use of CECT and 3D RVIs to plan the puncture path for the TIPS procedure is safe, simple and effective for clinical use. PMID:26327770

  15. A new method of reconstructing current paths in HTS tapes with defects

    NASA Astrophysics Data System (ADS)

    Podlivaev, Alexey; Rudnev, Igor

    2017-03-01

    We propose a new method for calculating current paths in high-temperature superconducting (HTS) tapes with various defects, including cracks, non-superconducting inclusions, and superconducting inclusions with lower local critical current density. The calculation method is based on a critical-state model that takes into account the dependence of the critical current on the magnetic field. The method allows us to calculate the spatial distribution of currents flowing through the defective HTS tape, both for currents induced by an external magnetic field and for transport currents from an external source. For both cases, we performed simulations of the current distributions in tapes with different types of defects and have shown that combining the action of the magnetic field and the transport current leads to a more detailed identification of the boundaries and shape of the defects. The proposed method is adapted to calculating modern superconductors in real superconducting devices and may be more useful than conventional magnetometric diagnostic studies, in which the tape is affected by the magnetic field only.

  16. Estimating where and how animals travel: an optimal framework for path reconstruction from autocorrelated tracking data.

    PubMed

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2016-03-01

    An animal's trajectory is a fundamental object of interest in movement ecology, as it directly informs a range of topics from resource selection to energy expenditure and behavioral states. Optimally inferring the mostly unobserved movement path and its dynamics from a limited sample of telemetry observations is a key unsolved problem, however. The field of geostatistics has focused significant attention on a mathematically analogous problem that has a statistically optimal solution named after its inventor, Krige. Kriging revolutionized geostatistics and is now the gold standard for interpolating between a limited number of autocorrelated spatial point observations. Here we translate Kriging for use with animal movement data. Our Kriging formalism encompasses previous methods for estimating animals' trajectories--the Brownian bridge and the continuous-time correlated random walk library--as special cases, informs users as to when these previous methods are appropriate, and provides a more general method when they are not. We demonstrate the capabilities of Kriging in a case study with Mongolian gazelles where, compared to the Brownian bridge, Kriging with a more optimal model was 10% more precise in interpolating locations and 500% more precise in estimating occurrence areas.
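
    A minimal sketch of the Kriging idea described above, not the authors' implementation: one coordinate of a track is interpolated by Gaussian-process regression with an Ornstein-Uhlenbeck (exponential) covariance, the stationary model underlying the continuous-time correlated random walk. All times, positions, and parameters below are hypothetical.

        import numpy as np

        def ou_cov(t1, t2, sigma2=1.0, tau=3.0):
            # Ornstein-Uhlenbeck covariance between two sets of times (assumed parameters).
            return sigma2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / tau)

        # Hypothetical telemetry fixes: times (h) and one coordinate (km);
        # each spatial axis can be kriged independently under this model.
        t_obs = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
        x_obs = np.array([0.0, 0.8, 1.1, 2.0, 3.5])
        noise = 0.05  # assumed GPS error variance

        t_new = np.linspace(0.0, 6.0, 61)
        K = ou_cov(t_obs, t_obs) + noise * np.eye(len(t_obs))
        k_star = ou_cov(t_new, t_obs)

        x_pred = k_star @ np.linalg.solve(K, x_obs)  # kriged (posterior mean) path
        var = ou_cov(t_new, t_new).diagonal() - np.einsum(
            "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))  # pointwise uncertainty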

  17. Digital X-ray camera for quality evaluation and three-dimensional topographic reconstruction of single crystals of biological macromolecules

    NASA Technical Reports Server (NTRS)

    Borgstahl, Gloria (Inventor); Lovelace, Jeff (Inventor); Snell, Edward Holmes (Inventor); Bellamy, Henry (Inventor)

    2008-01-01

    The present invention provides a digital topography imaging system for determining the crystalline structure of a biological macromolecule, wherein the system employs a charge coupled device (CCD) camera with antiblooming circuitry to directly convert x-ray signals to electrical signals without the use of phosphor and measures reflection profiles from the x-ray emitting source after x-rays are passed through a sample. Methods for using said system are also provided.

  18. A new target reconstruction method considering atmospheric refraction

    NASA Astrophysics Data System (ADS)

    Zuo, Zhengrong; Yu, Lijuan

    2015-12-01

    In this paper, a new target reconstruction method that accounts for atmospheric refraction is presented to improve 3D reconstruction accuracy in long-range surveillance systems. The basic idea of the method is that the atmosphere between the camera and the target is partitioned radially into several thin layers, within each of which the density is regarded as uniform. The light propagation path is then traced in reverse from sensor to target by applying Snell's law at the interface between layers, and finally the average of the target positions traced from the different cameras is taken as the reconstructed position. Reconstruction experiments were carried out, and the results showed that the new method achieves much better reconstruction accuracy than the traditional stereoscopic reconstruction method.
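
    The layered Snell's-law step lends itself to a compact sketch. Assuming plane-parallel (rather than radial) uniform layers and hypothetical indices and thicknesses, the following traces a ray through the stack and accumulates the lateral offset that refraction introduces:

        import numpy as np

        def trace_ray(theta0_deg, n_layers, thicknesses):
            # theta0_deg : incidence angle at the sensor, from the layer normal
            # n_layers   : refractive index of each layer, sensor -> target (assumed)
            # thicknesses: geometric thickness of each layer (assumed)
            theta = np.radians(theta0_deg)
            offset = 0.0
            for i, (n, d) in enumerate(zip(n_layers, thicknesses)):
                offset += d * np.tan(theta)  # horizontal advance within this layer
                if i + 1 < len(n_layers):
                    # Snell's law at the next interface: n1 sin(t1) = n2 sin(t2)
                    theta = np.arcsin(n_layers[i] * np.sin(theta) / n_layers[i + 1])
            return offset

        # Hypothetical stack: air density (hence index) decreasing away from the ground.
        n = [1.000293, 1.000260, 1.000230]
        d = [500.0, 500.0, 500.0]  # layer thicknesses in metres
        print(trace_ray(30.0, n, d))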

  19. SLAM using camera and IMU sensors.

    SciTech Connect

    Rothganger, Fredrick H.; Muguira, Maritza M.

    2007-01-01

    Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
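
    The report's filter details are not given in this record; as a generic sketch of the extended-Kalman-filter machinery such a camera/IMU system rests on, the following performs one predict/update cycle for user-supplied motion and measurement models (all names and signatures are illustrative):

        import numpy as np

        def ekf_step(x, P, u, z, f, F, h, H, Q, R):
            # x, P : state estimate and covariance
            # u, z : control input (e.g. IMU reading) and measurement (e.g. image feature)
            # f, h : nonlinear process and measurement models; F, H : their Jacobians
            # Q, R : process and measurement noise covariances
            # Predict: propagate the state with the IMU-driven motion model.
            x_pred = f(x, u)
            Fx = F(x, u)
            P_pred = Fx @ P @ Fx.T + Q
            # Update: correct with the camera measurement.
            Hx = H(x_pred)
            S = Hx @ P_pred @ Hx.T + R            # innovation covariance
            K = P_pred @ Hx.T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
            return x_new, P_new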

  20. Snapshot polarimeter fundus camera.

    PubMed

    DeHoog, Edward; Luo, Haitao; Oka, Kazuhiko; Dereniak, Eustace; Schwiegerling, James

    2009-03-20

    A snapshot imaging polarimeter utilizing Savart plates is integrated into a fundus camera for retinal imaging. Acquired retinal images can be processed to reconstruct Stokes vector images, giving insight into the polarization properties of the retina. Results for images from a normal healthy retina and retinas with pathology are examined and compared.
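
    The paper reconstructs Stokes images by demodulating Savart-plate fringes, which is not reproduced here; as a simpler stand-in, the textbook Stokes estimate from six polarization-filtered exposures illustrates the quantities being recovered:

        import numpy as np

        def stokes_from_intensities(I0, I45, I90, I135, Ircp, Ilcp):
            # Co-registered intensity images taken through linear polarizers at
            # 0/45/90/135 degrees and right/left circular analyzers (assumed inputs).
            S0 = I0 + I90    # total intensity
            S1 = I0 - I90    # horizontal vs vertical linear
            S2 = I45 - I135  # +45 vs -45 linear
            S3 = Ircp - Ilcp # right vs left circular
            return np.stack([S0, S1, S2, S3])

        def dop(S):
            # Degree of polarization per pixel, guarding against division by zero.
            return np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / np.maximum(S[0], 1e-12)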

  1. How physics teachers approach innovation: An empirical study for reconstructing the appropriation path in the case of special relativity

    NASA Astrophysics Data System (ADS)

    de Ambrosis, Anna; Levrini, Olivia

    2010-07-01

    This paper concerns an empirical study carried out with a group of high school physics teachers engaged in the Module on relativity of a Master course on the teaching of modern physics. The study is framed within the general research issue of how to promote innovation in school via teachers’ education and how to foster fruitful interactions between research and school practice via the construction of networks of researchers and teachers. In the paper, the problems related to innovation are addressed by focusing on the phase during which teachers analyze an innovative teaching proposal in the perspective of designing their own paths for the class work. The proposal analyzed in this study is Taylor and Wheeler’s approach for teaching special relativity. The paper aims to show that the roots of problems known in the research literature about teachers’ difficulties in coping with innovative proposals, and usually related to the implementation process, can be found and addressed already when teachers approach the proposal and try to appropriate it. The study is heuristic and has been carried out in order to trace the “appropriation path,” followed by the group of teachers, in terms of the main steps and factors triggering the progressive evolution of teachers’ attitudes and competences.

  2. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…
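
    A worked example of the geometrical optics the record alludes to (a sketch, not taken from the article): the thin-lens equation 1/f = 1/d_o + 1/d_i gives the image distance and magnification for an assumed 50 mm SLR lens and subject distance.

        def thin_lens(f_mm, object_mm):
            # Image distance and magnification from 1/f = 1/d_o + 1/d_i.
            di = 1.0 / (1.0 / f_mm - 1.0 / object_mm)
            return di, -di / object_mm

        di, m = thin_lens(50.0, 2000.0)  # 50 mm lens, subject 2 m away
        print(di, m)                     # ~51.3 mm behind the lens, ~1/39 scale, inverted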

  3. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology adaptively uses, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  4. SU-E-J-141: Activity-Equivalent Path Length Approach for the 3D PET-Based Dose Reconstruction in Proton Therapy

    SciTech Connect

    Attili, A; Vignati, A; Giordanengo, S; Kraan, A; Dalmasso, F; Battistoni, G

    2015-06-15

    Purpose: Ion beam therapy is sensitive to uncertainties from treatment planning and dose delivery. PET imaging of induced positron emitter distributions is a practical approach for in vivo, in situ verification of ion beam treatments. Treatment verification is usually done by comparing measured activity distributions with reference distributions, evaluated in nominal conditions. Although such comparisons give valuable information on treatment quality, a proper clinical evaluation of the treatment ultimately relies on the knowledge of the actual delivered dose. Analytical deconvolution methods relating activity and dose have been studied in this context, but were not clinically applied. In this work we present a feasibility study of an alternative approach for dose reconstruction from activity data, which is based on relating variations in accumulated activity to tissue density variations. Methods: First, reference distributions of dose and activity were calculated from the treatment plan and CT data. Then, the actual measured activity data were cumulatively matched with the reference activity distributions to obtain a set of activity-equivalent path lengths (AEPLs) along the rays of the pencil beams. Finally, these AEPLs were used to deform the original dose distribution, yielding the actual delivered dose. The method was tested by simulating a proton therapy treatment plan delivering 2 Gy to a homogeneous water phantom (the reference), which was compared with the same plan delivered to a phantom containing inhomogeneities. Activity and dose distributions were calculated by means of the FLUKA Monte Carlo toolkit. Results: The main features of the observed dose distribution in the inhomogeneous situation were reproduced using the AEPL approach. Variations in particle range were reproduced and the positions where these deviations originated were properly identified. Conclusions: For a simple inhomogeneous phantom the 3D dose reconstruction from PET
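
    A minimal sketch of the cumulative-matching step along a single pencil-beam ray, with hypothetical activity profiles (not the authors' FLUKA-based implementation): for each depth, the activity-equivalent path length is the depth at which the reference cumulative activity equals the measured cumulative activity.

        import numpy as np

        def aepl(z, act_ref, act_meas):
            # z: depths along the ray (monotonic); act_ref/act_meas: activity profiles.
            cum_ref = np.cumsum(act_ref)
            cum_meas = np.cumsum(act_meas)
            # Invert the monotone cumulative reference curve by interpolation.
            return np.interp(cum_meas, cum_ref, z)

        # Hypothetical profiles: a density change shifts the measured activity upstream.
        z = np.linspace(0.0, 20.0, 200)
        ref = np.exp(-0.5 * ((z - 12.0) / 2.0) ** 2)
        meas = np.exp(-0.5 * ((z - 10.5) / 2.0) ** 2)
        z_equiv = aepl(z, ref, meas)  # used to deform the planned dose profile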

  5. Cardiac cameras.

    PubMed

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and development of powerful computers to analyze, display, and quantify data has been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, more patient comfort and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, ie, hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  6. Mean squared error performance of MFBD nonlinear scene reconstruction using speckle imaging in horizontal imaging applications

    NASA Astrophysics Data System (ADS)

    Archer, Glen E.; Bos, Jeremy P.; Roggemann, Michael C.

    2012-05-01

    Terrestrial imaging over very long horizontal paths is increasingly common in surveillance and defense systems. All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. This paper explores the mean-squared-error (MSE) performance of a multi-frame blind deconvolution-based reconstruction technique that uses a nonlinear optimization strategy to recover a reconstructed object. Three sets of 70 turbulence-degraded images, representing low, moderate, and severe conditions, were simulated from a diffraction-limited image taken with a professional digital camera. Reconstructed objects showed significant improvements in mean squared error of 54, 22, and 14 percent for the low, moderate, and severe turbulence cases, respectively.
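
    For concreteness, the reported percent improvements can be computed as the relative reduction in MSE against the diffraction-limited truth; a small helper, assuming co-registered image arrays:

        import numpy as np

        def mse(a, b):
            return np.mean((a.astype(float) - b.astype(float)) ** 2)

        def percent_improvement(truth, degraded, reconstructed):
            # Percent reduction in MSE of the reconstruction vs. the degraded frames.
            return 100.0 * (mse(truth, degraded) - mse(truth, reconstructed)) / mse(truth, degraded)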

  7. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined from the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve spatial orientation. The proposed techniques could considerably improve
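
    The epipolar reconstruction step can be illustrated with standard linear (DLT) triangulation of one pair of corresponding centerline points from the two views; the projection matrices and points are assumed given (a sketch, not the paper's exact solver):

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            # P1, P2: 3x4 projection matrices of the biplane views;
            # x1, x2: corresponding 2D image points (pixels).
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)   # least-squares null vector of A
            X = Vt[-1]
            return X[:3] / X[3]           # dehomogenize to a 3D world point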

  8. CCD Camera

    DOEpatents

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  9. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  10. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.

  11. Holographic motion picture camera with Doppler shift compensation

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.

  12. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  13. The DRAGO gamma camera

    SciTech Connect

    Fiorini, C.; Gola, A.; Peloso, R.; Longoni, A.; Lechner, P.; Soltau, H.; Strueder, L.; Ottobrini, L.; Martelli, C.; Lui, R.; Madaschi, L.; Belloli, S.

    2010-04-15

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm², coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated ⁵⁷Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a Maximum Likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45 deg. with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.

  14. Novel double path shearing interferometer in corneal topography measurements

    NASA Astrophysics Data System (ADS)

    Licznerski, Tomasz J.; Jaronski, Jaroslaw; Kosz, Dariusz

    2005-09-01

    The paper presents an approach to measuring corneal topography by use of a patent-pending double path shearing interferometer (DPSI). Laser light reflected from the surface of the cornea is divided and directed to the inputs of two interferometers. The interferometers use lateral shearing of wavefronts in two orthogonal directions. A tilt of one of the mirrors in each interferometric setup, perpendicular to the lateral shear, introduces parallel carrier-frequency fringes at the output of each interferometer. Orthogonal linear polarizations of the laser light are used in the two interferometers of the DPSI. Two images of fringe patterns are recorded by a high resolution digital camera. The obtained fringe patterns are used for phase difference reconstruction. The phase of the wavefront was reconstructed by use of algorithms for a large grid based on discrete integration. The in vivo method can also be used for tear film stability measurements and for tests of artificial tears and contact lenses.

  15. Path Finder

    SciTech Connect

    Rigdon, J. Brian; Smith, Marcus Daniel; Mulder, Samuel A

    2014-01-07

    PathFinder is a graph search program that traverses a directed cyclic graph to find pathways between labeled nodes. Searches for paths through ordered sequences of labels are termed signatures. Determining the presence of signatures within one or more graphs is the primary function of PathFinder. PathFinder can work either in batch mode or interactively with an analyst. Results are limited to reporting whether or not a given signature is present in the graph(s).
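
    PathFinder's internals are not documented in this record; the following sketch shows one way a signature query over a directed (possibly cyclic) graph can be answered with a depth-first search restricted to simple paths. The node names, labels, and helper function are illustrative:

        from collections import defaultdict

        def has_signature(edges, labels, signature):
            # edges: iterable of (u, v) directed edges; labels: dict node -> label;
            # signature: ordered sequence of labels to match along one path.
            adj = defaultdict(list)
            for u, v in edges:
                adj[u].append(v)

            def dfs(node, idx, visited):
                if labels.get(node) == signature[idx]:
                    idx += 1
                    if idx == len(signature):
                        return True
                # Restrict to simple paths so cycles cannot loop forever.
                return any(dfs(n, idx, visited | {n})
                           for n in adj[node] if n not in visited)

            return any(dfs(s, 0, {s}) for s in labels)

        edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
        labels = {"a": "start", "b": "proc", "c": "proc", "d": "end"}
        print(has_signature(edges, labels, ["start", "proc", "end"]))  # True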

  16. Calibration method for a central catadioptric-perspective camera system.

    PubMed

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot determine the extrinsic parameters between the central catadioptric camera and a perspective camera effectively. We present a novel calibration method for a central catadioptric-perspective camera system, in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera and faced toward the calibration pattern. The transformation between the virtual camera and the calibration pattern could be computed first and the extrinsic parameters between the central catadioptric camera and the calibration pattern could be obtained. Three-dimensional reconstruction results of the calibration pattern show a high accuracy and validate the feasibility of our method.

  17. SPEIR: A Ge Compton Camera

    SciTech Connect

    Mihailescu, L; Vetter, K M; Burks, M T; Hull, E L; Craig, W W

    2004-02-11

    The SPEctroscopic Imager for γ-Rays (SPEIR) is a new concept of a compact γ-ray imaging system of high efficiency and spectroscopic resolution with a 4π field-of-view. The system behind this concept employs double-sided segmented planar Ge detectors accompanied by the use of list-mode photon reconstruction methods to create a sensitive, compact Compton scatter camera.

  18. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, E_N > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.

  19. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  20. Phase-Space Reconstruction: a Path Towards the Next Generation of Nonlinear Differential Equation Based Models and Its Implications Towards Non-Uniform Sampling Theory

    SciTech Connect

    Charles R. Tolle; Mark Pengitore

    2009-08-01

    This paper explores the overlaps between the Control community's work on System Identification (SysID) and the Physics, Mathematics, Chaos, and Complexity communities' work on phase-space reconstruction via time-delay embedding. There are numerous overlaps between the goals of each community. Nevertheless, the Controls community can gain new insight as well as some very powerful new tools for SysID from the latest developments within the Physics, Mathematics, Chaos, and Complexity communities. These insights are gained via the work on phase-space reconstruction of nonlinear dynamics. New methods for discovering nonlinear differential equations that evolved from embedding operations can shed new light on hybrid-systems theory, Nyquist-Shannon theory, and network-based control theory. This paper strives to guide the Controls community toward a closer inspection of the tools and additional insights being developed within the Physics, Mathematics, Chaos, and Complexity communities for discovery of system dynamics, the first step in control system development. The paper introduces the concepts of phase-space reconstruction via time-delay embedding (made famous by the theorems of Whitney, Takens, and Sauer), integrate-and-fire embedding, and nonlinear differential equation discovery based on Perona's method.
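
    A minimal sketch of the time-delay embedding the paper introduces: a scalar observable is mapped to vectors of lagged copies of itself. The dimension and lag below are arbitrary choices; in practice they are selected with tools such as false nearest neighbors and average mutual information.

        import numpy as np

        def delay_embed(x, dim, tau):
            # Embed scalar series x into `dim` dimensions with lag `tau` samples.
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        # Reconstruct the phase space of a noisy oscillation from one observable.
        t = np.linspace(0, 20 * np.pi, 2000)
        x = np.sin(t) + 0.05 * np.random.randn(t.size)
        X = delay_embed(x, dim=3, tau=25)  # rows are points on the reconstructed attractor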

  1. Moving Human Path Tracking Based on Video Surveillance in 3D Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrated video surveillance data with a 3D indoor model of the building and developed a single-person path-tracking method. We process the surveillance videos to detect individual movement traces; we then match the depth information of the 3D scenes to the constructed 3D indoor network model and locate the human traces in the 3D indoor space. Finally, the traces of a single person extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. The provided experiments with a single person have verified the effectiveness and robustness of the method.

  2. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only the raw images are available. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem; for room-temperature cameras, however, the technique needs adjustment. This article describes the adjustment made to the equation and a test of this method.
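
    The article's exact equation is not reproduced in this record; a common form of the dark-corrected photon-transfer gain estimate, using two flat frames and two dark frames so that differencing cancels fixed-pattern noise, is sketched below under those assumptions:

        import numpy as np

        def camera_gain(flat1, flat2, dark1, dark2):
            # flat1, flat2: two frames under identical uniform illumination;
            # dark1, dark2: two shutter-closed frames at the same exposure.
            f1, f2 = flat1.astype(float), flat2.astype(float)
            d1, d2 = dark1.astype(float), dark2.astype(float)
            # Subtract the dark statistics to remove the dark-current contribution
            # to both the mean signal and the variance.
            mu_signal = 0.5 * (f1.mean() + f2.mean()) - 0.5 * (d1.mean() + d2.mean())
            var_flat = np.var(f1 - f2) / 2.0  # frame differencing cancels fixed pattern
            var_dark = np.var(d1 - d2) / 2.0
            return mu_signal / (var_flat - var_dark)  # gain in e-/DN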

  3. Depth Estimation Using a Sliding Camera.

    PubMed

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works satisfyingly with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously changing camera pose and substantially reduces computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm.

  4. Robust 4 Camera 3D Synthetic Aperture PIV

    NASA Astrophysics Data System (ADS)

    Bajpayee, Abhishek; Techet, Alexandra

    2016-11-01

    We present novel processing techniques which allow for robust 4 camera 3D synthetic aperture (SA) PIV. These pre and post processing techniques, applied to raw images and reconstructed volumes, significantly improve SA reconstruction SNR values and consequently allow for accurate SAPIV velocity fields. SA, or light field, PIV has typically required 8 or 9 cameras in order to achieve high reconstruction quality and velocity field reconstruction quality values, Q and Qv respectively. This is primarily because the effective signal to noise ratio (SNR) of refocused images, when using traditional multiplicative or additive refocusing techniques, increases with the number of cameras being used. However, tomographic reconstruction (used with TomoPIV), is able to achieve relatively high SNR reconstructions using 4 or 5 cameras owing to its iterative but significantly more computationally expensive algorithm. Our processing techniques facilitate better recovery of relevant information in SA reconstructions using only 4 views. As a result, we no longer have to trade setup cost and complexity (number of cameras) for computational speed of the reconstruction algorithm.

  5. Action selection for single-camera SLAM.

    PubMed

    Vidal-Calleja, Teresa A; Sanfeliu, Alberto; Andrade-Cetto, Juan

    2010-12-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making ill-conditioned measurements, which arise from the lack of depth information in monocular vision systems. Our system prompts a user with appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionally, the system has been ported to a mobile robotic platform, thus closing the control-estimation loop. To show the viability of the approach, simulations and experiments are presented for the unconstrained motion of a handheld camera and for the motion of a mobile robot with nonholonomic constraints. When combined with a path planner, the technique safely drives the robot to a marked goal while, at the same time, producing an optimal estimated map.
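
    For Gaussian beliefs, the mutual information between the state and a linearized measurement reduces to a log-determinant comparison, which is one way such action scoring can be realized; the Jacobians and covariances below are hypothetical, not the paper's models:

        import numpy as np

        def info_gain(P_prior, H, R):
            # I(x; z) = 0.5*(log det S - log det R) for z = Hx + v, x ~ N(., P_prior),
            # with innovation covariance S = H P H^T + R.
            S = H @ P_prior @ H.T + R
            return 0.5 * (np.linalg.slogdet(S)[1] - np.linalg.slogdet(R)[1])

        # Score two hypothetical camera motions by the measurement Jacobian each induces.
        P = np.diag([1.0, 1.0, 4.0])               # uncertain depth (3rd state)
        H_forward = np.array([[1.0, 0.0, 0.01]])   # forward motion: little depth info
        H_lateral = np.array([[1.0, 0.0, 0.8]])    # lateral motion: strong parallax
        R = np.array([[0.1]])
        print(info_gain(P, H_forward, R), info_gain(P, H_lateral, R))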

  6. Path Pascal

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Kolstad, R. B.; Holle, D. F.; Miller, T. J.; Krause, P.; Horton, K.; Macke, T.

    1983-01-01

    Path Pascal is a high-level experimental programming language based on PASCAL, which incorporates extensions for systems and real-time programming. PASCAL is extended to treat real-time concurrent systems.

  7. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  8. Novel fundus camera design

    NASA Astrophysics Data System (ADS)

    Dehoog, Edward A.

    A fundus camera is a complex optical system that makes use of the principle of reflex-free indirect ophthalmoscopy to image the retina. Despite being in existence since the early 1900s, little has changed in the design of the fundus camera, and there is minimal information about the design principles utilized. Parameters and specifications involved in the design of a fundus camera are determined and their effect on system performance is discussed. Fundus cameras incorporating different design methods are modeled, and a performance evaluation based on design parameters is used to determine the effectiveness of each design strategy. By determining the design principles involved in the fundus camera, new cameras can be designed to include specific imaging modalities such as optical coherence tomography, imaging spectroscopy and imaging polarimetry to gather additional information about the properties and structure of the retina. Design principles utilized to incorporate such modalities into fundus camera systems are discussed. Design, implementation and testing of a snapshot polarimeter fundus camera are demonstrated.

  9. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  10. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  11. Those Nifty Digital Cameras!

    ERIC Educational Resources Information Center

    Ekhaml, Leticia

    1996-01-01

    Describes digital photography--an electronic imaging technology that merges computer capabilities with traditional photography--and its uses in education. Discusses how a filmless camera works, types of filmless cameras, advantages and disadvantages, and educational applications of the consumer digital cameras. (AEF)

  12. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  13. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  14. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  15. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  16. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, as well as to give some indication of present weather. Similarly, during springtime, the camera images show the changes in ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.

  17. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully, to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.

  18. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, as in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics is truly portable, fitting in a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes is presented. The development of the camera system is part of a telemedicine project on screening for ROP. Telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: the portability as well as the digital image.

  19. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  20. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 KeV x-rays.

  1. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  2. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1984-09-28

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (uv to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  3. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1989-03-21

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras is disclosed. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1,000 KeV x-rays. 3 figs.

  4. An Exploratory [Silver] Path to Interagency Reconstruction

    DTIC Science & Technology

    2011-05-31

    adequate and functioning, provides opportunity for both individuals and the government to get beyond basic survival. For example, dependable power... essential services beyond the bare minimum to support humanitarian survival concerns. Second, the operational concept and its details can serve as a... with technology, such as an iPod Touch-like device that has an interactive app to not only walk a young combat engineer through a water treatment

  5. Analytical multicollimator camera calibration

    USGS Publications Warehouse

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  6. Streak camera meeting summary

    SciTech Connect

    Dolan, Daniel H.; Bliss, David E.

    2014-09-01

    Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.

  7. Discrete algebraic reconstruction technique: a new approach for superresolution reconstruction of license plates

    NASA Astrophysics Data System (ADS)

    Zarei Zefreh, Karim; van Aarle, Wim; Batenburg, K. Joost; Sijbers, Jan

    2013-10-01

    A new superresolution algorithm is proposed to reconstruct a high-resolution license plate image from a set of low-resolution camera images. The reconstruction methodology is based on the discrete algebraic reconstruction technique (DART), a recently developed reconstruction method. While DART has already been successfully applied in tomographic imaging, it has not yet been transferred to the field of camera imaging. DART is introduced for camera imaging through a demonstration of how prior knowledge of the colors of the license plate can be directly exploited during the reconstruction of a high-resolution image from a set of low-resolution images. Simulation experiments show that DART can reconstruct images with superior quality compared to conventional reconstruction methods.

  8. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  9. Digital Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, Samuel D.; Yeates, Herbert D.

    1993-01-01

    Digital electronic still camera part of electronic recording, processing, transmitting, and displaying system. Removable hard-disk drive in camera serves as digital electronic equivalent of photographic film. Images viewed, analyzed, or transmitted quickly. Camera takes images of nearly photographic quality and stores them in digital form. Portable, hand-held, battery-powered unit designed for scientific use. Camera used in conjunction with playback unit also serving as transmitting unit if images sent to remote station. Remote station equipped to store, process, and display images. Digital image data encoded with error-correcting code at playback/transmitting unit for error-free transmission to remote station.

  10. LSST Camera Optics Design

    SciTech Connect

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  11. Digital camera simulation.

    PubMed

    Farrell, Joyce E; Catrysse, Peter B; Wandell, Brian A

    2012-02-01

    We describe a simulation of the complete image processing pipeline of a digital camera, beginning with a radiometric description of the scene captured by the camera and ending with a radiometric description of the image rendered on a display. We show that there is a good correspondence between measured and simulated sensor performance. Through the use of simulation, we can quantify the effects of individual digital camera components on system performance and image quality. This computational approach can be helpful for both camera design and image quality assessment.

  12. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  13. Opportunity's Path

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This Long Term Planning graphic was created from a mosaic of navigation camera images overlain by a polar coordinate grid with the center point as Opportunity's original landing site. The blue dots represent the rover position at various locations.

    The red dots represent the center points of the target areas for the instruments on the rover mast (the panoramic camera and miniature thermal emission spectrometer). Opportunity visited Stone Mountain on Feb. 5. Stone Mountain was named after the southernmost point of the Appalachian Mountains outside of Atlanta, Ga. On Earth, Stone Mountain is the last big mountain before the Piedmont flatlands, and on Mars, Stone Mountain is at one end of Opportunity Ledge. El Capitan is a target of interest on Mars named after the second highest peak in Texas in Guadalupe National Park, which is one of the most visited outcrops in the United States by geologists. It has been a training ground for students and professional geologists to understand what the layering means in relation to the formation of Earth, and scientists will study this prominent point of Opportunity Ledge to understand what the layering means on Mars.

    The yellow lines show the midpoint where the panoramic camera has swept and will sweep a 120-degree area from the three waypoints on the tour of the outcrop. Imagine a fan-shaped wedge from left to right of the yellow line.

    The white contour lines are one meter apart, and each drive has been roughly about 2-3 meters in length over the last few sols. The large white blocks are dropouts in the navigation camera data.

    Opportunity is driving along and taking a photographic panorama of the entire outcrop. Scientists will stitch together these images and use the new mosaic as a 'base map' to decide on geology targets of interest for a more detailed study of the outcrop using the instruments on the robotic arm. Once scientists choose their targets of interest, they plan to study the outcrop for roughly five to

  14. Spectral characterization of an ophthalmic fundus camera

    NASA Astrophysics Data System (ADS)

    Miller, Clayton T.; Bassi, Carl J.; Brodsky, Dale; Holmes, Timothy

    2010-02-01

    A fundus camera is an optical system designed to illuminate and image the retina while minimizing stray light and backreflections. Modifying such a device requires characterization of the optical path in order to meet the new design goals and avoid introducing problems. This work describes the characterization of one system, the Topcon TRC-50F, necessary for converting this camera from film photography to spectral imaging with a CCD. This conversion consists of replacing the camera's original xenon flash tube with a monochromatic light source and the film back with a CCD. A critical preliminary step of this modification is determining the spectral throughput of the system, from source to sensor, and ensuring there are sufficient photons at the sensor for imaging. This was done for our system by first measuring the transmission efficiencies of the camera's illumination and imaging optical paths with a spectrophotometer. Combining these results with existing knowledge of the eye's reflectance, a relative sensitivity profile is developed for the system. Image measurements from a volunteer were then made using a few narrowband sources of known power and a calibrated CCD. With these data, a relationship between photoelectrons/pixel collected at the CCD and narrowband illumination source power is developed.
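
    The throughput chain described above multiplies through: source power, illumination-path transmission, fundus reflectance, imaging-path transmission, and CCD quantum efficiency. A sketch with hypothetical tabulated spectra (not the Topcon TRC-50F measurements) converts the result to photoelectrons per second:

        import numpy as np

        h, c = 6.626e-34, 2.998e8  # Planck constant (J s), speed of light (m/s)

        # Hypothetical spectra tabulated on a common wavelength grid (metres).
        wl        = np.linspace(450e-9, 700e-9, 6)
        source_W  = np.array([1e-6, 2e-6, 3e-6, 3e-6, 2e-6, 1e-6])  # power per band
        T_illum   = np.array([0.30, 0.40, 0.45, 0.50, 0.50, 0.45])  # illumination path
        R_eye     = np.array([0.01, 0.02, 0.04, 0.08, 0.12, 0.15])  # fundus reflectance
        T_imaging = np.array([0.35, 0.45, 0.50, 0.55, 0.55, 0.50])  # imaging path
        QE        = np.array([0.50, 0.60, 0.65, 0.60, 0.50, 0.40])  # CCD quantum eff.

        power_at_ccd = source_W * T_illum * R_eye * T_imaging  # W per band
        photons_per_s = power_at_ccd * wl / (h * c)            # photon energy E = hc/lambda
        electrons_per_s = photons_per_s * QE
        print(electrons_per_s.sum())  # compare against full-well and noise floor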

  15. Global Calibration of Multiple Cameras Based on Sphere Targets

    PubMed Central

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera system are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for the onsite multiple cameras without common field of view. PMID:26761007

  16. Object detection with single camera stereo

    NASA Astrophysics Data System (ADS)

    McBride, J.; Snorrason, M.; Eaton, R.; Checka, N.; Reiter, A.; Foil, G.; Stevens, M. R.

    2006-05-01

    Many fielded mobile robot systems have demonstrated the importance of directly estimating the 3D shape of objects in the robot's vicinity. The most mature solutions available today use active laser scanning or stereo camera pairs, but both approaches require specialized and expensive sensors. In prior publications, we have demonstrated the generation of stereo images from a single very low-cost camera using structure from motion (SFM) techniques. In this paper we demonstrate the practical usage of single-camera stereo in real-world mobile robot applications. Stereo imagery tends to produce incomplete 3D shape reconstructions of man-made objects because of smooth/glary regions that defeat stereo matching algorithms. We demonstrate robust object detection despite such incompleteness through matching of simple parameterized geometric models. Results are presented where parked cars are detected, and then recognized via license plate recognition, all in real time by a robot traveling through a parking lot.

  17. Mechanical Design of the LSST Camera

    SciTech Connect

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; Ku, John; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  18. Ice and thermal cameras for stream flow observations

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Petroselli, Andrea; Grimaldi, Salvatore

    2016-04-01

    Flow measurements are instrumental to establish discharge rating curves and to enable flood risk forecast. Further, they are crucial to study erosion dynamics and to comprehend the organization of drainage networks in natural catchments. Flow observations are typically executed with intrusive instrumentation, such as current meters or acoustic devices. Alternatively, non-intrusive instruments, such as radars and microwave sensors, are applied to estimate surface velocity. Both approaches enable flow measurements over areas of limited extent, and their implementation can be costly. Optical methods, such as large scale particle image velocimetry, have proved beneficial for non-intrusive and spatially-distributed environmental monitoring. In this work, a novel optical-based approach is utilized for surface flow velocity observations based on the combined use of a thermal camera and ice dices. Unlike RGB imagery, thermal images are relatively unaffected by illumination conditions and water reflections. Therefore, such high-quality images allow tracers to be readily identified and tracked against the background. Further, the optimal environmental compatibility of ice dices and their relative ease of preparation and storage suggest that the technique can be easily implemented to rapidly characterize surface flows. To demonstrate the validity of the approach, we present a set of experiments performed on the Brenta stream, Italy. In the experimental setup, the axis of the camera is maintained perpendicular to the water surface to circumvent image orthorectification through ground reference points. Small amounts of ice dices are deployed onto the stream water surface during image acquisition. Particle tracers' trajectories are reconstructed off-line by analyzing thermal images with a particle tracking velocimetry (PTV) algorithm. Given the optimal visibility of the tracers and their low seeding density, PTV allows for efficiently following tracers' paths in
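
    A hedged sketch of the off-line PTV stage described above: tracer centroids are segmented in each thermal frame (ice is cold, so tracers appear as low-intensity blobs against the warmer stream) and matched frame-to-frame by nearest neighbour. The threshold and the greedy matcher are illustrative assumptions, not the authors' algorithm.

        # Basic nearest-neighbour PTV step on thermal frames (illustrative).
        import numpy as np
        from scipy import ndimage

        def detect_tracers(frame, threshold):
            """Centroids of connected regions colder than the threshold."""
            mask = frame < threshold
            labels, n = ndimage.label(mask)
            return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))

        def match(prev_pts, next_pts, max_disp):
            """Greedy nearest-neighbour association between two centroid sets."""
            pairs = []
            if len(next_pts) == 0:
                return pairs
            for i, p in enumerate(prev_pts):
                d = np.linalg.norm(next_pts - p, axis=1)
                j = int(np.argmin(d))
                if d[j] < max_disp:
                    pairs.append((i, j))
            return pairs  # surface velocity = displacement / frame interval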

  19. CCD Luminescence Camera

    NASA Technical Reports Server (NTRS)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronics devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed where luminescence is typically found.

  20. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  1. Compact Solar Camera.

    ERIC Educational Resources Information Center

    Juergens, Albert

    1980-01-01

    Describes a compact solar camera built as a one-semester student project. This camera is used for taking pictures of the sun and moon and for direct observation of the image of the sun on a screen. (Author/HM)

  2. The DSLR Camera

    NASA Astrophysics Data System (ADS)

    Berkó, Ernő; Argyle, R. W.

    Cameras have developed significantly in the past decade; in particular, digital Single-Lens Reflex Cameras (DSLR) have appeared. As a consequence we can buy cameras of higher and higher pixel number, and mass production has resulted in the great reduction of prices. CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allows images to be taken with much less noise. The software background is developing in a similar way—intelligent programs are created for after-processing and other supplementary works. Nowadays we can find a digital camera in almost every household, most of these cameras are DSLR ones. These can be used very well for astronomical imaging, which is nicely demonstrated by the amount and quality of the spectacular astrophotos appearing in different publications. These examples also show how much post-processing software contributes to the rise in the standard of the pictures. To sum up, the DSLR camera serves as a cheap alternative for the CCD camera, with somewhat weaker technical characteristics. In the following, I will introduce how we can measure the main parameters (position angle and separation) of double stars, based on the methods, software and equipment I use. Others can easily apply these for their own circumstances.

  3. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  4. Speckle-interferometric camera for displacement measurements

    NASA Astrophysics Data System (ADS)

    Klumpp, P. A.; Schnack, E.

    1990-12-01

    A simple setup of standard optical elements comparable to a shearographic camera can be used to record speckle interferograms with a fast lens. Rigid-body translations of the object are compensated for by a reference mirror attached to the object; the method requires only moderate stability and resolution of the storage medium (film). Interferogram reconstruction is possible with white light. Hence the method unites advantages of different holographic and speckle-interferometric setups.

  5. Structured light optical microscopy for three-dimensional reconstruction of technical surfaces

    NASA Astrophysics Data System (ADS)

    Kettel, Johannes; Reinecke, Holger; Müller, Claas

    2016-04-01

    In microsystems technology, quality control of microstructured surfaces with different surface properties is playing an ever more important role. The process of quality control incorporates three-dimensional (3D) reconstruction of specular- and diffusive-reflecting technical surfaces. Due to the demand for high measurement accuracy and data acquisition rates, structured light optical microscopy has become a valuable solution to this problem, providing high vertical and lateral resolution. However, 3D reconstruction of specular-reflecting technical surfaces still remains a challenge for optical measurement principles. In this paper we present a measurement principle based on structured light optical microscopy which enables 3D reconstruction of specular- and diffusive-reflecting technical surfaces. It is realized using the two light paths of a stereo microscope equipped with different magnification levels. The right optical path of the stereo microscope is used to project structured light onto the object surface. The left optical path is used to capture the structured illuminated object surface with a camera. Structured light patterns are generated by a Digital Light Processing (DLP) device in combination with a high-power Light Emitting Diode (LED). Structured light patterns are realized as a matrix of discrete light spots to illuminate defined areas on the object surface. The introduced measurement principle is based on multiple, parallel-processed point measurements. Analysis of the measured Point Spread Function (PSF) by pattern recognition and model fitting algorithms enables the precise calculation of 3D coordinates. Using exemplary technical surfaces we demonstrate the successful application of our measurement principle.

  6. Dry imaging cameras.

    PubMed

    Indrajit, Ik; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-04-01

    Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences such as computing, mechanics, thermal science, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technologies. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  7. Radiometric calibration for MWIR cameras

    NASA Astrophysics Data System (ADS)

    Yang, Hyunjin; Chun, Joohwan; Seo, Doo Chun; Yang, Jiyeon

    2012-06-01

    Korean Multi-purpose Satellite-3A (KOMPSAT-3A), which weighs about 1,000 kg, is scheduled to be launched in 2013 and will be located at a sun-synchronous orbit (SSO) of 530 km in altitude. This is Korea's first satellite to orbit with a mid-wave infrared (MWIR) image sensor, which is currently being developed at Korea Aerospace Research Institute (KARI). The missions envisioned include forest fire surveillance, measurement of the ocean surface temperature, national defense and crop harvest estimation. In this paper, we explain the MWIR scene generation software and atmospheric compensation techniques for the infrared (IR) camera that we are currently developing. The MWIR scene generation software we have developed takes into account sky thermal emission, path emission, target emission, sky solar scattering and ground reflection, based on MODTRAN data. This software will be used for generating the radiation image seen by the satellite camera, which requires an atmospheric compensation algorithm, and for validating the accuracy of the temperature obtained in our result. The image visibility restoration algorithm is a method for removing the effect of the atmosphere between the camera and an object. This algorithm works between the satellite and the Earth, to predict the object temperature corrupted by the Earth's atmosphere and solar radiation. Commonly, to compensate for the atmospheric effect, software such as MODTRAN is used to model the atmosphere. Our algorithm does not require additional software to obtain the surface temperature; however, the visibility restoration parameters need to be adjusted, and the precision of the result still needs to be studied.
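
    As a worked illustration of single-band atmospheric compensation (my simplification, not KARI's algorithm), the sketch below assumes the common model L_sensor = tau * emissivity * B(T) + L_path and inverts the Planck function for the surface temperature. All parameter values are assumed.

        # Single-band atmospheric compensation via Planck inversion (illustrative).
        import numpy as np

        H, C, KB = 6.626e-34, 3.0e8, 1.381e-23

        def planck(T, wl):
            """Spectral radiance B(T) at wavelength wl (m), W m^-3 sr^-1."""
            return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1)

        def planck_inv(L, wl):
            """Brightness temperature from spectral radiance at wavelength wl."""
            return H * C / (wl * KB * np.log(2 * H * C**2 / (wl**5 * L) + 1))

        wl = 4.0e-6                                           # mid-wave IR, 4 um
        tau, eps = 0.8, 0.95                                  # assumed transmission, emissivity
        L_path = 0.1 * planck(280.0, wl)                      # assumed path radiance

        L_sensor = tau * eps * planck(300.0, wl) + L_path     # simulated measurement
        L_target = (L_sensor - L_path) / (tau * eps)          # compensation step
        print(planck_inv(L_target, wl))                       # recovers ~300 K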

  8. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  9. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  10. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  11. The Mars observer camera

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Soulanille, T.; Ravine, M.

    1987-01-01

    A camera designed to operate under the extreme constraints of the Mars Observer Mission was selected by NASA in April, 1986. Contingent upon final confirmation in mid-November, the Mars Observer Camera (MOC) will begin acquiring images of the surface and atmosphere of Mars in September-October 1991. The MOC incorporates both a wide angle system for low resolution global monitoring and intermediate resolution regional targeting, and a narrow angle system for high resolution selective surveys. Camera electronics provide control of image clocking and on-board, internal editing and buffering to match whatever spacecraft data system capabilities are allocated to the experiment. The objectives of the MOC experiment follow.

  12. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  13. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  14. Do speed cameras reduce collisions?

    PubMed

    Skubic, Jeffrey; Johnson, Steven B; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods - before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions.

  15. A distributed topological camera network representation for tracking applications.

    PubMed

    Lobaton, Edgar; Vasudevan, Ramanarayan; Bajcsy, Ruzena; Sastry, Shankar

    2010-10-01

    Sensor networks have been widely used for surveillance, monitoring, and tracking. Camera networks, in particular, provide a large amount of information that has traditionally been processed in a centralized manner employing a priori knowledge of camera location and of the physical layout of the environment. Unfortunately, these conventional requirements are far too demanding for ad-hoc distributed networks. In this article, we present a simplicial representation of a camera network called the camera network complex (CN-complex) that accurately captures topological information about the visual coverage of the network. This representation provides a coordinate-free calibration of the sensor network and demands no localization of the cameras or objects in the environment. A distributed, robust algorithm, validated via two experimental setups, is presented for the construction of the representation using only binary detection information. We demonstrate the utility of this representation in capturing holes in the coverage, performing tracking of agents, and identifying homotopic paths.

  16. Advanced CCD camera developments

    SciTech Connect

    Condor, A.

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  17. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  18. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)

  19. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  20. Neutron cameras for ITER

    SciTech Connect

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  1. Tomographic reconstruction of an aerosol plume using passive multiangle observations from the MISR satellite instrument

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Davis, Anthony B.; Diner, David J.

    2016-12-01

    We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.
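
    A minimal sketch of the algebraic-reconstruction idea: each camera ray contributes one line integral of extinction through the grid, and Kaczmarz/ART iterations solve the resulting linear system. The random projection matrix below is a toy stand-in for the real MISR camera-ray geometry.

        # Algebraic reconstruction (Kaczmarz/ART) on a toy extinction grid.
        import numpy as np

        def art(A, b, n_iter=200, relax=1.0):
            """Kaczmarz sweeps for A x = b, clamping extinction to be non-negative."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    a = A[i]
                    x += relax * (b[i] - a @ x) / (a @ a) * a
                    x = np.maximum(x, 0.0)
            return x

        rng = np.random.default_rng(0)
        x_true = rng.random(16)           # extinction in a 4x4 slice, flattened
        A = rng.random((40, 16))          # 40 rays crossing the slice (toy geometry)
        b = A @ x_true                    # per-ray optical thickness measurements
        print(np.abs(art(A, b) - x_true).max())   # should be close to zero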

  2. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  3. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude, with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking in a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. A low-volume array of such penetrator cameras could be deployed from an

  4. The VISTA IR camera

    NASA Astrophysics Data System (ADS)

    Dalton, Gavin B.; Caldwell, Martin; Ward, Kim; Whalley, Martin S.; Burke, Kevin; Lucas, John M.; Richards, Tony; Ferlet, Marc; Edeson, Ruben L.; Tye, Daniel; Shaughnessy, Bryan M.; Strachan, Mel; Atad-Ettedgui, Eli; Leclerc, Melanie R.; Gallie, Angus; Bezawada, Nagaraja N.; Clark, Paul; Bissonauth, Nirmal; Luke, Peter; Dipper, Nigel A.; Berry, Paul; Sutherland, Will; Emerson, Jim

    2004-09-01

    The VISTA IR Camera has now completed its detailed design phase and is on schedule for delivery to ESO's Cerro Paranal Observatory in 2006. The camera consists of 16 Raytheon VIRGO 2048x2048 HgCdTe arrays in a sparse focal plane sampling a 1.65 degree field of view. A 1.4m diameter filter wheel provides slots for 7 distinct science filters, each comprising 16 individual filter panes. The camera also provides autoguiding and curvature sensing information for the VISTA telescope, and relies on tight tolerancing to meet the demanding requirements of the f/1 telescope design. The VISTA IR camera is unusual in that it contains no cold pupil-stop, but rather relies on a series of nested cold baffles to constrain the light reaching the focal plane to the science beam. In this paper we present a complete overview of the status of the final IR Camera design, its interaction with the VISTA telescope, and a summary of the predicted performance of the system.

  5. THE DARK ENERGY CAMERA

    SciTech Connect

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J.; Honscheid, K.; Abbott, T. M. C.; Bonati, M.; Antonik, M.; Brooks, D.; Ballester, O.; Cardiel-Sas, L.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Boprie, D.; Campa, J.; Castander, F. J.; Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  6. The Dark Energy Camera

    SciTech Connect

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  7. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  8. The Dark Energy Camera

    DOE PAGES

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
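
    As a quick consistency check of the numbers quoted above (mine, not from the paper), the pixel pitch and plate scale imply the effective focal length of the Blanco plus corrector via scale[arcsec/px] = 206265 * pitch / f:

        # Plate scale, pixel pitch and effective focal length are tied together.
        pitch = 15e-6                    # m, DECam pixel pitch
        scale = 0.263                    # arcsec per pixel
        f = 206265 * pitch / scale       # effective focal length
        print(f)                         # ~11.8 m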

  9. Neutron counting with cameras

    SciTech Connect

    Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo

    2015-07-01

    A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras, as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology, which have resulted in increased quantum efficiency, lower noise, and frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, thus allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras does not allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)
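
    A hedged sketch of the low-rate counting mode described above: with a few MeV deposited per conversion, individual neutron events stand well above the camera noise, so a frame can be thresholded and connected bright spots counted. The frame data and threshold below are synthetic placeholders.

        # Counting individual neutron events in a camera frame (synthetic data).
        import numpy as np
        from scipy import ndimage

        def count_neutrons(frame, threshold):
            """Count connected bright spots above threshold; each spot ~ one event."""
            mask = frame > threshold
            _, n_events = ndimage.label(mask)
            return n_events

        rng = np.random.default_rng(1)
        frame = rng.normal(100.0, 5.0, (256, 256))     # read-noise background
        frame[50:53, 60:63] += 400.0                   # one bright conversion event
        frame[200:202, 30:32] += 350.0                 # another event
        print(count_neutrons(frame, 150.0))            # -> 2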

  10. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

  11. Cloud photogrammetry with dense stereo for fisheye cameras

    NASA Astrophysics Data System (ADS)

    Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens

    2016-11-01

    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows the recovery of a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
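
    The dense-matching stage alone might look like the sketch below, assuming the fisheye pair has already been epipolar-rectified into a left/right image pair; OpenCV's semi-global matcher stands in for whichever out-of-the-box algorithm is used, and the file names, baseline and focal length are placeholder assumptions.

        # Dense matching on an already-rectified stereo pair (illustrative).
        import cv2
        import numpy as np

        left = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
        right = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point

        # With baseline b and rectified focal length f (assumed values), depth
        # follows from disparity as Z = b * f / d.
        b, f = 0.5, 800.0                 # metres, pixels
        valid = disparity > 0
        depth = np.where(valid, b * f / np.maximum(disparity, 1e-6), 0.0)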

  12. Synchronizing A Television Camera With An External Reference

    NASA Technical Reports Server (NTRS)

    Rentsch, Edward M.

    1993-01-01

    Improvement in genlock subsystem consists in incorporation of controllable delay circuit into path of composite synchronization signal obtained from external video source. Delay circuit helps to eliminate potential jitter in video display and ensures setup requirements for digital timing circuits of video camera are satisfied.

  13. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
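
    Worked numbers for the FMCW ranging principle mentioned above (the sweep time is my assumption; only the 8-12 GHz band is stated): a target at range R returns a beat frequency f_b = 2*R*B/(c*T) for sweep bandwidth B over sweep time T, and the bandwidth limits the range resolution to c/(2*B).

        # FMCW beat-frequency-to-range arithmetic (sweep time assumed).
        c = 3.0e8
        B = 4.0e9             # 8-12 GHz sweep bandwidth
        T = 1.0e-3            # assumed sweep time, not stated in the abstract

        range_resolution = c / (2 * B)           # = 3.75 cm
        f_beat = lambda R: 2 * R * B / (c * T)   # beat frequency for range R
        print(range_resolution, f_beat(3.0))     # 3 m target -> 80 kHz beat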

  14. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is possible.

  15. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  16. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  17. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  18. Artificial human vision camera

    NASA Astrophysics Data System (ADS)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human vision system. Our purpose is to draw inspiration from human vision bio-mechanics to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the bio-mechanical discrepancies between human vision and classic cameras, and the retinal processing stage that takes place in the eye before the optic nerve. The second part describes our implementation of these principles in a 3-camera optical, mechanical and software model of the human eyes and the associated bio-inspired attention model.

  19. Laser Range Camera Modeling

    SciTech Connect

    Storjohann, K.

    1990-01-01

    This paper describes an imaging model that was derived for use with a laser range camera (LRC) developed by the Advanced Intelligent Machines Division of Odetics. However, this model could be applied to any comparable imaging system. Both the derivation of the model and the determination of the LRC's intrinsic parameters are explained. For the purpose of evaluating the LRC's extrinsic parameters, i.e., its external orientation, a transformation of the LRC's imaging model into a standard camera's (SC) pinhole model is derived. By virtue of this transformation, the evaluation of the LRC's external orientation can be found by applying any SC calibration technique.
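
    A sketch of the kind of transformation described: a range image indexed by scan angles is lifted to 3-D points and re-projected through a standard pinhole model, after which any pinhole calibration technique applies. The angle ranges and intrinsics below are assumptions, not Odetics LRC parameters.

        # Range image (azimuth, elevation, range) -> 3-D points -> pinhole projection.
        import numpy as np

        def lrc_to_points(rng_img, az, el):
            """Spherical scan coordinates to Cartesian XYZ, shape (..., 3)."""
            A, E = np.meshgrid(az, el)
            x = rng_img * np.cos(E) * np.sin(A)
            y = rng_img * np.sin(E)
            z = rng_img * np.cos(E) * np.cos(A)
            return np.stack([x, y, z], axis=-1)

        def pinhole_project(pts, fx, fy, cx, cy):
            """Standard pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
            u = fx * pts[..., 0] / pts[..., 2] + cx
            v = fy * pts[..., 1] / pts[..., 2] + cy
            return u, v

        az = np.linspace(-0.3, 0.3, 128)       # rad, assumed scan range
        el = np.linspace(-0.2, 0.2, 96)
        rng_img = np.full((96, 128), 5.0)      # flat wall 5 m away
        u, v = pinhole_project(lrc_to_points(rng_img, az, el), 500, 500, 64, 48)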

  20. Calibration of Low Cost RGB and NIR Uav Cameras

    NASA Astrophysics Data System (ADS)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM), orthophotos, or in the assessment of accidents. Non-metric digital camcorders are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration tests and various software. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras and descriptions of different optical distortions. The second part of the paper describes the camera calibration process, with details of the calibration methods and models that have been used. Sony Nex 5 camera calibration has been done using the following software: Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been carried out.
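
    For comparison, the same interior orientation parameters (camera matrix plus distortion coefficients) can be recovered from a planar 2D test field with OpenCV; this is a generic chessboard workflow, not the specific procedure used with Image Master Calib, Matlab or Agisoft Lens. The file names and the 9x6 pattern are placeholders.

        # Generic chessboard camera calibration with OpenCV.
        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                  # inner corner grid
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_pts, img_pts, size = [], [], None
        for path in glob.glob("calib_*.jpg"):             # assumes images exist
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]

        # Returns RMS reprojection error, camera matrix K and distortion
        # coefficients (k1, k2, p1, p2, k3), i.e. the interior orientation.
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        print(rms, K, dist)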

  1. Aquatic Debris Detection Using Embedded Camera Sensors

    PubMed Central

    Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu

    2015-01-01

    Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is specifically designed based on compressive sensing theory to give full consideration to the unique challenges in aquatic environments, such as waves, swaying reflections, and tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. In addition, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741
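
    A compact sketch of the compressive-sensing transmission idea: the node sends a few random linear measurements y = Phi x and the host reconstructs the sparse signal. Orthogonal Matching Pursuit stands in here for the paper's (unspecified) sparse recovery algorithm; all sizes are illustrative.

        # Sparse recovery from a few random linear measurements (illustrative).
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n, m, k = 256, 80, 8                # signal length, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse "image"

        Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
        y = Phi @ x                                  # the only data transmitted

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
        # In this easy noiseless case OMP is expected to recover x exactly.
        print(np.allclose(omp.coef_, x, atol=1e-6))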

  2. Development of X-ray CCD camera based X-ray micro-CT system

    NASA Astrophysics Data System (ADS)

    Sarkar, Partha S.; Ray, N. K.; Pal, Manoj K.; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y.; Sinha, A.; Gadkari, S. C.

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotrons. In this paper, we present work towards the use of an external shutter on a high resolution microtomography system using an X-ray CCD camera as the detector. During micro computed tomography experiments, the X-ray source is continuously on, and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow-like pattern in the image known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been fitted with a synchronized shutter just in front of the X-ray source, which is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, which is reflected in the reconstructed images.

  3. Development of X-ray CCD camera based X-ray micro-CT system.

    PubMed

    Sarkar, Partha S; Ray, N K; Pal, Manoj K; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y; Sinha, A; Gadkari, S C

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotrons. In this paper, we present work towards the use of an external shutter on a high resolution microtomography system using an X-ray CCD camera as the detector. During micro computed tomography experiments, the X-ray source is continuously on, and owing to the readout mechanism of the CCD detector electronics, the detector registers photons reaching it during the read-out period too. This introduces a shadow-like pattern in the image known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been fitted with a synchronized shutter just in front of the X-ray source, which is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, which is reflected in the reconstructed images.

  4. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows an arbitrary placement of light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array configuration, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
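
    Worked relations behind the ToF option discussed above (the modulation frequency is my assumption; the abstract does not state one): depth follows from the measured phase shift as d = c*phi/(4*pi*f_mod), with an unambiguous range of c/(2*f_mod), and in water the speed of light is reduced by the refractive index.

        # ToF phase-to-depth arithmetic in water (illustrative values).
        import numpy as np

        c_water = 3.0e8 / 1.33     # light is slower in water (refractive index ~1.33)
        f_mod = 20e6               # 20 MHz LED modulation, an assumed figure

        unambiguous_range = c_water / (2 * f_mod)            # ~5.6 m in water
        depth = lambda phi: c_water * phi / (4 * np.pi * f_mod)
        print(unambiguous_range, depth(np.pi / 2))           # quarter cycle -> ~1.4 m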

  5. Camera calibration correction in shape from inconsistent silhouette

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  6. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. ?? 1984.

  7. Make a Pinhole Camera

    ERIC Educational Resources Information Center

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  8. Anger Camera Firmware

    SciTech Connect

    2010-11-19

    The firmware is responsible for the operation of the Anger Camera Electronics, calculation of position, time of flight, and digital communications. It provides a first-stage analysis of 48 analog signals that have been converted to digital values using A/D converters.

  9. Mars Observer camera

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the 'push broom' technique; that is, they do not take 'frames' but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope for taking extremely high resolution pictures of selected locations on Mars. Using the narrow-angle camera, areas ranging from 2.8 km x 2.8 km to 2.8 km x 25.2 km (depending on available internal digital buffer memory) can be photographed at about 1.4 m/pixel. Additionally, lower-resolution pictures (to a lowest resolution of about 11 m/pixel) can be acquired by pixel averaging; these images can be much longer, ranging up to 2.8 x 500 km at 11 m/pixel. High-resolution data will be used to study sediments and sedimentary processes, polar processes and deposits, volcanism, and other geologic/geomorphic processes.

  10. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  11. Spas color camera

    NASA Technical Reports Server (NTRS)

    Toffales, C.

    1983-01-01

    The procedures to be followed in assessing the performance of the MOS color camera are defined. Aspects considered include: horizontal and vertical resolution; value of the video signal; gray scale rendition; environmental (vibration and temperature) tests; signal to noise ratios; and white balance correction.

  12. Advanced Virgo phase cameras

    NASA Astrophysics Data System (ADS)

    van der Schaaf, L.; Agatsuma, K.; van Beuzekom, M.; Gebyehu, M.; van den Brand, J.

    2016-05-01

    A century after the prediction of gravitational waves, detectors have reached the sensitivity needed to prove their existence. One of them, the Virgo interferometer in Pisa, is presently being upgraded to Advanced Virgo (AdV) and will come into operation in 2016. The power stored in the interferometer arms rises from 20 to 700 kW. This increase is expected to introduce higher-order modes in the beam, which could reduce the circulating power in the interferometer, limiting the sensitivity of the instrument. To suppress these higher-order modes, the core optics of Advanced Virgo are equipped with a thermal compensation system. Phase cameras, which monitor the real-time status of the beam, constitute a critical component of this compensation system. These cameras measure the phases and amplitudes of the laser-light fields at the frequencies selected to control the interferometer. The measurement combines heterodyne detection with a scan of the wave front over a photodetector with a pin-hole aperture. Three cameras observe the phase front of these laser sidebands. Two of them monitor the input and output of the interferometer arms, and the third is used in the control of the aberrations introduced by the power recycling cavity. In this paper the working principle of the phase cameras is explained and some characteristic parameters are described.
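
    At each scan position, heterodyne detection reduces to demodulating the digitized photodetector signal at the chosen beat frequency. A minimal sketch follows, with illustrative variable names not taken from the paper (the phase sign depends on the reference convention):

    import numpy as np

    def demodulate(v, fs, f_beat):
        """Amplitude and phase of the beat note in record v sampled at fs Hz."""
        t = np.arange(v.size) / fs
        i = 2.0 * np.mean(v * np.cos(2 * np.pi * f_beat * t))  # in-phase
        q = 2.0 * np.mean(v * np.sin(2 * np.pi * f_beat * t))  # quadrature
        return np.hypot(i, q), np.arctan2(q, i)                # amplitude, phase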

  13. Communities, Cameras, and Conservation

    ERIC Educational Resources Information Center

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  14. The LSST Camera Overview

    SciTech Connect

    Gilmore, Kirk; Kahn, Steven A.; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe; /SLAC

    2007-01-10

    The LSST camera is a wide-field optical (0.35-1 um) imager designed to provide a 3.5 degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 Gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope, where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast, f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately -100 C to achieve the desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.

  15. Ultraminiature television camera

    NASA Technical Reports Server (NTRS)

    Deterville, R. J.; Drago, N.

    1967-01-01

    Ultraminiature television camera with a total volume of 20.25 cubic inches, requires 28 vdc power, operates on UHF and accommodates standard 8-mm optics. It uses microelectronic assembly packaging techniques and contains a magnetically deflected and electrostatically focused vidicon, automatic gain control circuit, power supply, and transmitter.

  16. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which enhances the search for stereo correspondences.
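
    Per pixel, the Kalman-like update described above reduces to inverse-variance weighting of compatible depth hypotheses. A minimal sketch (not the authors' code):

    def fuse_depth(d1, var1, d2, var2):
        """Merge two depth estimates (value, variance) into one."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        d = (w1 * d1 + w2 * d2) / (w1 + w2)   # inverse-variance weighted mean
        var = 1.0 / (w1 + w2)                 # fused variance always shrinks
        return d, var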

  17. Etalon Array Reconstructive Spectrometry

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2017-01-01

    Compact spectrometers are crucial in areas where size and weight may need to be minimized. These types of spectrometers often contain no moving parts, which makes for an instrument that can be highly durable. With the recent proliferation in low-cost and high-resolution cameras, camera-based spectrometry methods have the potential to make portable spectrometers small, ubiquitous, and cheap. Here, we demonstrate a novel method for compact spectrometry that uses an array of etalons to perform spectral encoding, and uses a reconstruction algorithm to recover the incident spectrum. This spectrometer has the unique capability for both high resolution and a large working bandwidth without sacrificing sensitivity, and we anticipate that its simplicity makes it an excellent candidate whenever a compact, robust, and flexible spectrometry solution is needed.

  18. Etalon Array Reconstructive Spectrometry

    PubMed Central

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2017-01-01

    Compact spectrometers are crucial in areas where size and weight may need to be minimized. These types of spectrometers often contain no moving parts, which makes for an instrument that can be highly durable. With the recent proliferation in low-cost and high-resolution cameras, camera-based spectrometry methods have the potential to make portable spectrometers small, ubiquitous, and cheap. Here, we demonstrate a novel method for compact spectrometry that uses an array of etalons to perform spectral encoding, and uses a reconstruction algorithm to recover the incident spectrum. This spectrometer has the unique capability for both high resolution and a large working bandwidth without sacrificing sensitivity, and we anticipate that its simplicity makes it an excellent candidate whenever a compact, robust, and flexible spectrometry solution is needed. PMID:28074883
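
    The reconstruction step can be illustrated as follows: if each row of a calibrated matrix T holds the transmission spectrum of one etalon and m holds the camera readings behind the etalons, the incident spectrum s satisfies m = T s. A simple non-negative least-squares recovery is sketched below; the paper's actual algorithm is not specified here:

    import numpy as np
    from scipy.optimize import nnls

    def recover_spectrum(T, m):
        """T: (n_etalons, n_wavelengths) calibration; m: (n_etalons,) readings."""
        s, residual = nnls(T, m)   # enforce physically meaningful s >= 0
        return s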

  19. The PAU Camera

    NASA Astrophysics Data System (ADS)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k pixels are needed. The pixels are square, of 15 μm size. The optical characteristics of the prime focus corrector deliver a field of view where eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The rest of the CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, hosting at most 16 such filters each. These are located inside the cryostat, a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  20. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  1. Do Speed Cameras Reduce Collisions?

    PubMed Central

    Skubic, Jeffrey; Johnson, Steven B.; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods – before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions. PMID:24406979

  2. Voice Controlled Stereographic Video Camera System

    NASA Astrophysics Data System (ADS)

    Goode, Georgianna D.; Philips, Michael L.

    1989-09-01

    For several years various companies have been developing voice recognition software. Yet, there are few applications of voice control in the robotics field and virtually no examples of voice controlled three dimensional (3-D) systems. In late 1987 ARD developed a highly specialized, voice controlled 3-D vision system for use in remotely controlled, non-tethered robotic applications. The system was designed as an operator's aid and incorporates features thought to be necessary or helpful in remotely maneuvering a vehicle. Foremost is the three dimensionality of the operator's console display. An image that provides normal depth perception cues over a range of depths greatly increases the ease with which an operator can drive a vehicle and investigate its environment. The availability of both vocal and manual control of all system functions allows the operator to guide the system according to his personal preferences. The camera platform can be panned +/-178 degrees and tilted +/-30 degrees for a full range of view of the vehicle's environment. The cameras can be zoomed and focused for close inspection of distant objects, while retaining substantial stereo effect by increasing the separation between the cameras. There is a ranging and measurement function, implemented through a graphical cursor, which allows the operator to mark objects in a scene to determine their relative positions. This feature will be helpful in plotting a driving path. The image seen on the screen is overlaid with icons and digital readouts which provide information about the position of the camera platform, the range to the graphical cursor and the measurement results. The cursor's "range" is actually the distance from the cameras to the object on which the cursor is resting. Other such features are included in the system and described in subsequent sections of this paper.

  3. Practical intraoperative stereo camera calibration.

    PubMed

    Pratt, Philip; Bergeles, Christos; Darzi, Ara; Yang, Guang-Zhong

    2014-01-01

    Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.

  4. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene presenting a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, with the principal point of each camera offset from the rotation axis of the device. This causes problems when stitching together the individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than photogrammetric purposes require, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  5. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  6. Photometric Lunar Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.
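
    For reference, one common form of the lunar-Lambert reflectance (a McEwen-style blend of Lommel-Seeliger and Lambert terms) can be written as a short function. This is offered as an illustration and is not necessarily the paper's exact parameterization:

    def lunar_lambert(albedo, mu0, mu, L):
        """mu0, mu: cosines of incidence/emission angles; L: phase-angle weight."""
        lommel_seeliger = 2.0 * mu0 / (mu0 + mu)   # dominates at low phase angles
        lambert = mu0                              # purely diffuse term
        return albedo * (L * lommel_seeliger + (1.0 - L) * lambert)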

  7. Versatility of the CFR (Constrained Fourier Reconstruction) algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1989-08-01

    The Constrained Fourier Reconstruction (CFR) algorithm and the Iterative Reconstruction-Reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant. 3 refs., 5 figs.

  8. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  9. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  10. LSST Camera Optics

    SciTech Connect

    Olivier, S S; Seppala, L; Gilmore, K; Hale, L; Whistler, W

    2006-06-05

    The Large Synoptic Survey Telescope (LSST) is a unique, three-mirror, modified Paul-Baker design with an 8.4m primary, a 3.4m secondary, and a 5.0m tertiary feeding a camera system that includes corrector optics to produce a 3.5 degree field of view with excellent image quality (<0.3 arcsecond 80% encircled diffracted energy) over the entire field from blue to near infra-red wavelengths. We describe the design of the LSST camera optics, consisting of three refractive lenses with diameters of 1.6m, 1.0m and 0.7m, along with a set of interchangeable, broad-band, interference filters with diameters of 0.75m. We also describe current plans for fabricating, coating, mounting and testing these lenses and filters.

  11. Combustion pinhole camera system

    DOEpatents

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  12. Combustion pinhole camera system

    DOEpatents

    Witte, A.B.

    1984-02-21

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor. 2 figs.

  13. Hemispherical Laue camera

    DOEpatents

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of the sphere of a hemispherical, X-radiation-sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation-sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  14. Gamma ray camera

    DOEpatents

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  15. Gamma ray camera

    DOEpatents

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  16. Orbiter Camera Payload System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high resolution large format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates of from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent and exposure times of from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full format low contrast resolution of 88 lines per millimeter (AWAR), full format distortion of less than 14 microns and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded is 944 pounds and power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.

  17. Compact and robust hyperspectral camera based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Žídek, K.; Denk, O.; Hlubuček, J.; Václavík, J.

    2016-11-01

    The spectrum of light that is emitted or reflected by an object carries an immense amount of information about the object; a simple piece of evidence is the importance of color sensing for human vision. Combining image acquisition with an efficient measurement of the light spectrum for each detected pixel is therefore one of the important issues in imaging, referred to as hyperspectral imaging. We demonstrate the construction of a compact and robust hyperspectral camera for the visible and near-IR spectral region. The camera was designed largely around off-the-shelf optics, yet extensive optimization and the addition of three customized parts enabled construction of a camera featuring a low f-number (F/3.9) and fully concentric optics. We employ the novel approach of compressed sensing (namely coded aperture snapshot spectral imaging, abbrev. CASSI). Compressed sensing makes it possible to computationally extract encoded hyperspectral information from a single camera exposure. Owing to this technique the camera lacks any moving or scanning part, while recording the full image and spectral information in a single snapshot. Moreover, unlike the commonly used compressed-sensing table-top apparatuses, the camera is a portable device able to work outside a lab. We demonstrate the spectro-temporal reconstruction of recorded scenes based on 90×90 random matrix encoding. Finally, we discuss the potential of compressed sensing in hyperspectral cameras.
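
    Generic compressed-sensing recovery can be sketched with ISTA (iterative soft thresholding), which solves min 0.5*||Ax - b||^2 + lam*||x||_1 for a known sensing matrix A (coded aperture plus disperser model) and measurements b. The paper's actual solver is not specified; this is an illustration of the class of algorithm:

    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, b, lam=0.1, n_iter=200):
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)           # gradient of 0.5*||Ax - b||^2
            x = soft(x - grad / L, lam / L)    # proximal step promotes sparsity
        return x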

  18. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy corner extraction algorithm and a checkerboard with 48 corners were also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
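
    The calibration flow described above maps onto a few standard OpenCV calls. A minimal single-camera sketch follows; the board size (9x6 inner corners) and image file names are illustrative assumptions, not values from the paper:

    import cv2
    import numpy as np

    pattern = (9, 6)                              # inner corners per row/column
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in ["view00.png", "view01.png"]:    # hypothetical image files
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Returns RMS reprojection error, intrinsic matrix K, distortion
    # coefficients (radial + tangential), and per-view extrinsics.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)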

  19. Lights, Camera, Reflection!

    ERIC Educational Resources Information Center

    Mourlam, Daniel

    2013-01-01

    There are many ways to critique teaching, but few are more effective than video. Personal reflection through the use of video allows one to see what really happens in the classrooms--good and bad--and provides a visual path forward for improvement, whether it be in one's teaching, work with a particular student, or learning environment. This…

  20. Efficacy of novel robotic camera vs a standard laparoscopic camera.

    PubMed

    Strong, Vivian E M; Hogle, Nancy J; Fowler, Dennis L

    2005-12-01

    To improve visualization during minimal access surgery, a novel robotic camera has been developed. The prototype camera is totally insertable, has 5 degrees of freedom, and is remotely controlled. This study compared the performance of laparoscopic surgeons using both a laparoscope and the robotic camera. The MISTELS (McGill Inanimate System for the Training and Evaluation of Laparoscopic Skill) tasks were used to test six laparoscopic fellows and attending surgeons. Half the surgeons used the laparoscope first and half used the robotic camera first. Total scores from the MISTELS sessions in which the laparoscope was used were compared with the sessions in which the robotic camera was used and then analyzed with a paired t test (P < .05 was considered significant). All six surgeons tested showed no significant difference in their MISTELS task performance on the robotic camera compared with the standard laparoscopic camera. The mean MISTELS score of 963 for all subjects who used a laparoscope and camera was not significantly different than the mean score of 904 for the robotic camera (P = .17). This new robotic camera prototype allows for equivalent performance on a validated laparoscopic assessment tool when compared with performance using a standard laparoscope.

  1. Measuring SO2 ship emissions with an ultraviolet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2014-05-01

    Over the last few years, fast-sampling ultraviolet (UV) imaging cameras have been developed for measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2, with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated, and emission rates could be determined by measuring ship plume speeds simultaneously with the camera or by using surface wind speed data from an independent source. Accuracies were compromised in some cases by the presence of particulates in some ship emissions and by the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) with a single camera. Despite the ease of use and the ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system can only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method for monitoring ship emissions for regulatory purposes. A dual-camera system, or a single dual-filter camera, is required in order to properly correct for the effects of particulates in ship plumes.

  2. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed object to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explored different properties in the reconstruction of the scenes, including light, color, texture, shape and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shape and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented with the Matlab tool. The technique presented here also lets us simulate short videos by reconstructing a sequence of scenes separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.
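
    Two-view triangulation from calibrated intrinsic and extrinsic parameters, as used above for complex objects, can be sketched with OpenCV. The authors worked in Matlab, so this is an analogous flow rather than their code; P1 and P2 are the 3x4 projection matrices (intrinsics times extrinsics) of the two views:

    import cv2
    import numpy as np

    def triangulate(P1, P2, pts1, pts2):
        """pts1, pts2: (2, N) float arrays of matched points in each view."""
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous (4, N)
        return X_h[:3] / X_h[3]                          # Euclidean 3D points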

  3. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
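
    A rough functional analogue of ap can be sketched in a few lines of Python (illustrative only, not the tool's source): resolve the final absolute path, then report the permissions and ownership of each directory component along it.

    import os
    import stat

    def ap(name):
        final = os.path.realpath(name)        # traverse every symlink
        print(final)
        path = "/"
        for part in final.strip("/").split("/"):
            path = os.path.join(path, part)
            st = os.stat(path)
            print(f"{stat.filemode(st.st_mode)} {st.st_uid}:{st.st_gid} {path}")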

  4. Universal ICT Picosecond Camera

    NASA Astrophysics Data System (ADS)

    Lebedev, Vitaly B.; Syrtzev, V. N.; Tolmachyov, A. M.; Feldman, Gregory G.; Chernyshov, N. A.

    1989-06-01

    The paper reports on the design of an ICT camera operating in the mode of linear or three-frame image scan. The camera incorporates two tubes: the time-analyzing ICT PIM-107 [1] with cathode S-11, and the brightness amplifier PMU-2V (gain about 10^4) for the image shaped by the first tube. The camera is designed on the basis of the streak camera AGAT-SF3 [2], with almost the same power sources but substantially modified pulse electronics. Schematically, the design of tube PIM-107 is depicted in the figure. The tube consists of cermet housing 1 and photocathode 2, made in a separate vacuum volume and introduced into the housing by means of a manipulator. In the direct vicinity of the photocathode, an accelerating electrode made of a fine-structure grid is located. An electrostatic lens formed by focusing electrode 4 and anode diaphragm 5 produces a beam of electrons with a "remote crossover". The authors have suggested this term for an electron beam whose crossover is 40 to 60 mm away from the anode diaphragm plane, which guarantees high sensitivity of scan plates 6 with respect to multiaperture framing diaphragm 7. Beyond every diaphragm aperture, a pair of deflecting plates 8 is found, shielded from compensation plates 10 by diaphragm 9. The electronic image produced by the photocathode is focused on luminescent screen 11. The tube is controlled with the help of two saw-tooth voltages applied in antiphase across plates 6 and 10. Plates 6 serve for sweeping the electron beam over the surface of diaphragm 7. The beam is either allowed toward the screen or delayed by the diaphragm walls. In this manner, three frames are obtained, the number corresponding to that of the diaphragm apertures. Plates 10 serve for stopping the compensation of the image streak sweep on the screen. To avoid overlapping of frames, plates 8 receive static potentials responsible for shifting frames on the screen. Changing the potentials applied to plates 8, one can control the spacing between frames and partially or

  5. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  6. Porcelain three-dimensional shape reconstruction and its color reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Xiaoyang; Wu, Haibin; Yang, Xue; Yu, Shuang; Wang, Beiyi; Chen, Deyun

    2013-01-01

    In this paper, structured light three-dimensional measurement technology was used to reconstruct porcelain shape, and furthermore the porcelain color was reconstructed, so that an accurate reconstruction of both the shape and the color of porcelain was realized. A drawing of our shape measurement installation is given. Because the porcelain surface has complex color and is highly reflective, binary Gray code encoding is used to reduce its influence. A color camera was employed to obtain the color of the porcelain surface. Then, the comprehensive reconstruction of shape and color was realized in the Java3D runtime environment. In the reconstruction process, a space point-by-point coloration method is proposed and implemented. Our coloration method ensures pixel correspondence accuracy between the shape and color aspects. Experimental results for porcelain surface shape and color reconstruction, obtained with the proposed method and our installation, show that the depth range is 860-980 mm, the relative error of the shape measurement is less than 0.1%, and the reconstructed color of the porcelain surface is realistic, refined and subtle, with the same visual effect as the measured surface.
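
    The binary Gray code encoding mentioned above can be sketched as follows: bit k of the code decides whether a projector column is lit in pattern k, and decoding the per-pixel bit sequence recovers the column index, from which depth follows by triangulation. A minimal illustration, not the paper's implementation:

    def to_gray(n):
        return n ^ (n >> 1)            # binary-reflected Gray code

    def from_gray(g):
        n = 0
        while g:                       # cumulative XOR of all right shifts
            n ^= g
            g >>= 1
        return n

    # Example: projector column 300 encoded with 10 patterns (2**10 columns).
    bits = [(to_gray(300) >> k) & 1 for k in range(10)]   # bit seen per pattern
    decoded = from_gray(sum(b << k for k, b in enumerate(bits)))
    assert decoded == 300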

  7. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  8. Accuracy of 3-D reconstruction with occlusions.

    PubMed

    Begon, Mickaël; Lacouture, Patrick

    2010-02-01

    A marker has to be seen by at least two cameras for its three-dimensional (3-D) reconstruction, and the accuracy can be improved with more cameras. However, a change in the set of cameras used in the reconstruction can alter the kinematics. The purpose of this study was to quantify the harmful effect of occlusions in two-dimensional (2-D) images and to make recommendations about the signal processing. A reference kinematics data set was collected for a three degree-of-freedom linkage with three cameras of a commercial motion analysis system, without any occlusion in the 2-D images. In the 2-D images, some occlusions were artificially created based on trials of real cyclic motions. An interpolation of 2-D trajectories before the 3-D reconstruction and two filters (Savitzky-Golay and Butterworth filters) after reconstruction were successively applied to minimize the effect of the 2-D occlusions. The filter parameters were optimized by minimizing the root mean square error between the reference and the filtered data. The optimal parameters of the filters were marker dependent, whereas no filter was necessary after a 2-D interpolation. As the occlusions cause systematic error in the 3-D reconstruction, the interpolation of the 2-D trajectories is more appropriate than filtering the 3-D trajectories.
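
    The two filters compared above are available in SciPy; a sketch follows, with placeholder window and cutoff values (the study optimized the parameters per marker against a reference):

    import numpy as np
    from scipy.signal import savgol_filter, butter, filtfilt

    def smooth_trajectory(x, fs):
        """x: 1-D marker coordinate series sampled at fs Hz."""
        sg = savgol_filter(x, window_length=21, polyorder=3)   # Savitzky-Golay
        b, a = butter(4, 6.0 / (fs / 2.0))                     # 6 Hz low-pass
        bw = filtfilt(b, a, x)                                 # zero-phase filtering
        return sg, bw

    # 2-D interpolation before reconstruction (the better option reported):
    def fill_gaps(t, x, t_query):
        return np.interp(t_query, t, x)   # linear; splines are another choice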

  9. PAU camera: detectors characterization

    NASA Astrophysics Data System (ADS)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide-field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels each, with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K. K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This work is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is being performed by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain; the linearity vs. light stimulus; the full-well capacity; the cosmetic defects; the read-out noise; the dark current; the stability vs. temperature; and the light remanence.
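
    The PTC gain measurement mentioned above is commonly computed from a pair of equal flat fields; a minimal sketch, not the laboratories' actual pipeline. Differencing the two flats cancels fixed-pattern noise, and the mean-to-variance ratio gives the gain in electrons per ADU:

    import numpy as np

    def ptc_gain(flat1, flat2, bias):
        f1 = flat1.astype(float) - bias
        f2 = flat2.astype(float) - bias
        mean_signal = 0.5 * (f1.mean() + f2.mean())
        var_per_frame = np.var(f1 - f2) / 2.0   # differencing doubles the variance
        return mean_signal / var_per_frame      # gain in e-/ADU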

  10. Stereoscopic camera design

    NASA Astrophysics Data System (ADS)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  11. NFC - Narrow Field Camera

    NASA Astrophysics Data System (ADS)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We introduce a low-cost CCTV video system for faint meteor monitoring and describe the first results from 5 months of two-station operations. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on the trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated within CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of the theoretical assumptions about the NFC system's capabilities (stellar and meteor limiting magnitude, meteor apparent brightness distribution and accuracy of single-station measurements) and the first low-mass meteoroid trajectory calculations. Our experimental data clearly show the capabilities of the proposed system for low-mass meteor registration and show that calculations based on NFC data lead to a significant refinement of the orbital elements of low-mass meteoroids.

  12. HONEY -- The Honeywell Camera

    NASA Astrophysics Data System (ADS)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  13. Penile Reconstruction

    PubMed Central

    Salgado, Christopher J.; Chim, Harvey; Tang, Jennifer C.; Monstrey, Stan J.; Mardini, Samir

    2011-01-01

    A variety of surgical options exists for penile reconstruction. The key to success of therapy is holistic management of the patient, with attention to the psychological aspects of treatment. In this article, we review reconstructive modalities for various types of penile defects inclusive of partial and total defects as well as the buried penis, and also describe recent basic science advances, which may promise new options for penile reconstruction. PMID:22851914

  14. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  15. Fast frame scanning camera system for light-sheet microscopy.

    PubMed

    Wu, Di; Zhou, Xing; Yao, Baoli; Li, Runze; Yang, Yanlong; Peng, Tong; Lei, Ming; Dan, Dan; Ye, Tong

    2015-10-10

    In the interest of improving the temporal resolution for light-sheet microscopy, we designed a fast frame scanning camera system that incorporated a galvanometer scanning mirror into the imaging path of a home-built light-sheet microscope. This system transformed a temporal image sequence to a spatial one so that multiple images could be acquired during one exposure period. The improvement factor of the frame rate was dependent on the number of sub-images that could be tiled on the sensor without overlapping each other and was therefore a trade-off with the image size. As a demonstration, we achieved 960 frames/s (fps) on a CCD camera that was originally capable of recording images at only 30 fps (full frame). This allowed us to observe millisecond or sub-millisecond events with ordinary CCD cameras.
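
    The arithmetic behind the quoted numbers is simple: the frame-rate gain equals the number of non-overlapping sub-images that fit on the sensor. A minimal Python sketch (the sensor and sub-image dimensions below are illustrative, not taken from the paper):

        # Frame-rate gain of a frame-scanning camera: the galvanometer tiles
        # N sub-images onto the sensor during one exposure, so the effective
        # rate is N times the base rate, at the cost of N-times-smaller frames.
        def effective_fps(base_fps, sensor_w, sensor_h, sub_w, sub_h):
            tiles = (sensor_w // sub_w) * (sensor_h // sub_h)
            return tiles, base_fps * tiles

        tiles, fps = effective_fps(30, 1024, 1024, 128, 256)
        print(tiles, fps)  # 32 sub-images -> 960 fps, matching the 30 fps -> 960 fps example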

  16. Penile reconstruction

    PubMed Central

    Garaffa, Giulio; Sansalone, Salvatore; Ralph, David J

    2013-01-01

    During the most recent years, a variety of new techniques of penile reconstruction have been described in the literature. This paper focuses on the most recent advances in male genital reconstruction after trauma, excision of benign and malignant disease, in gender reassignment surgery and aphallia with emphasis on surgical technique, cosmetic and functional outcome. PMID:22426595

  17. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
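
    The per-frame step described above (match 2D image feature points to 3D model points, then search for the combination that yields a consistent projection) is, in spirit, a robust perspective-n-point problem. A minimal Python sketch using OpenCV's RANSAC PnP solver, assuming candidate 2D-3D correspondences have already been hypothesized; this illustrates the idea and is not the patented method itself:

        import numpy as np
        import cv2

        def estimate_pose(model_pts, image_pts, K):
            """Robust camera pose from hypothesized 2D-3D correspondences.

            model_pts: (N, 3) candidate 3D model feature points
            image_pts: (N, 2) matching 2D image feature points
            K:         (3, 3) camera intrinsic matrix
            RANSAC keeps the subset of correspondences that yields a
            consistent projection of the model onto the image, mirroring
            the consistency search described in the abstract.
            """
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                model_pts.astype(np.float32), image_pts.astype(np.float32),
                K, distCoeffs=None, reprojectionError=3.0)
            return (rvec, tvec, inliers) if ok else None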

  18. Oblique along path toward structures at rear of parcel. Original ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Oblique along path toward structures at rear of parcel. The original skinny mosaic path along the edge of the structures was altered (the delineation can be seen in the concrete); the path was widened with a newer mosaic to make access to the site safer. Structures (from right): edge of Round House (with "Spring Garden"), Pencil House, Shell House, School House; wood lattice is attached to the chain-link fence along the north (rear) property line. These structures were all damaged by the 1994 Northridge earthquake. Camera facing northeast. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA

  19. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Since the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and continuing up to their quantification in JNDs of quality, as required by the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.
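
    The "multivariate formalism" referred to above commonly combines per-attribute quality losses, each expressed in JNDs, through a Minkowski-type metric; the exponent and the restriction to two attributes below are illustrative, not the pilot study's calibrated values:

        \Delta Q_{\mathrm{overall}} = \big( \Delta Q_{\mathrm{LGD}}^{\,p} + \Delta Q_{\mathrm{LCA}}^{\,p} \big)^{1/p}, \qquad p > 1,

    so that for large p the most objectionable attribute dominates the overall quality loss, which is why each attribute must first be placed on a common JND scale.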

  20. Automation of a Guinier camera for X-ray diffraction

    NASA Astrophysics Data System (ADS)

    Duijn, Jozef Henricus

    A Guinier camera was equipped with a curved proportional counter to allow fast recording of diffraction patterns. The focusing principles are discussed and the optimum dimensions of the proportional counter are determined. Measurements on a counter prototype are discussed. A simplified readout method is introduced. In order to reconstruct the position of absorption of an incident X-ray, the charge distribution on the cathode strips of the counter is measured. The results are compared with computed charge distributions. A protocol which corrects the systematic errors introduced by the charge-ratio reconstruction method is presented.

  1. Dynamic Human Body Modeling Using a Single RGB Camera.

    PubMed

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  2. Dynamic Human Body Modeling Using a Single RGB Camera

    PubMed Central

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-01-01

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones. PMID:26999159

  3. A novel SPECT camera for molecular imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir

    2011-10-01

    The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.

  4. Auto-preview camera orientation for environment perception on a mobile robot

    NASA Astrophysics Data System (ADS)

    Radovnikovich, Micho; Vempaty, Pavan K.; Cheok, Ka C.

    2010-01-01

    Using wide-angle or omnidirectional camera lenses to increase a mobile robot's field of view introduces nonlinearity in the image due to the 'fish-eye' effect. This complicates distance perception and increases image processing overhead. Using multiple cameras avoids the fish-eye complications, but involves using more electrical and processing power to interface them to a computer. By controlling the orientation of a single camera, both of these disadvantages are minimized while still allowing the robot to preview a wider area. In addition, controlling the orientation allows the robot to optimize its environment perception by only looking where the most useful information can be discovered. In this paper, a technique is presented that creates a two-dimensional map of objects of interest surrounding a mobile robot equipped with a panning camera on a telescoping shaft. Before attempting to negotiate a difficult path planning situation, the robot takes snapshots at different camera heights and pan angles and then produces a single map of the surrounding area. Distance perception is performed by making calibration measurements of the camera and applying coordinate transformations to project the camera's findings into the vehicle's coordinate frame. To test the system, obstacles and lines were placed to form a chicane. Several snapshots were taken with different camera orientations, and the information from each was stitched together to yield a very useful map of the surrounding area for the robot to use to plan a path through the chicane.

  5. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.

  6. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.

  7. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  8. The "All Sky Camera Network"

    ERIC Educational Resources Information Center

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  9. SEOS frame camera applications study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A research and development satellite is discussed which will provide opportunities for observation of transient phenomena that fall within the fixed viewing circle of the spacecraft. Possible applications of frame cameras for SEOS are evaluated. The computed lens characteristics for each camera are listed.

  10. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  11. A method for measuring aircraft height and velocity using dual television cameras

    NASA Technical Reports Server (NTRS)

    Young, W. R.

    1977-01-01

    A unique electronic optical technique, consisting of two closed circuit television cameras and timing electronics, was devised to measure an aircraft's horizontal velocity and height above ground without the need for airborne cooperative devices. The system is intended to be used where the aircraft has a predictable flight path and a height of less than 660 meters (2,000 feet) at or near the end of an air terminal runway, but is suitable for greater aircraft altitudes whenever the aircraft remains visible. Two television cameras, pointed at zenith, are placed in line with the expected path of travel of the aircraft. Velocity is determined by measuring the time it takes the aircraft to travel the measured distance between cameras. Height is determined by correlating this speed with the time required to cross the field of view of either camera. Preliminary tests with a breadboard version of the system and a small model aircraft indicate the technique is feasible.
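
    The geometry above lends itself to a short worked formulation. Writing D for the surveyed baseline between the two zenith-pointing cameras, t_{12} for the measured transit time between them, \theta for the full angular width of one camera's field of view, and t_{FOV} for the time the aircraft takes to cross that field, one plausible formalization (a sketch consistent with the abstract, not equations quoted from the report) is

        v = \frac{D}{t_{12}}, \qquad h = \frac{v\, t_{\mathrm{FOV}}}{2\tan(\theta/2)},

    since an aircraft at height h sweeps a ground-track segment of length 2h\tan(\theta/2) while crossing the field of view.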

  12. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012
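
    For reference, the single-camera collinearity model that the paper extends relates an object point (X, Y, Z) to its image coordinates (x, y) through the perspective center (X_0, Y_0, Z_0), the rotation matrix R = (r_{ij}) and the principal distance c:

        x - x_p = -c\, \frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad
        y - y_p = -c\, \frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)},

    with (x_p, y_p) the principal point. The multi-camera modification additionally parameterizes the relative orientation (mounting) between the cameras, which is exactly what the proposed stability analysis monitors over time.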

  13. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  14. Time-of-Flight Microwave Camera

    PubMed Central

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598

  15. Time-of-Flight Microwave Camera

    NASA Astrophysics Data System (ADS)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
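
    The quoted figures can be cross-checked with standard relations. A time resolution of \Delta t = 200 ps corresponds to a free-space path of c\,\Delta t = (3\times 10^{8}\ \mathrm{m/s})(2\times 10^{-10}\ \mathrm{s}) = 6 cm, exactly the optical path stated above, and the 4 GHz FMCW sweep bandwidth B implies a range resolution on the order of

        \Delta R = \frac{c}{2B} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2\times(4\times 10^{9}\ \mathrm{Hz})} \approx 3.75\ \mathrm{cm}.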

  16. Ligament reconstruction.

    PubMed

    Glickel, Steven Z; Gupta, Salil

    2006-05-01

    Volar ligament reconstruction is an effective technique for treating symptomatic laxity of the CMC joint of the thumb. The laxity may be a manifestation of generalized ligament laxity, or be post-traumatic or metabolic (Ehlers-Danlos) in origin. The reconstruction reduces the shear forces on the joint that contribute to the development and persistence of inflammation. Although there have been only a few reports of the results of volar ligament reconstruction, the use of the procedure to treat Stage I and Stage II disease consistently gives good to excellent results. More advanced stages of disease are best treated by trapeziectomy, with or without ligament reconstruction.

  17. CCD Camera Observations

    NASA Astrophysics Data System (ADS)

    Buchheim, Bob; Argyle, R. W.

    One night late in 1918, astronomer William Milburn, observing the region of Cassiopeia from Reverend T.H.E.C. Espin's observatory in Tow Law (England), discovered a hitherto unrecorded double star (Wright 1993). He reported it to Rev. Espin, who measured the pair using his 24-in. reflector: the fainter star was 6.0 arcsec from the primary, at position angle 162.4° (i.e. the fainter star was south-by-southeast from the primary) (Espin 1919). Some time later, it was recognized that the astrograph of the Vatican Observatory had taken an image of the same star-field a dozen years earlier, in late 1906. At that earlier epoch, the fainter star had been separated from the brighter one by only 4.8 arcsec, at position angle 186.2° (i.e. almost due south). Were these stars a binary pair, or were they just two unrelated stars sailing past each other? Some additional measurements might have begun to answer this question. If the secondary star was following a curved path, that would be a clue of orbital motion; if it followed a straight-line path, that would be a clue that these are just two stars passing in the night. Unfortunately, nobody took the trouble to re-examine this pair for almost a century, until the 2MASS astrometric/photometric survey recorded it in late 1998. After almost another decade, this amateur astronomer took some CCD images of the field in 2007, and added another data point on the star's trajectory, as shown in Fig. 15.1.

  18. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
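
    The refocusing step sketched in the tutorial is usually written, in the light-field literature (e.g., Ng's photography operator), as a shift-and-add integral over the aperture plane; schematically, with L_F the light field parameterized by the lens plane (u, v) and a sensor plane at distance F, refocusing to a plane at distance \alpha F is

        E_{\alpha F}(s, t) = \frac{1}{\alpha^{2} F^{2}} \iint L_F\!\Big(u,\; v,\; u + \frac{s - u}{\alpha},\; v + \frac{t - v}{\alpha}\Big)\, \mathrm{d}u\, \mathrm{d}v.

    Varying \alpha after capture is what lets the plenoptic camera adjust the focus computationally.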

  19. Camera sensitivity study

    NASA Astrophysics Data System (ADS)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, including controlling costs, the inability to control illumination, developing and training a reliable classification system, and loss of performance due to production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations. The first procedure is based on classification accuracy, while the second evaluates feature differences.

  20. Proportional counter radiation camera

    DOEpatents

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  1. The universal path integral

    NASA Astrophysics Data System (ADS)

    Lloyd, Seth; Dreyer, Olaf

    2016-02-01

    Path integrals calculate probabilities by summing over classical configurations of variables such as fields, assigning each configuration a phase equal to the action of that configuration. This paper defines a universal path integral, which sums over all computable structures. This path integral contains as sub-integrals all possible computable path integrals, including those of field theory, the standard model of elementary particles, discrete models of quantum gravity, string theory, etc. The universal path integral possesses a well-defined measure that guarantees its finiteness. The probabilities for events corresponding to sub-integrals can be calculated using the method of decoherent histories. The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures.

  2. Detection of the optimal region of interest for camera oximetry.

    PubMed

    Karlen, Walter; Ansermino, J Mark; Dumont, Guy A; Scheffer, Cornie

    2013-01-01

    The estimation of heart rate and blood oxygen saturation with an imaging array on a mobile phone (camera oximetry) has great potential for mobile health applications, as no hardware beyond a camera- and LED-flash-equipped phone is required. However, this approach is challenging, as the configuration of the camera can negatively influence the estimation quality. Further, the number of photons recorded by the photodetector is largely dependent on the optical path length, resulting in a non-homogeneous image. In this paper we describe a novel method to automatically detect the optimal region of interest (ROI) in the captured image for extracting a pulse waveform. We also present a study to select the optimal camera settings, notably the white balance. The experiments show that the incandescent white balance mode is the preferable setting for camera oximetry applications on the tested mobile phone (Samsung Galaxy Ace). Also, the ROI algorithm successfully identifies the frame regions which provide waveforms with the largest amplitudes.
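
    The ROI selection described above can be sketched in a few lines of Python: tile the frame into blocks, build a per-block temporal waveform from the mean of one color channel, and keep the block whose waveform has the largest pulsatile (AC) amplitude. The block size, channel choice and amplitude proxy below are assumptions for illustration, not the paper's exact algorithm:

        import numpy as np

        def best_roi(frames, block=40, channel=1):
            """frames: (T, H, W, 3) uint8 video; returns top-left (row, col) of best block.

            channel=1 (green) is a common choice for camera-based PPG;
            the paper's own channel and settings may differ.
            """
            T, H, W, _ = frames.shape
            best, best_amp = None, -1.0
            for r in range(0, H - block + 1, block):
                for c in range(0, W - block + 1, block):
                    # temporal mean intensity of this block -> candidate waveform
                    wave = frames[:, r:r + block, c:c + block, channel].mean(axis=(1, 2))
                    amp = wave.std()  # proxy for the pulsatile (AC) amplitude
                    if amp > best_amp:
                        best, best_amp = (r, c), amp
            return best, best_amp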

  3. Thermal design and flight validation for high precision camera

    NASA Astrophysics Data System (ADS)

    Meng, Henghui; Sun, Lixia; Zhang, Chuanqiang; Geng, Liyin

    2015-10-01

    A high-precision camera designed for an advanced optical system, with a wide field of view, high resolution and fast response, has a wide range of applications. As the main payload of a spacecraft, the optical remote sensor is mounted exposed to space, which means it must maintain reliable optical performance in the harsh space environment throughout its lifetime. Because of its optical characteristics, the imaging path must remain accurate and thermal deformation of the optical parts must be kept small during operation, so the high-precision camera places stringent requirements on temperature control. A high-resolution space camera is generally required to have the capability of adapting to space thermal environments. Changes in the satellite's rolling attitude affect the temperature distribution of the camera and thereby its optical performance. The thermal control design of the space camera is presented, and in-orbit temperature data are analyzed to verify that the thermal design is correct. It is shown that the rolling attitude has more influence on the outer parts and less influence on the inner parts, and that active thermal control can weaken the influence of the rolling attitude.

  4. ACL reconstruction

    MedlinePlus

    ... This increases the chance you may have a meniscus tear. ACL reconstruction may be used for these ...: when other ligaments are also injured; when your meniscus is torn. Before surgery, talk to your health ...

  5. Video inpainting under constrained camera motion.

    PubMed

    Patwardhan, Kedar A; Sapiro, Guillermo; Bertalmío, Marcelo

    2007-02-01

    A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground, it may occlude one object and be occluded by some other object. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement and fast, does not require statistical models of the background or foreground, works well in the presence of rich and cluttered backgrounds, and produces results free of visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown, supporting these findings.

  6. Versatility of the CFR algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V. )

    1990-04-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  7. Light field reconstruction robust to signal dependent noise

    NASA Astrophysics Data System (ADS)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a signal-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.
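
    The paper's exact model is not reproduced in the abstract, but signal-dependent camera noise is commonly described by a variance affine in the signal (Poisson shot noise plus Gaussian read noise), and a noise-aware reconstruction then reweights the data term accordingly; a generic sketch, with A the coded-aperture measurement operator and \Phi a regularizer:

        \operatorname{Var}(n_i) = a\,s_i + b, \qquad \hat{x} = \arg\min_{x}\; \sum_i \frac{\big(y_i - (Ax)_i\big)^{2}}{a\,(Ax)_i + b} + \lambda\,\Phi(x),

    so that measurements taken in bright (high-variance) regions are trusted less than those in dark ones.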

  8. The infrared camera onboard JEM-EUSO

    NASA Astrophysics Data System (ADS)

    Adams, J. H.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J.-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J.-S.; Kim, S.-W.; Kim, S.-W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J.; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.

    2015-11-01

    The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) on board the International Space Station (ISS) is the first space-based mission worldwide in the field of Ultra High-Energy Cosmic Rays (UHECR). For UHECR experiments, the atmosphere is not only the showering calorimeter for the primary cosmic rays, it is an essential part of the readout system, as well. Moreover, the atmosphere must be calibrated and has to be considered as input for the analysis of the fluorescence signals. Therefore, the JEM-EUSO Space Observatory is implementing an Atmospheric Monitoring System (AMS) that will include an IR-Camera and a LIDAR. The AMS Infrared Camera is an infrared, wide FoV, imaging system designed to provide the cloud coverage along the JEM-EUSO track and the cloud top height to properly achieve the UHECR reconstruction in cloudy conditions. In this paper, an updated preliminary design status, the results from the calibration tests of the first prototype, the simulation of the instrument, and preliminary cloud top height retrieval algorithms are presented.

  9. Process simulation in digital camera system

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift-invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
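
    As a small illustration of one block in such a pipeline, here is a minimal Python sketch of Bayer sampling and bilinear demosaicing (an RGGB layout and these interpolation kernels are assumed for illustration; the paper's simulation details may differ):

        import numpy as np
        from scipy.signal import convolve2d

        def bayer_mosaic(rgb):
            """Sample an (H, W, 3) float image onto an RGGB Bayer pattern."""
            H, W, _ = rgb.shape
            mosaic = np.zeros((H, W))
            mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
            mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
            mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
            mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
            return mosaic

        def demosaic_bilinear(mosaic):
            """Bilinearly interpolate the missing samples of each channel."""
            H, W = mosaic.shape
            masks = np.zeros((H, W, 3))
            masks[0::2, 0::2, 0] = 1  # R sites
            masks[0::2, 1::2, 1] = 1  # G sites
            masks[1::2, 0::2, 1] = 1  # G sites
            masks[1::2, 1::2, 2] = 1  # B sites
            k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
            k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
            out = np.zeros((H, W, 3))
            for ch, k in zip(range(3), (k_rb, k_g, k_rb)):
                out[:, :, ch] = convolve2d(mosaic * masks[:, :, ch], k, mode='same')
            return out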

  10. Foreground extraction for moving RGBD cameras

    NASA Astrophysics Data System (ADS)

    Junejo, Imran N.; Ahmed, Naveed

    2017-02-01

    In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have now been available for quite some time. Their popularity is primarily due to their low cost and ease of availability. Although the field of foreground extraction or background subtraction has been explored by computer vision researchers for a long time, depth-based subtraction is relatively new and has not yet been extensively addressed. Most of the current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and then represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene. That is, we use the positively classified FAST features on the test frame to initiate region growing and obtain an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets, with encouraging quantitative and qualitative results.
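
    A rough Python sketch of the classification stage described above (FAST corners described by HOG and classified by a nonlinear SVM); the window size and SVM settings are illustrative assumptions, and the depth-ordered region growing that produces the final segmentation is omitted:

        import numpy as np
        import cv2
        from sklearn.svm import SVC

        WIN = 64  # side of the HOG window around each corner (assumed value)
        hog = cv2.HOGDescriptor((WIN, WIN), (16, 16), (8, 8), (8, 8), 9)
        fast = cv2.FastFeatureDetector_create()

        def corner_descriptors(gray):
            """FAST corners -> HOG descriptor of the patch around each corner."""
            feats, pts = [], []
            for kp in fast.detect(gray, None):
                x = int(kp.pt[0]) - WIN // 2
                y = int(kp.pt[1]) - WIN // 2
                if 0 <= x <= gray.shape[1] - WIN and 0 <= y <= gray.shape[0] - WIN:
                    feats.append(hog.compute(gray[y:y + WIN, x:x + WIN]).ravel())
                    pts.append(kp.pt)
            return np.array(feats), pts

        # Training: descriptors from frames with known foreground/background labels.
        #   clf = SVC(kernel='rbf').fit(train_feats, train_labels)
        # Testing: corners classified as foreground seed a depth-based region
        # growing (omitted here) that yields the final segmentation.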

  11. Characterizing the Evolutionary Path(s) to Early Homo

    PubMed Central

    Schroeder, Lauren; Roseman, Charles C.; Cheverud, James M.; Ackermann, Rebecca R.

    2014-01-01

    Numerous studies suggest that the transition from Australopithecus to Homo was characterized by evolutionary innovation, resulting in the emergence and coexistence of a diversity of forms. However, the evolutionary processes necessary to drive such a transition have not been examined. Here, we apply statistical tests developed from quantitative evolutionary theory to assess whether morphological differences among late australopith and early Homo species in Africa have been shaped by natural selection. Where selection is demonstrated, we identify aspects of morphology that were most likely under selective pressure, and determine the nature (type, rate) of that selection. Results demonstrate that selection must be invoked to explain an Au. africanus—Au. sediba—Homo transition, while transitions from late australopiths to various early Homo species that exclude Au. sediba can be achieved through drift alone. Rate tests indicate that selection is largely directional, acting to rapidly differentiate these taxa. Reconstructions of patterns of directional selection needed to drive the Au. africanus—Au. sediba—Homo transition suggest that selection would have affected all regions of the skull. These results may indicate that an evolutionary path to Homo without Au. sediba is the simpler path and/or provide evidence that this pathway involved more reliance on cultural adaptations to cope with environmental change. PMID:25470780

  12. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
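
    The multi-criteria idea can be caricatured in a few lines of Python: score each candidate camera pose by a weighted combination of criterion costs and let user-adjustable weights steer path generation. This toy scoring function is purely illustrative; the paper's dynamic solver and force-directed routing are not reproduced here:

        import numpy as np

        def pose_cost(pose, criteria, weights):
            """Weighted multi-criteria cost of one candidate camera pose.

            criteria: functions pose -> cost in [0, 1], e.g. distance to the
                      feature of interest, occlusion of the view, deviation
                      from a smooth path (all hypothetical stand-ins here)
            weights:  user-tunable importance of each criterion
            """
            return float(np.dot(weights, [c(pose) for c in criteria]))

        def pick_next(candidates, criteria, weights):
            # greedily choose the cheapest next pose along the camera path
            return min(candidates, key=lambda p: pose_cost(p, criteria, weights))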

  13. Pulled Motzkin paths

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.

    2010-08-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.
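
    For context, in the unweighted case a Motzkin path is either empty, a level step followed by a Motzkin path, or an up step, a Motzkin path, a down step and another Motzkin path, giving the functional equation M(x) = 1 + xM(x) + x^2 M(x)^2 and hence the well-known generating function

        M(x) = \frac{1 - x - \sqrt{1 - 2x - 3x^{2}}}{2x^{2}},

    whose coefficients are the Motzkin numbers; the trinomial coefficients entering the pulled models play the role that binomial coefficients play for Dyck paths, since Motzkin paths take steps in {-1, 0, +1}.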

  14. Path Integrals and Hamiltonians

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.

    2014-03-01

    1. Synopsis; Part I. Fundamental Principles: 2. The mathematical structure of quantum mechanics; 3. Operators; 4. The Feynman path integral; 5. Hamiltonian mechanics; 6. Path integral quantization; Part II. Stochastic Processes: 7. Stochastic systems; Part III. Discrete Degrees of Freedom: 8. Ising model; 9. Ising model: magnetic field; 10. Fermions; Part IV. Quadratic Path Integrals: 11. Simple harmonic oscillators; 12. Gaussian path integrals; Part V. Action with Acceleration: 13. Acceleration Lagrangian; 14. Pseudo-Hermitian Euclidean Hamiltonian; 15. Non-Hermitian Hamiltonian: Jordan blocks; 16. The quartic potential: instantons; 17. Compact degrees of freedom; Index.

  15. Hi-G electronic gated camera for precision trajectory analysis

    NASA Astrophysics Data System (ADS)

    Snyder, Donald R.; Payne, Scott; Keller, Ed; Longo, Salvatore; Caudle, Dennis E.; Walker, Dennis C.; Sartor, Mark A.; Keeler, Joe E.; Kerr, David A.; Fail, R. Wallace; Gannon, Jim; Carrol, Ernie; Jamison, Todd A.

    1997-12-01

    trajectory, timing, and advanced sensor development. This system will be used for ground tracking data reduction in support of small air vehicle and munition testing. It will provide a means of integrating the imagery and telemetry data from the item with ground-based photographic support. The technique we have designed will exploit off-the-shelf software and analysis components. A differential GPS survey instrument will establish a photogrammetric calibration grid throughout the range and reference targets along the flight path. Images from the on-board sensor will be used to calibrate the ortho-rectification model in the analysis software. The projectile images will be transmitted and recorded on several tape recorders to ensure complete capture of each video field. The images will be combined with a non-linear video editor into a time-correlated record. Each correlated video field will be written to video disk. The files will be converted to DMA-compatible format and then analyzed for determination of the projectile altitude, attitude and position in space. The resulting data file will be used to create a photomosaic of the ground the projectile flew over and the targets it saw. The data will then be transformed to a trajectory file and used to generate a graphic overlay that will merge digital photo data of the range with actual images captured. The plan is to superimpose the flight path of the projectile, the path of the weapon's aimpoint, and annotation of each internal sequence event. With tools used to produce state-of-the-art computer graphics, we now think it will be possible to reconstruct the test event from the viewpoint of the warhead, the target, and a 'God's-Eye' view looking over the shoulder of the projectile.

  16. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  17. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; environmental pathways and dose estimates.

  18. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  19. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  20. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
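
    In schematic terms (the operator notation here is assumed from the CUP literature rather than quoted from this paper), CUP reconstruction inverts a forward model y = TSCx, with C the pseudo-random spatial encoding, S the temporal shearing and T the spatiotemporal integration on the sensor; the space- and intensity-constrained variant can then be written as

        \hat{x} = \arg\min_{x}\; \tfrac{1}{2}\,\lVert y - TSCx \rVert_2^2 + \lambda\,\Phi_{\mathrm{TV}}(x) \quad \text{s.t.} \quad x \ge 0, \quad x = 0 \ \text{outside the spatial mask}, \quad x \ \text{consistent with the intensity threshold},

    where the mask and the intensity threshold are extracted from the external CCD's time-unsheared image as described above.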

  1. Dark energy survey and camera

    SciTech Connect

    William Wester

    2004-08-16

    The authors describe the Dark Energy Survey and Camera. The survey will image 5000 sq. deg. in the southern sky to collect 300 million galaxies, 30,000 galaxy clusters and 2000 Type Ia supernovae. They expect to derive a value for the dark energy equation of state parameters, w, to a precision of 5% by combining four distinct measurement techniques. They describe the mosaic camera that will consist of CCDs with enhanced sensitivity in the near infrared. The camera will be mounted at the prime focus of the 4m Blanco telescope.

  2. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

  3. Collision-related Early Paleozoic evolution of a crustal fragment from the northern Gondwana margin (Slavonian Mountains, Tisia Mega-Unit, Croatia): Reconstruction of the P-T path, timing and paleotectonic implications

    NASA Astrophysics Data System (ADS)

    Balen, D.; Massonne, H.-J.; Petrinec, Z.

    2015-09-01

An orthogneiss from the oldest metamorphic complex at Mt. Papuk (Tisia Mega-Unit, Croatia) enables the quantification of the P-T evolution of Early Paleozoic rocks of the Pannonian Basin basement, in contrast to neighboring peri-Gondwanan terranes which are significantly overprinted by pre-Variscan, Variscan, and Alpine events. Two different groups of Ce-rich monazite within oval-shaped corona microstructures have been observed. Age dating of the corona cores yielded two populations with average ages of 528 ± 7 (2σ) Ma and 465 ± 7 Ma, respectively. Furthermore, a Y-rich group, found inside garnet cores, was dated at 616 ± 23 Ma. Th-rich monazite included in garnet rims yielded an age of 491 ± 6 Ma. The youngest monazite group, at 417 ± 20 Ma, is located inside mica. The orthogneiss precursor was a calc-alkaline to high-K calc-alkaline igneous peraluminous crustal rock (diorite) from an active continental margin setting. The calculated P-T pseudosection in the MnNCKFMASHTO system, in combination with assemblage characteristics and mineral chemistry data, provides good constraints on the P-T evolution: for stage I, peak P-T conditions of 13 kbar and 670 °C were derived, followed by stage II, which was characterized by moderate cooling accompanied by uplift to mid-crustal levels (5.2 kbar and 610 °C). Subsequently, the system cooled to 480 °C at 4.4 kbar (stage III). Formation of titanite rims on ilmenite suggests further cooling to 4 kbar and 400 °C (stage IV). The clockwise P-T path implies exhumation from a tectonically thickened crustal setting (ca. 45 km depth at a geothermal gradient of 15 °C/km) to mid-crustal levels (ca. 18 km) followed by cooling at depths < 14 km. Crustal thickening was due to the collision of a continental plate (Gondwana) with a smaller plate, which was underthrust.

  4. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets were acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.
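
    A rolling shutter exposes each image row at a slightly different time, so the projection of a 3D point depends on a pose that itself depends on the projected row. The sketch below resolves this coupling with a short fixed-point iteration, assuming pure constant-velocity translation and illustrative intrinsics; it is not Pix4Dmapper's model.

      import numpy as np

      def project_rs(X, K, c0, v, row_time, iters=5):
          """Project 3D point X into a rolling shutter camera translating at
          constant velocity v (identity rotation, for brevity). Row r is
          exposed at time r * row_time, so the pose depends on the row."""
          row = 0.0
          for _ in range(iters):
              t = row * row_time
              Xc = X - (c0 + v * t)     # point in camera frame at that row's exposure time
              u = K @ Xc
              u = u[:2] / u[2]          # pinhole projection
              row = u[1]                # projected row fixes the exposure time
          return u

      K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
      X = np.array([2.0, 1.0, 30.0])   # point 30 m away (illustrative)
      v = np.array([8.0, 0.0, 0.0])    # 8 m/s drone speed, as in the paper
      print(project_rs(X, K, np.zeros(3), v, row_time=30e-6))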

  5. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    PubMed Central

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high-dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was cut from the main-lobe image and shifted across a 100×100 pixel region; the position giving the largest correlation coefficient between the side-lobe image and the circle-cut main-lobe image was identified as the best matching point. Finally, the least squares method was used to fit the center of the side-lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than manual splicing, this method improves both the efficiency of focal-spot reconstruction and the experimental precision. PMID:28207758
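
    The final step, fitting a circle center by least squares to sub-pixel accuracy, can be illustrated with the classic Kåsa fit. The routine below is a generic sketch on synthetic data, not the authors' implementation.

      import numpy as np

      def fit_circle(x, y):
          """Least-squares (Kasa) circle fit: solve x^2 + y^2 + a*x + b*y + c = 0
          for (a, b, c), then recover the center and radius."""
          A = np.column_stack([x, y, np.ones_like(x)])
          rhs = -(x**2 + y**2)
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          cx, cy = -a / 2.0, -b / 2.0
          r = np.sqrt(cx**2 + cy**2 - c)
          return cx, cy, r

      # Noisy points on a circle of radius 40 px centered at (120, 80)
      t = np.linspace(0, 2 * np.pi, 200)
      x = 120 + 40 * np.cos(t) + np.random.normal(0, 0.3, t.size)
      y = 80 + 40 * np.sin(t) + np.random.normal(0, 0.3, t.size)
      print(fit_circle(x, y))          # center recovered to well below 1 px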

  6. Depth Cameras on UAVs: a First Approach

    NASA Astrophysics Data System (ADS)

    Deris, A.; Trigonis, I.; Aravanis, A.; Stathopoulou, E. K.

    2017-02-01

Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision and robotics. Various technologies, active as well as passive, are used to serve this purpose, such as laser scanning, photogrammetry and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles including stereo vision, infrared light or laser beams. In this study we investigate the use of the newly designed Stereolabs ZED depth camera, based on passive stereo depth calculation, mounted on an Unmanned Aerial Vehicle with an ad-hoc setup specially designed for outdoor scene applications. Towards this direction, the results of its depth calculations and scene reconstruction generated by Simultaneous Localization and Mapping (SLAM) algorithms are compared and evaluated, based on qualitative and quantitative criteria, with respect to the ones derived by a typical Structure from Motion (SfM) and Multiple View Stereo (MVS) pipeline for a challenging cultural heritage application.
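
    For reference, passive stereo depth follows directly from the disparity between the two views, Z = f·B/d. The focal length and baseline values below are assumed, ZED-like numbers rather than figures from the paper.

      # f is the focal length in pixels, B the stereo baseline in meters,
      # d the measured disparity in pixels.
      f_px, baseline_m = 700.0, 0.12    # assumed, ZED-like values
      disparity_px = 8.4
      depth_m = f_px * baseline_m / disparity_px
      print(round(depth_m, 2))          # ~10 m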

  7. Speckle Camera Imaging of the Planet Pluto

    NASA Astrophysics Data System (ADS)

    Howell, Steve B.; Horch, Elliott P.; Everett, Mark E.; Ciardi, David R.

    2012-10-01

    We have obtained optical wavelength (692 nm and 880 nm) speckle imaging of the planet Pluto and its largest moon Charon. Using our DSSI speckle camera attached to the Gemini North 8 m telescope, we collected high resolution imaging with an angular resolution of ∼20 mas, a value at the Gemini-N telescope diffraction limit. We have produced for this binary system the first speckle reconstructed images, from which we can measure not only the orbital separation and position angle for Charon, but also the diameters of the two bodies. Our measurements of these parameters agree, within the uncertainties, with the current best values for Pluto and Charon. The Gemini-N speckle observations of Pluto are presented to illustrate the capabilities of our instrument and the robust production of high accuracy, high spatial resolution reconstructed images. We hope our results will suggest additional applications of high resolution speckle imaging for other objects within our solar system and beyond. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  8. Simulating images captured by superposition lens cameras

    NASA Astrophysics Data System (ADS)

    Thangarajan, Ashok Samraj; Kakarala, Ramakrishna

    2011-03-01

    As the demand for reduction in the thickness of cameras rises, so too does the interest in thinner lens designs. One such radical approach toward developing a thin lens is obtained from nature's superposition principle as used in the eyes of many insects. But generally the images obtained from those lenses are fuzzy, and require reconstruction algorithms to complete the imaging process. A hurdle to developing such algorithms is that the existing literature does not provide realistic test images, aside from using commercial ray-tracing software which is costly. A solution for that problem is presented in this paper. Here a Gabor Super Lens (GSL), which is based on the superposition principle, is simulated using the public-domain ray-tracing software POV-Ray. The image obtained is of a grating surface as viewed through an actual GSL, which can be used to test reconstruction algorithms. The large computational time in rendering such images requires further optimization, and methods to do so are discussed.

  9. Cylindrical holographic radar camera

    NASA Astrophysics Data System (ADS)

    McMakin, Douglas L.; Sheen, David M.; Hall, Thomas E.; Severtsen, Ronald H.

    1998-12-01

    A novel personnel surveillance system has been developed to rapidly obtain 360 degree, full-body images of humans for the detection and identification of concealed threats. Detectable threats include weapons fabricated with metal, plastic, and ceramic, as well as explosive solids and liquids. This new system uses a cylindrical mechanical scanner to move a seven-foot, 384 element, Ka band (26 - 30 GHz) array circumferentially around a person in four to seven seconds. Low power millimeter-waves, which are nonionizing and not harmful to humans, are employed because they readily penetrate clothing barriers and reflect from concealed threats. The reflected waves provide information that is reconstructed into 3-D cylindrical holographic images with high-speed, digital signal processing (DSP) boards. This system is capable of displaying in an animation format eight, sixteen, thirty-two or sixty-four image frames at various aspect angles around the person under surveillance. This new prototype surveillance system is operational and is presently under laboratory testing and evaluation.

  10. SMART-1/AMIE Camera System

    NASA Astrophysics Data System (ADS)

    Josset, J.-L.; Beauvivre, S.; Cerroni, P.; de Sanctis, M. C.; Pinet, P.; Chevrel, S.; Langevin, Y.; Barucci, M. A.; Plancke, P.; Koschny, D.; Almeida, M.; Sodnik, Z.; Mancuso, S.; Hofmann, B. A.; Muinonen, K.; Shevchenko, V.; Shkuratov, Y.; Ehrenfreund, P.; Foing, B. H.

    2006-03-01

    The Advanced Moon micro-Imager Experiment (AMIE), on board ESA SMART-1, the first European mission to the Moon (launched on 27th September 2003), is a camera system with scientific, technical and public outreach oriented objectives.

  11. An Inexpensive Digital Infrared Camera

    ERIC Educational Resources Information Center

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  12. Distributed imaging using an array of compressive cameras

    NASA Astrophysics Data System (ADS)

    Ke, Jun; Shankar, Premchandra; Neifeld, Mark A.

    2009-01-01

    We describe a distributed computational imaging system that employs an array of feature specific sensors, also known as compressive imagers, to directly measure the linear projections of an object. Two different schemes for implementing these non-imaging sensors are discussed. We consider the task of object reconstruction and quantify the fidelity of reconstruction using the root mean squared error (RMSE) metric. We also study the lifetime of such a distributed sensor network. The sources of energy consumption in a distributed feature specific imaging (DFSI) system are discussed and compared with those in a distributed conventional imaging (DCI) system. A DFSI system consisting of 20 imagers collecting DCT, Hadamard, or PCA features has a lifetime of 4.8× that of the DCI system when the noise level is 20% and the reconstruction RMSE requirement is 6%. To validate the simulation results we emulate a distributed computational imaging system using an experimental setup consisting of an array of conventional cameras.
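
    The measurement model and the RMSE metric are easy to emulate. The snippet below substitutes random projections for the DCT/Hadamard/PCA features and a minimum-norm linear estimate for the paper's reconstruction, so it is only a schematic of the evaluation pipeline.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 256, 64                        # 16x16 object, 64 feature measurements
      x = rng.random(n)                     # vectorized object
      Phi = rng.standard_normal((m, n))     # feature (projection) matrix
      y = Phi @ x + rng.normal(0, 0.01, m)  # noisy feature measurements

      # Minimum-norm linear reconstruction via the pseudo-inverse
      x_hat = np.linalg.pinv(Phi) @ y
      rmse = np.sqrt(np.mean((x - x_hat) ** 2))
      print(f"RMSE: {rmse:.3f}")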

  13. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels.
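
    The last step, recovering one camera's extrinsics from a single image of the reconstructed targets, is in essence a PnP problem. A hedged sketch with OpenCV follows; the target coordinates, intrinsics, and pose are synthetic stand-ins, not values from the paper.

      import cv2
      import numpy as np

      # 3D coordinates of encoded targets, as reconstructed beforehand with
      # the hand-held camera (illustrative values).
      obj_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0.5, 0.5, 0.3], [0.2, 0.8, 0.5]], dtype=np.float64)
      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      dist = np.zeros(5)

      # Synthesize detections from a known pose so the example is consistent.
      rvec_true = np.array([[0.1], [-0.2], [0.05]])
      tvec_true = np.array([[-0.4], [0.1], [3.0]])
      img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, dist)

      # One image of the targets suffices to recover this camera's extrinsics.
      ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
      print(ok, rvec.ravel(), tvec.ravel())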

  14. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

The design, development, and testing of a charge injection device (CID) camera using a 244×248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.

  15. The future of consumer cameras

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

In the last two decades, multimedia and in particular imaging devices (camcorders, tablets, mobile phones, etc.) have spread dramatically. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology is given, providing also some details about the recent past (from digital still cameras up to today) and forthcoming key issues.

  16. Science, conservation, and camera traps

    USGS Publications Warehouse

Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  17. The virtual gamma camera room.

    PubMed

    Penrose, J M; Trowbridge, E A; Tindale, W B

    1996-05-01

The installation of a gamma camera is time-consuming and costly and, once installed, the camera position is unlikely to be altered during its working life. Poor choice of camera position therefore has long-term consequences. Additional equipment such as collimators and carts, the operator's workstation and wall-mounted display monitors must also be situated to maximize access and ease of use. The layout of a gamma camera room can be optimized prior to installation by creating a virtual environment. Superscape VRT software running on an upgraded 486 PC was used to create a 'virtual camera room'. The simulation included an operator's viewpoint and a controlled tour of the room. Equipment could be repositioned as required, allowing potential problems to be identified at the design stage. Access for bed-ridden patients, operator ergonomics, operator and patient visibility were addressed. The display can also be used for patient education. Creation of a virtual environment is a valuable tool which allows different camera systems to be compared interactively in terms of dimensions, extent of movement and use of a defined space. Such a system also has applications in radiopharmacy design and simulation.

  18. Streak camera dynamic range optimization

    SciTech Connect

    Wiedwald, J.D.; Lerche, R.A.

    1987-09-01

    The LLNL optical streak camera is used by the Laser Fusion Program in a wide range of applications. Many of these applications require a large recorded dynamic range. Recent work has focused on maximizing the dynamic range of the streak camera recording system. For our streak cameras, image intensifier saturation limits the upper end of the dynamic range. We have developed procedures to set the image intensifier gain such that the system dynamic range is maximized. Specifically, the gain is set such that a single streak tube photoelectron is recorded with an exposure of about five times the recording system noise. This ensures detection of single photoelectrons, while not consuming intensifier or recording system dynamic range through excessive intensifier gain. The optimum intensifier gain has been determined for two types of film and for a lens-coupled CCD camera. We have determined that by recording the streak camera image with a CCD camera, the system is shot-noise limited up to the onset of image intensifier nonlinearity. When recording on film, the film determines the noise at high exposure levels. There is discussion of the effects of slit width and image intensifier saturation on dynamic range. 8 refs.

  19. The camera convergence problem revisited

    NASA Astrophysics Data System (ADS)

    Allison, Robert S.

    2004-05-01

    Convergence of the real or virtual stereoscopic cameras is an important operation in stereoscopic display systems. For example, convergence can shift the range of portrayed depth to improve visual comfort; can adjust the disparity of targets to bring them nearer to the screen and reduce accommodation-vergence conflict; or can bring objects of interest into the binocular field-of-view. Although camera convergence is acknowledged as a useful function, there has been considerable debate over the transformation required. It is well known that rotational camera convergence or 'toe-in' distorts the images in the two cameras producing patterns of horizontal and vertical disparities that can cause problems with fusion of the stereoscopic imagery. Behaviorally, similar retinal vertical disparity patterns are known to correlate with viewing distance and strongly affect perception of stereoscopic shape and depth. There has been little analysis of the implications of recent findings on vertical disparity processing for the design of stereoscopic camera and display systems. We ask how such distortions caused by camera convergence affect the ability to fuse and perceive stereoscopic images.
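
    The vertical disparity introduced by toe-in is easy to reproduce numerically. The sketch below projects one off-axis point into two converged pinhole cameras (all values illustrative, sign conventions simplified) and shows a vertical disparity of several pixels.

      import numpy as np

      def project(X, R, c, f=800.0):
          """Pinhole projection of world point X into a camera with rotation R,
          center c, and focal length f (pixels)."""
          Xc = R @ (X - c)
          return f * Xc[:2] / Xc[2]

      def Ry(a):                        # rotation about the vertical (y) axis
          return np.array([[np.cos(a), 0, np.sin(a)],
                           [0, 1, 0],
                           [-np.sin(a), 0, np.cos(a)]])

      b, conv = 0.065, np.deg2rad(3.0)  # 65 mm baseline, 3 deg toe-in per camera
      X = np.array([0.4, 0.3, 1.0])     # off-axis point, 1 m away
      uL = project(X, Ry(+conv), np.array([-b / 2, 0, 0]))
      uR = project(X, Ry(-conv), np.array([+b / 2, 0, 0]))
      print("vertical disparity (px):", uL[1] - uR[1])   # ~10 px here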

  20. The determination of the intrinsic and extrinsic parameters of virtual camera based on OpenGL

    NASA Astrophysics Data System (ADS)

    Li, Suqi; Zhang, Guangjun; Wei, Zhenzhong

    2006-11-01

OpenGL is the international standard for 3D graphics. 3D image generation with OpenGL is similar to shooting with a camera. This paper focuses on the application of OpenGL to computer vision, treating the OpenGL 3D image as a virtual camera image. Firstly, the imaging mechanism of OpenGL is analyzed in view of the perspective projection transformation of a computer vision camera. Then, the relationship between the intrinsic and extrinsic parameters of a camera and the function parameters in OpenGL is analyzed, and the transformation formulas are deduced. In this way, a computer vision simulation is realized. The comparison between actual CCD camera images and virtual camera images (with the parameters of the actual camera matching the virtual camera's), together with the experimental results of a stereo vision 3D reconstruction simulation, verifies the effectiveness of the method by which the intrinsic and extrinsic parameters of the virtual camera based on OpenGL are determined.
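
    One commonly used form of this correspondence builds an OpenGL-style projection matrix directly from pinhole intrinsics. The sketch below follows one sign convention among several in use and is not taken from the paper.

      import numpy as np

      def gl_projection(fx, fy, cx, cy, w, h, near, far):
          """Map pinhole intrinsics (fx, fy, cx, cy) and image size (w, h)
          to an OpenGL-style projection matrix. Assumes the camera looks
          down -z with image y flipped into NDC; exact signs depend on the
          chosen image-coordinate convention."""
          return np.array([
              [2 * fx / w, 0,          1 - 2 * cx / w,               0],
              [0,          2 * fy / h, 2 * cy / h - 1,               0],
              [0,          0,          -(far + near) / (far - near), -2 * far * near / (far - near)],
              [0,          0,          -1,                           0],
          ])

      P = gl_projection(fx=1000, fy=1000, cx=512, cy=384, w=1024, h=768,
                        near=0.1, far=100.0)
      print(P)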

  1. Use of spectral characteristics of DSLR cameras with Bayer filter sensors

    NASA Astrophysics Data System (ADS)

    Cheremkhin, P. A.; Lesnichii, V. V.; Petrov, N. V.

    2014-09-01

The expensive photosensors of scientific cameras are commonly used in a wide variety of research fields. However, the photosensors implemented in DSLR cameras can be an appropriate substitute, decreasing the price/quality ratio and even offering additional features. In this article, different scientific applications of DSLR camera photosensors with Bayer filters are discussed, as well as calibration methods for their spectral characteristics. The approach, based on determining the latter and exploiting its features, is shown to increase the SNR of color-reconstructed images in digital holography.
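
    Work with Bayer-filter sensors typically starts by separating the raw mosaic into its color planes before any per-channel spectral calibration. A minimal sketch, assuming an RGGB pattern:

      import numpy as np

      def split_bayer_rggb(raw):
          """Split a raw mosaic (RGGB pattern assumed) into its four color
          planes, the starting point for per-channel calibration."""
          return {"R":  raw[0::2, 0::2], "G1": raw[0::2, 1::2],
                  "G2": raw[1::2, 0::2], "B":  raw[1::2, 1::2]}

      raw = np.random.randint(0, 4096, (8, 8))   # synthetic 12-bit mosaic
      planes = split_bayer_rggb(raw)
      print({k: v.shape for k, v in planes.items()})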

  2. An industrial light-field camera applied for 3D velocity measurements in a slot jet

    NASA Astrophysics Data System (ADS)

    Seredkin, A. V.; Shestakov, M. V.; Tokarev, M. P.

    2016-10-01

Modern light-field cameras have found application in different areas like photography, surveillance and quality control in industry. A number of studies have reported relatively low spatial resolution of the 3D profiles of registered objects along the optical axis of the camera. This article describes a method for 3D velocity measurements in fluid flows using an industrial light-field camera and an alternative reconstruction algorithm based on a statistical approach. This method is more accurate than triangulation when applied to tracking small registered objects, such as tracer particles, in images. The technique was used to measure 3D velocity fields in a turbulent slot jet.

  3. Measurements of the performance of the light mixing chambers in the mixel camera.

    PubMed

    Fridman, Andrei; Høye, Gudrun

    2015-05-18

    Spectral data acquired with traditional push-broom hyperspectral cameras may be significantly distorted due to spatial misregistration such as keystone. The mixel camera is a new type of push-broom hyperspectral camera, where an image recorded with arbitrary (even large) keystone is reconstructed to a nearly keystone-free image. The key component of the mixel camera is an array of light mixing chambers in the slit plane, and the precision of the image reconstruction depends on the light mixing properties of these chambers. In this work we describe how these properties were measured in a mixel camera prototype. We also investigate the potential performance of the mixel camera in terms of spatial co-registration, based on the measured response of the mixing chambers to a point source. The results suggest that, with the current chambers, a perfectly characterized mixel camera should have residual spatial misregistration that is equivalent to 0.02-0.03 pixels keystone. This compares favorably to high resolution instruments where keystone is corrected in hardware or by resampling.

  4. The MC and LFC cameras. [metric camera (MC); large format camera (LFC)

    NASA Technical Reports Server (NTRS)

    Norton, Clarice L.; Schroeder, Manfried; Mollberg, Bernard

    1986-01-01

    The characteristics of the shuttle-borne Large Format Camera are listed. The LFC focal plane format was 23 by 46 cm, double the usual size, thereby acquiring approximately double the ground area. Forward motion compensation was employed. With the stable platform (shuttle) it was possible to use the slow exposure, high resolution, Kodak aerial films; 3414 and 3412 black and white, SO-242 color, and SO-131 aerochrome infrared. The camera was designed to maintain stability during varying temperature extremes of space.

  5. Project Reconstruct.

    ERIC Educational Resources Information Center

    Helisek, Harriet; Pratt, Donald

    1994-01-01

    Presents a project in which students monitor their use of trash, input and analyze information via a database and computerized graphs, and "reconstruct" extinct or endangered animals from recyclable materials. The activity was done with second-grade students over a period of three to four weeks. (PR)

  6. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and were independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values in the Student test (t-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding

  7. ACL reconstruction - discharge

    MedlinePlus

    Anterior cruciate ligament reconstruction - discharge; ACL reconstruction - discharge ... had surgery to reconstruct your anterior cruciate ligament (ACL). The surgeon drilled holes in the bones of ...

  8. A testbed for wide-field, high-resolution, gigapixel-class cameras

    NASA Astrophysics Data System (ADS)

    Kittle, David S.; Marks, Daniel L.; Son, Hui S.; Kim, Jungsang; Brady, David J.

    2013-05-01

    The high resolution and wide field of view (FOV) of the AWARE (Advanced Wide FOV Architectures for Image Reconstruction and Exploitation) gigapixel class cameras present new challenges in calibration, mechanical testing, and optical performance evaluation. The AWARE system integrates an array of micro-cameras in a multiscale design to achieve gigapixel sampling at video rates. Alignment and optical testing of the micro-cameras is vital in compositing engines, which require pixel-level accurate mappings over the entire array of cameras. A testbed has been developed to automatically calibrate and measure the optical performance of the entire camera array. This testbed utilizes translation and rotation stages to project a ray into any micro-camera of the AWARE system. A spatial light modulator is projected through a telescope to form an arbitrary object space pattern at infinity. This collimated source is then reflected by an elevation stage mirror for pointing through the aperture of the objective into the micro-optics and eventually the detector of the micro-camera. Different targets can be projected with the spatial light modulator for measuring the modulation transfer function (MTF) of the system, fiducials in the overlap regions for registration and compositing, distortion mapping, illumination profiles, thermal stability, and focus calibration. The mathematics of the testbed mechanics are derived for finding the positions of the stages to achieve a particular incident angle into the camera, along with calibration steps for alignment of the camera and testbed coordinate axes. Measurement results for the AWARE-2 gigapixel camera are presented for MTF, focus calibration, illumination profile, fiducial mapping across the micro-camera for registration and distortion correction, thermal stability, and alignment of the camera on the testbed.

  9. A testbed for wide-field, high-resolution, gigapixel-class cameras.

    PubMed

    Kittle, David S; Marks, Daniel L; Son, Hui S; Kim, Jungsang; Brady, David J

    2013-05-01

    The high resolution and wide field of view (FOV) of the AWARE (Advanced Wide FOV Architectures for Image Reconstruction and Exploitation) gigapixel class cameras present new challenges in calibration, mechanical testing, and optical performance evaluation. The AWARE system integrates an array of micro-cameras in a multiscale design to achieve gigapixel sampling at video rates. Alignment and optical testing of the micro-cameras is vital in compositing engines, which require pixel-level accurate mappings over the entire array of cameras. A testbed has been developed to automatically calibrate and measure the optical performance of the entire camera array. This testbed utilizes translation and rotation stages to project a ray into any micro-camera of the AWARE system. A spatial light modulator is projected through a telescope to form an arbitrary object space pattern at infinity. This collimated source is then reflected by an elevation stage mirror for pointing through the aperture of the objective into the micro-optics and eventually the detector of the micro-camera. Different targets can be projected with the spatial light modulator for measuring the modulation transfer function (MTF) of the system, fiducials in the overlap regions for registration and compositing, distortion mapping, illumination profiles, thermal stability, and focus calibration. The mathematics of the testbed mechanics are derived for finding the positions of the stages to achieve a particular incident angle into the camera, along with calibration steps for alignment of the camera and testbed coordinate axes. Measurement results for the AWARE-2 gigapixel camera are presented for MTF, focus calibration, illumination profile, fiducial mapping across the micro-camera for registration and distortion correction, thermal stability, and alignment of the camera on the testbed.

  10. Trajectory Generation and Path Planning for Autonomous Aerobots

    NASA Technical Reports Server (NTRS)

    Sharma, Shivanjli; Kulczycki, Eric A.; Elfes, Alberto

    2007-01-01

    This paper presents global path planning algorithms for the Titan aerobot based on user defined waypoints in 2D and 3D space. The algorithms were implemented using information obtained through a planner user interface. The trajectory planning algorithms were designed to accurately represent the aerobot's characteristics, such as minimum turning radius. Additionally, trajectory planning techniques were implemented to allow for surveying of a planar area based solely on camera fields of view, airship altitude, and the location of the planar area's perimeter. The developed paths allow for planar navigation and three-dimensional path planning. These calculated trajectories are optimized to produce the shortest possible path while still remaining within realistic bounds of airship dynamics.
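
    The survey-coverage calculation the paper alludes to reduces to simple geometry: the camera footprint at a given altitude fixes the flight-line spacing. A minimal sketch with assumed values:

      import math

      def survey_line_spacing(altitude_m, fov_deg, side_overlap=0.3):
          """Flight-line spacing for a planar survey from camera FOV and
          altitude. Ground swath = 2 * h * tan(FOV / 2); adjacent lines
          overlap by the given fraction."""
          swath = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
          return swath * (1 - side_overlap)

      print(survey_line_spacing(altitude_m=500, fov_deg=60, side_overlap=0.3))  # ~404 m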

  11. Pose Estimation and Mapping Using Catadioptric Cameras with Spherical Mirrors

    NASA Astrophysics Data System (ADS)

    Ilizirov, Grigory; Filin, Sagi

    2016-06-01

    Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central perspective cameras because of light reflection from the mirror surface which alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating for these features, we present in this paper a novel modeling for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system's parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field-of-view offers an appealing means to supplement 3-D reconstruction and modeling.

  12. A Path to Discovery

    ERIC Educational Resources Information Center

    Stegemoller, William; Stegemoller, Rebecca

    2004-01-01

    The path taken and the turns made as a turtle traces a polygon are examined to discover an important theorem in geometry. A unique tool, the Angle Adder, is implemented in the investigation. (Contains 9 figures.)
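
    The theorem in question is the total-turning property: the turns made in tracing any convex polygon sum to 360 degrees (the sum of the exterior angles). For regular polygons this is immediate, as the short sketch below illustrates:

      # Each turn of a regular n-gon is its exterior angle, 360/n degrees,
      # so the turtle's total turning is always 360 degrees.
      for n in range(3, 8):
          exterior = 360 / n
          print(n, "sides: each turn =", exterior, "-> total =", n * exterior)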

  13. Tortuous path chemical preconcentrator

    DOEpatents

    Manginell, Ronald P.; Lewis, Patrick R.; Adkins, Douglas R.; Wheeler, David R.; Simonson, Robert J.

    2010-09-21

A non-planar, tortuous path chemical preconcentrator has a high internal surface area with a heatable sorptive coating that can be used to selectively collect and concentrate one or more chemical species of interest from a fluid stream, which can then be rapidly released as a concentrated plug into an analytical or microanalytical chain for separation and detection. The non-planar chemical preconcentrator comprises a sorptive support structure having a tortuous flow path. The tortuosity provides repeated twists, turns, and bends to the flow, thereby increasing the interfacial contact between the sample fluid stream and the sorptive material. The tortuous path also provides more opportunities for desorption and readsorption of volatile species. Further, the thermal efficiency of the tortuous path chemical preconcentrator is comparable or superior to that of the prior planar chemical preconcentrator. Finally, the tortuosity can be varied in different directions to optimize flow rates during the adsorption and desorption phases of operation of the preconcentrator.

  14. Strategic options towards an affordable high-performance infrared camera

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

The promise of infrared (IR) imaging attaining the low cost akin to CMOS sensors' success has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel InGaAs uncooled system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption, not only by demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for adoption in consumer-facing industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.

  15. The Clementine longwave infrared camera

    SciTech Connect

    Priest, R.E.; Lewis, I.T.; Sewall, N.R.; Park, H.S.; Shannon, M.J.; Ledebuhr, A.G.; Pleasance, L.D.; Massie, M.A.; Metschuleit, K.

    1995-04-01

The Clementine mission provided the first ever complete, systematic surface mapping of the moon from the ultraviolet to the near-infrared regions. More than 1.7 million images of the moon, earth and space were returned from this mission. The longwave-infrared (LWIR) camera supplemented the UV/Visible and near-infrared mapping cameras, providing limited strip coverage of the moon and giving insight into the thermal properties of the soils. This camera provided ~100 m spatial resolution at 400 km periselene, and a 7 km across-track swath. This 2.1 kg camera, using a 128 x 128 Mercury-Cadmium-Telluride (MCT) FPA, viewed thermal emission of the lunar surface and lunar horizon in the 8.0 to 9.5 μm wavelength region. A description of this light-weight, low power LWIR camera along with a summary of lessons learned is presented. Design goals and preliminary on-orbit performance estimates are addressed in terms of meeting the mission's primary objective of flight qualifying the sensors for future Department of Defense flights.

  16. Traditional gamma cameras are preferred.

    PubMed

    DePuey, E Gordon

    2016-08-01

    Although the new solid-state dedicated cardiac cameras provide excellent spatial and energy resolution and allow for markedly reduced SPECT acquisition times and/or injected radiopharmaceutical activity, they have some distinct disadvantages compared to traditional sodium iodide SPECT cameras. They are expensive. Attenuation correction is not available. Cardio-focused collimation, advantageous to increase depth-dependent resolution and myocardial count density, accentuates diaphragmatic attenuation and scatter from subdiaphragmatic structures. Although supplemental prone imaging is therefore routinely advised, many patients cannot tolerate it. Moreover, very large patients cannot be accommodated in the solid-state camera gantries. Since data are acquired simultaneously with an arc of solid-state detectors around the chest, no temporally dependent "rotating" projection images are obtained. Therefore, patient motion can be neither detected nor corrected. In contrast, traditional sodium iodide SPECT cameras provide rotating projection images to allow technologists and physicians to detect and correct patient motion and to accurately detect the position of soft tissue attenuators and to anticipate associated artifacts. Very large patients are easily accommodated. Low-dose x-ray attenuation correction is widely available. Also, relatively inexpensive low-count density software is provided by many vendors, allowing shorter SPECT acquisition times and reduced injected activity approaching that achievable with solid-state cameras.

94. ARA-IV. Aerial view of ML-1 area. Camera facing north. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

94. ARA-IV. Aerial view of ML-1 area. Camera facing north. Reactor test building is in center of view. Walking path and utility lines connect test building. Berm lies between. Road curves to left and continues to GCRE complex. INEEL photo no. 79-4707. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID

  18. KALI Camera: mid-infrared camera for the Keck Interferometer Nuller

    NASA Astrophysics Data System (ADS)

    Creech-Eakman, Michelle J.; Moore, James D.; Palmer, Dean L.; Serabyn, Eugene

    2003-03-01

    We present a brief overview of the KALI Camera, the mid-infrared camera for the Keck Interferometer Nulling Project, built at the Jet Propulsion Laboratory. The instrument utilizes mainly transmissive optics in four identical beam paths to spatially and spectrally filter, polarize, spectrally disperse and image the incoming 7-14 micron light from the four outputs of the Keck Nulling Beam Combiner onto a custom Boeing/DRS High Flux 128 X 128 BIB array. The electronics use a combination of JPL and Wallace Instruments boards to interface the array readout with the existing real-time control system of the Keck Interferometer. The cryogenic dewar, built by IR Laboratories, uses liquid nitrogen and liquid helium to cool the optics and the array, and includes six externally motorized mechanisms for aperture and pinhole control, focus, and optical component selection. The instrument will be assembled and tested through the summer of 2002, and is planned to be deployed as part of the Keck Interferometer Nulling experiment in 2003.

  19. System Architecture of the Dark Energy Survey Camera Readout Electronics

    SciTech Connect

Shaw, Theresa; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Chappa, Steve; de Vicente, Juan; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; Martinez, Gustavo; Moore, Todd

    2010-05-27

The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K×4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2K×2K CCDs for guiding, alignment and focus. This paper describes design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

20. [Imaging diagnostics of the retina with fundus cameras].

    PubMed

    Koschmieder, Ingo; Müller, Lothar

    2007-01-01

Imaging of the retina of the human eye is an essential aid for medical diagnosis. The technical realization of photographs of the ocular fundus is not trivial because of the optical properties of the eye. The established devices for obtaining such images are so-called fundus cameras with digital documentation capabilities. Newer procedures work with infrared illumination and do not require pupil-dilating measures for the patient. The quality of the diagnostic findings depends fundamentally on the layout of the optical design of the fundus camera on the one hand; on the other hand, there are limitations caused by the eye itself, which is always part of the beam path. Both effects define the attainable results. Special applications deal with stereoscopic imaging of the retina or with spectral reflection characteristics.

  1. a Method for Self-Calibration in Satellite with High Precision of Space Linear Array Camera

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Qian, Fangming; Miao, Yuzhe; Wang, Rongjian

    2016-06-01

At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually processed with data from a ground calibration field after the images are captured. The entire process is complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field; thus, regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite using the optical auto-collimation principle. A collimating light source and small matrix-array CCD devices are installed inside the load system of the satellite; these use the same light path as the linear array camera. We can extract the location changes of the cross marks on the matrix-array CCD to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained using this method. On one hand, the camera's pattern of change can be tracked accurately and the camera's attitude can be adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be realized quickly, which improves the efficiency and reliability of photogrammetric processing.
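
    The optical auto-collimation principle reduces, in its simplest form, to the relation x = f·tan(θ) between the collimator angle and the cross-mark position on the matrix CCD, so tracking the mark tracks the focal length. A simplified sketch with assumed values:

      import numpy as np

      # A collimated beam entering at angle theta crosses the focal plane at
      # x = f * tan(theta); a shift of the cross mark therefore reveals a
      # change in focal length. All values below are illustrative.
      theta = np.deg2rad(2.0)            # collimator angle (assumed)
      x_mark_mm = 10.486                 # measured mark position on the CCD
      f_mm = x_mark_mm / np.tan(theta)
      print("focal length estimate (mm):", round(f_mm, 1))   # ~300 mm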

  2. Toward a miniaturized fundus camera.

    PubMed

    Gliss, Christine; Parel, Jean-Marie; Flynn, John T; Pratisto, Hans; Niederer, Peter

    2004-01-01

    Retinopathy of prematurity (ROP) describes a pathological development of the retina in prematurely born children. In order to prevent severe permanent damage to the eye and enable timely treatment, the fundus of the eye in such children has to be examined according to established procedures. For these examinations, our miniaturized fundus camera is intended to allow the acquisition of wide-angle digital pictures of the fundus for on-line or off-line diagnosis and documentation. We designed two prototypes of a miniaturized fundus camera, one with graded refractive index (GRIN)-based optics, the other with conventional optics. Two different modes of illumination were compared: transscleral and transpupillary. In both systems, the size and weight of the camera were minimized. The prototypes were tested on young rabbits. The experiments led to the conclusion that the combination of conventional optics with transpupillary illumination yields the best results in terms of overall image quality.

  3. Cameras for semiconductor process control

    NASA Technical Reports Server (NTRS)

    Porter, W. A.; Parker, D. L.

    1977-01-01

The application of X-ray topography to semiconductor process control is described, considering the novel features of the high-speed camera and the difficulties associated with this technique. The most significant results on the effects of material defects on device performance are presented, including results obtained using wafers processed entirely within this institute. Defects were identified using the X-ray camera and correlations made with probe data. Temperature-dependent effects of material defects are also included. Recent applications and improvements of X-ray topographs of silicon-on-sapphire and gallium arsenide are presented, with a description of a real-time TV system prototype and of the most recent vacuum chuck design. Our promotion of the use of the camera by various semiconductor manufacturers is also discussed.

  4. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor Active Pixel Sensor (CMOS APS), establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS active-pixel digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  5. Dark Energy Camera for Blanco

    SciTech Connect

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  6. The GISMO-2 Bolometer Camera

    NASA Technical Reports Server (NTRS)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  7. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

Acquisition data and treatments for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on manufacturer computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes (1) for gamma cameras: a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); and (2) for PET systems: three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e., spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (thanks to the Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox which is a free ImageJ plugin and will soon be downloadable from the Internet. In addition, this program offers the possibility to save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users in case of abnormal results. The architecture of the program allows users to easily add any other specific quality control program. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters, is free, run on
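
    The PSF-to-MTF computation mentioned above is, at its core, a normalized Fourier transform of the measured point-spread profile. A generic sketch with a synthetic Gaussian profile (not the plugin's code):

      import numpy as np

      def mtf_from_psf(psf_line):
          """1D MTF as the normalized magnitude of the Fourier transform of
          a line profile extracted from a point-source acquisition."""
          otf = np.fft.rfft(psf_line / psf_line.sum())
          return np.abs(otf) / np.abs(otf[0])

      # Gaussian PSF profile with FWHM ~ 8 mm, sampled at 1 mm/pixel (illustrative)
      x = np.arange(-32, 33)
      sigma = 8 / 2.355
      psf = np.exp(-x**2 / (2 * sigma**2))
      print(mtf_from_psf(psf)[:6])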

8. Characterization of a PET Camera Optimized for Prostate Imaging

    SciTech Connect

Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi, Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, Ronald H.; Derenzo, Stephen E.

    2005-11-11

We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated-prostate camera has the same sensitivity and resolution, less background (less randoms and lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity of a point source in the center is 946 cps/μCi. Spatial resolution is 4 mm FWHM in the central region.

  9. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral- density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
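
    Fitting the response curve from the known source brightness and the measured frame signals can be sketched as a power-law fit in log space. The data below are synthetic and the power-law model is an assumption, not the custom software's actual form:

      import numpy as np

      # Known source brightness (set by the rotating neutral-density filter)
      # and the integrated signal measured in each video frame (synthetic).
      brightness = np.linspace(0.01, 1.0, 50)
      gamma_true, gain = 0.45, 200.0
      signal = gain * brightness ** gamma_true + np.random.normal(0, 0.5, 50)

      # Fit log(signal) = log(gain) + gamma * log(brightness)
      coef = np.polyfit(np.log(brightness), np.log(np.clip(signal, 1e-6, None)), 1)
      print("gamma ~", coef[0], " gain ~", np.exp(coef[1]))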

  10. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    PubMed Central

    Kristoffersen, Miklas S.; Dueholm, Jacob V.; Gade, Rikke; Moeslund, Thomas B.

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences. PMID:26742047

  11. Reliability improvement of low-cost camera for microsatellite

    NASA Astrophysics Data System (ADS)

    Zhou, Jiankang; Chen, Xinhua; Chen, Yuheng; Zhou, Wang; Shen, Weimin

    2009-07-01

    Remote sensing is one of the most effective means for environment monitoring, resource management, national security and so on, but existing conventional satellites are too expensive for common users to afford. Microsatellites can reduce cost and optimize their image products for specific applications. A space camera is one of their important payloads. The trade-off faced in a cost-driven camera design is how to reduce cost while still achieving the required reliability. This paper introduces our path to developing a reliable, low-cost space camera. The space camera has two main parts: the optical system and the camera circuits. Commercial off-the-shelf (COTS) lenses have difficulty maintaining their imaging performance in the space environment. Our optical system adopts a catadioptric layout, so its temperature sensitivity is low. The material and structure of the camera lens can bear the vibration and shock of launch; its mechanical reliability was proven through mechanical testing. A window made of synthetic fused silica protects the lens and CCD sensor from space radiation. The completed optical system has a compact structure, a wide temperature range, a large relative aperture, and high imaging quality, and it passed the mechanical, thermal-cycling, and thermal-vacuum tests. A modular concept was developed for the space camera circuits, which comprise seven modules: a power supply unit, a microcontroller unit, a waveform generator unit, a CCD unit, a CCD signal processor unit, an LVDS unit, and a current surge restraint unit. The modular concept and the use of plastic-encapsulated microcircuits (PEMs) simplify the design, improve maintainability, and minimize size, mass, and power consumption. Destructive physical analysis (DPA), screening, and board-level burn-in were used to select the PEMs that can replace hermetically sealed microcircuits (HSMs). Derating, redundancy, thermal dissipation, software error detection and so on are adopted in the

  12. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot

    PubMed Central

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2014-01-01

    Abstract. Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. To image, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at a high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up so the algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of a robot arm, and the error from the reconstructed phantom is within 0.67 mm in average compared to the model design. PMID:26158071

  13. A Compton camera for spectroscopic imaging from 100 keV to 1 MeV

    NASA Astrophysics Data System (ADS)

    Earnhart, Jonathan Raby Dewitt

    The objective of this work is to investigate Compton camera technology for spectroscopic imaging of gamma rays in the 100 keV to 1 MeV range. An efficient, specific-purpose Monte Carlo code was developed to investigate the image formation process in Compton cameras. The code is based on a pathway sampling technique with extensive use of variance reduction techniques. The code includes detailed Compton scattering physics, including incoherent scattering functions, Doppler broadening, and multiple scattering. Experiments were performed with two different camera configurations for a scene containing a ⁷⁵Se source and a ¹³⁷Cs source. The first camera was based on a fixed silicon detector in the front plane and a CdZnTe detector mounted in the stage. The second camera configuration was based on two CdZnTe detectors. Both systems were able to reconstruct images of ⁷⁵Se, using the 265 keV line, and ¹³⁷Cs, using the 662 keV line. Only the silicon-CdZnTe camera was able to resolve the low-intensity 400 keV line of ⁷⁵Se. Neither camera was able to reconstruct the ⁷⁵Se source location using the 136 keV line. The energy resolution of the silicon-CdZnTe camera system was 4% at 662 keV. This camera reproduced the location of the ¹³⁷Cs source by event circle image reconstruction with angular resolutions of 10° for a source on the camera axis and 14° for a source 30° off axis. Typical detector pair efficiencies were measured as 3 × 10⁻¹¹ at 662 keV. The dual CdZnTe camera had an energy resolution of 3.2% at 662 keV. This camera reproduced the location of the ¹³⁷Cs source by event circle image reconstruction with angular resolutions of 8° for a source on the camera axis and 12° for a source 20° off axis. Typical detector pair efficiencies were measured as 7 × 10⁻¹¹ at 662 keV. Of the two prototype camera configurations tested, the silicon-CdZnTe configuration had superior imaging characteristics. This configuration is less sensitive to effects caused by source decay cascades and random

  14. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, synchronizing multiple cameras is required. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
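
    The regulation loop lends itself to a one-line proportional controller. The sketch below is a toy simulation of that idea; the gain, the voltage limits, and the period-versus-voltage model are assumptions for illustration, not values from the paper:

```python
def regulate_voltage(v_now, period_meas, period_ref,
                     kp=2e4, v_min=1.6, v_max=2.2):
    """One proportional step: a longer-than-target line period means
    the self-timed sensor runs slow, so raise the supply voltage
    (assumed sign of the voltage-frequency relation). kp is in V/s."""
    v_next = v_now + kp * (period_meas - period_ref)
    return min(max(v_next, v_min), v_max)

def line_period(v):
    """Toy sensor model: line period falls as supply voltage rises."""
    return 1e-4 / v  # seconds, purely illustrative

v, target = 1.8, line_period(2.0)
for _ in range(20):
    v = regulate_voltage(v, line_period(v), target)
print(f"settled at {v:.3f} V, period error {line_period(v) - target:.2e} s")
```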

  15. Extending the depth of field through unbalanced optical path difference.

    PubMed

    Chu, Kaiqin; George, Nicholas; Chi, Wanli

    2008-12-20

    We describe a simple method to extend the depth of field of a conventional camera by inserting a transparent annular ring in front of the pupil of the lens. The insertion of the ring creates an unbalanced optical path difference across the lens aperture, which partitions the pupil and leads to an extended depth of field. This system is analyzed by diffraction and random process theory. Experiments are reported that are in good agreement with the theory.

  16. Hyperspectral imaging using a color camera and its application for pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  17. Full Stokes polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2011-10-01

    Objective and background: We present a new version of Bossa Nova Technologies' passive polarization imaging camera. The previous version performed live measurement of the linear Stokes parameters (S0, S1, S2) and their derivatives. The new version presented in this paper performs live measurement of the full Stokes parameters, including the fourth parameter S3, related to the amount of circular polarization. Dedicated software was developed to provide live images of any Stokes-related parameter, such as the Degree Of Linear Polarization (DOLP), the Degree Of Circular Polarization (DOCP), and the Angle Of Polarization (AOP). Results: We first give a brief description of the camera and its technology. It is a Division Of Time polarimeter using a custom ferroelectric liquid crystal cell. A description is given of the method used to calculate the Data Reduction Matrix (DRM) linking intensity measurements to the Stokes parameters. The calibration was developed to optimize the conditioning of the DRM; it also allows very efficient post-processing of the acquired images. A complete evaluation of the precision of standard polarization parameters is described. We further present the standard features of the dedicated software developed to operate the camera, which provides live images of the Stokes vector components and the usual associated parameters. Finally, some tests already conducted are presented, including indoor laboratory and outdoor measurements. This new camera will be a useful tool for many applications such as biomedical, remote sensing, metrology, material studies, and others.
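
    The live parameters listed above follow directly from the standard Stokes definitions; here is a minimal numpy sketch using those textbook formulas (this is not the camera's proprietary processing):

```python
import numpy as np

def polarization_maps(S0, S1, S2, S3, eps=1e-12):
    """Per-pixel polarization parameters from full Stokes images."""
    dolp = np.sqrt(S1**2 + S2**2) / (S0 + eps)  # degree of linear pol.
    docp = np.abs(S3) / (S0 + eps)              # degree of circular pol.
    aop = 0.5 * np.arctan2(S2, S1)              # angle of polarization, rad
    dop = np.sqrt(S1**2 + S2**2 + S3**2) / (S0 + eps)  # total degree
    return dolp, docp, aop, dop
```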

  18. Stratoscope 2 integrating television camera

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The development, construction, testing, and delivery of an integrating television camera for use as the primary data sensor on Flight 9 of Stratoscope 2 are described. The system block diagrams are presented along with the performance data and a definition of the interface of the telescope with the power, telemetry, and communication systems.

  19. OSIRIS camera barrel optomechanical design

    NASA Astrophysics Data System (ADS)

    Farah, Alejandro; Tejada, Carlos; Gonzalez, Jesus; Cobos, Francisco J.; Sanchez, Beatriz; Fuentes, Javier; Ruiz, Elfego

    2004-09-01

    A Camera Barrel, located in the OSIRIS imager/spectrograph for the Gran Telescopio Canarias (GTC), is described in this article. The barrel design has been developed by the Institute for Astronomy of the University of Mexico (IA-UNAM), in collaboration with the Institute for Astrophysics of Canarias (IAC), Spain. The barrel is being manufactured by the Engineering Center for Industrial Development (CIDESI) at Queretaro, Mexico. The Camera Barrel includes a set of eight lenses (three doublets and two singlets), with their respective supports and cells, as well as two subsystems: the Focusing Unit, a mechanism that adjusts the relative position of the first doublet; and the Passive Displacement Unit (PDU), which uses the third doublet as a thermal compensator to maintain the camera focal length and image quality when the ambient temperature changes. This article includes a brief description of the scientific instrument; describes the design criteria related to performance justification; and summarizes the specifications related to misalignment errors and generated stresses. The Camera Barrel components are described, and analytical calculations, FEA simulations, and error budgets are also included.

  20. Measuring Distances Using Digital Cameras

    ERIC Educational Resources Information Center

    Kendal, Dave

    2007-01-01

    This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…

  1. The Camera Comes to Court.

    ERIC Educational Resources Information Center

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  2. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  3. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  4. Directing Performers for the Cameras.

    ERIC Educational Resources Information Center

    Wilson, George P., Jr.

    An excellent way for an undergraduate, novice director of television and film to pick up background experience in directing performers for cameras is by participating in nonbroadcast-film activities, such as theatre, dance, and variety acts, both as performer and as director. This document describes the varieties of activities, including creative,…

  5. Toy Cameras and Color Photographs.

    ERIC Educational Resources Information Center

    Speight, Jerry

    1979-01-01

    The technique of using toy cameras for both black-and-white and color photography in the art class is described. The author suggests that expensive equipment can limit the growth of a beginning photographer by emphasizing technique and equipment instead of in-depth experience with composition fundamentals and ideas. (KC)

  6. Reconstructing the temporal progression of HIV-1 immune response pathways

    PubMed Central

    Jain, Siddhartha; Arrais, Joel; Venkatachari, Narasimhan J.; Ayyavoo, Velpandi; Bar-Joseph, Ziv

    2016-01-01

    Motivation: Most methods for reconstructing response networks from high-throughput data generate static models which cannot distinguish between early and late response stages. Results: We present TimePath, a new method that integrates time series and static datasets to reconstruct dynamic models of host response to stimulus. TimePath uses an Integer Programming formulation to select a subset of pathways that, together, explain the observed dynamic responses. Applying TimePath to study the human response to HIV-1 led to accurate reconstruction of several known regulatory and signaling pathways and to novel mechanistic insights. We experimentally validated several of TimePath's predictions, highlighting the usefulness of temporal models. Availability and Implementation: Data, Supplementary text and the TimePath software are available from http://sb.cs.cmu.edu/timepath Contact: zivbj@cs.cmu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307624

  7. Kinect v2 and RGB Stereo Cameras Integration for Depth Map Enhancement

    NASA Astrophysics Data System (ADS)

    Ravanelli, R.; Nascetti, A.; Crespi, M.

    2016-06-01

    Today range cameras are widespread low-cost sensors based on two different principles of operation: Structured Light (SL) range cameras (Kinect v1, Structure Sensor, ...) and Time Of Flight (ToF) range cameras (Kinect v2, ...). Both types are easy-to-use 3D scanners, able to reconstruct dense point clouds at a high frame rate. However, the depth maps obtained are often noisy and not accurate enough, so it is generally essential to improve their quality. Standard RGB cameras can be a valuable solution to this issue. The aim of this paper is therefore to evaluate the feasibility of integrating these two different 3D modelling techniques, which have complementary features and are based on standard low-cost sensors. For this purpose, a 3D model of a DUPLO™ brick construction was reconstructed both with the Kinect v2 range camera and by processing one stereo pair acquired with a Canon EOS 1200D DSLR camera. The scale of the photogrammetric model was retrieved from the coordinates measured by the Kinect v2. The preliminary results are encouraging and show that the foreseen integration could lead to a higher metric accuracy and a greater level of completeness than either technique used separately.

  8. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  9. Slant path range gated imaging of static and moving targets

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Elmqvist, Magnus; Karlsson, Kjell; Gustafsson, Ove; Chevalier, Tomas

    2012-06-01

    This paper reports experiments and analysis of slant path imaging using 1.5 μm and 0.8 μm gated imaging. The investigation is a follow-up on the measurements reported last year at the SPIE laser radar conference in Orlando. The sensor, a SWIR camera, collected both passive and active images along a 2 km long path over an airfield. The sensor was elevated by a lift in steps from 1.6 to 13.5 meters. Targets were resolution charts as well as human targets. The human target was holding various items and performing certain tasks, some of high relevance in defence and security. One of the main purposes of this investigation was to compare the recognition of these human targets and their activities with the resolution information obtained from conventional resolution charts. Data collection of human targets was also performed from our rooftop laboratory at about 13 m above ground. The turbulence was measured along the path with anemometers and scintillometers. The camera collected both passive and active images in the SWIR region. We also included the Obzerv camera working at 0.8 μm in some tests. The paper presents images for both passive and active modes obtained at different elevations and discusses the results from both technical and system perspectives.

  10. Real object-based integral imaging system using a depth camera and a polygon model

    NASA Astrophysics Data System (ADS)

    Jeong, Ji-Seong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Lim, Byung-Muk; Jang, Ho-Wook; Kim, Nam; Yoo, Kwan-Hee

    2017-01-01

    An integral imaging system using a polygon model of a real object is proposed. After the depth and color data of the real object are acquired by a depth camera, the polygon model's mesh is converted from the initially reconstructed point cloud model. The elemental image array is generated from the polygon model and directly reconstructed. The polygon model eliminates the failed picking areas between the points of a point cloud model, so the quality of the reconstructed 3-D image is significantly improved. The theory is verified experimentally, and higher-quality images are obtained.

  11. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  12. Performance simulation model for a new MWIR camera for missile plume detection

    NASA Astrophysics Data System (ADS)

    Yoon, Jeeyeon; Ryu, Dongok; Kim, Sangmin; Seong, Sehyun; Kim, Jieun; Kim, Sug-Whan; Yoon, Woongsup

    2013-10-01

    We report realistic performance simulation results for a new MWIR camera designed for early detection of missile plumes at distances of a few hundred kilometers. The camera design uses a number of refractive optical elements and an IR detector. Both the imaging and the radiometric performance of the camera are investigated using large-scale ray tracing that includes target and background scene models. Missile plume radiance was calculated with a CFD-type radiative transfer algorithm and used as the light source for the ray-tracing computation. The atmospheric background was estimated using MODTRAN, accounting for path thermal radiance, single and multiple scattered radiance, and transmittance. The ray-tracing simulation results demonstrate that the camera would satisfy the imaging and radiometric performance requirements in field operation in the target MWIR band.

  13. Mobile transporter path planning

    NASA Technical Reports Server (NTRS)

    Baffes, Paul; Wang, Lui

    1990-01-01

    The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and an experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA is also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
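
    As a sketch of the flavor of GA described here, the following permutation-coded planner uses tournament selection (one of the techniques the abstract names) together with an order crossover and swap mutation over a toy waypoint set; the paper's double and greedy crossovers and local optimization are not reproduced:

```python
import random

def tour_length(path, dist):
    return sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))

def tournament(pop, dist, k=3):
    """Pick the best of k randomly drawn candidates."""
    return min(random.sample(pop, k), key=lambda p: tour_length(p, dist))

def order_crossover(a, b):
    """Copy a random slice of parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = iter(g for g in b if g not in child)
    return [c if c is not None else next(fill) for c in child]

def mutate(p, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

random.seed(0)
n = 12  # waypoints on the structure (toy instance)
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((ax - bx)**2 + (ay - by)**2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]

pop = [random.sample(range(n), n) for _ in range(60)]
for _ in range(200):  # generations
    pop = [mutate(order_crossover(tournament(pop, dist),
                                  tournament(pop, dist)))
           for _ in range(len(pop))]
best = min(pop, key=lambda p: tour_length(p, dist))
print(round(tour_length(best, dist), 3), best)
```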

  14. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis

    PubMed Central

    Cerveri, Pietro; Barros, Ricardo M. L.; Marins, João C. B.; Silvatti, Amanda P.

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are needed. In particular, underwater camera calibration can be an issue affecting reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis, focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846
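
    The quoted accuracy figure is the error between the reconstructed inter-marker distances and the known bar length, which takes only a few lines to compute; a sketch with synthetic stand-in data (an assumed 250 mm bar and ~1 mm reconstruction noise, not the paper's measurements):

```python
import numpy as np

def wand_check(markers_a, markers_b, true_length_mm):
    """Compare reconstructed inter-marker distances of a rigid bar
    against its known length; markers_*: (N, 3) positions in mm."""
    d = np.linalg.norm(markers_a - markers_b, axis=1)
    return np.mean(np.abs(d - true_length_mm)), d.std()

# Synthetic stand-in: a 250 mm bar moved through the volume,
# with ~1 mm Gaussian noise on each reconstructed marker.
rng = np.random.default_rng(0)
a = rng.uniform(0, 2000, (500, 3))
b = a + np.array([250.0, 0.0, 0.0]) + rng.normal(0, 1, (500, 3))
print(wand_check(a, b, 250.0))
```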

  15. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    PubMed

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for sport gesture study and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration, and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding, since both land and underwater cameras are needed. In particular, underwater camera calibration can be an issue affecting reconstruction accuracy. The aim of this paper is to evaluate the feasibility of ASC for 3D underwater analysis, focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  16. The ITER Radial Neutron Camera Detection System

    SciTech Connect

    Marocco, D.; Belli, F.; Esposito, B.; Petrizzi, L.; Riva, M.; Bonheure, G.; Kaschuck, Y.

    2008-03-12

    A multichannel neutron detection system (Radial Neutron Camera, RNC) will be installed on ITER equatorial port plug 1 for measurements of the total neutron source strength, neutron emissivity/ion temperature profiles, and the n_t/n_d ratio [1]. The system is composed of two fan-shaped collimating structures: an ex-vessel structure, looking at the plasma core, containing three sets of 12 collimators (each set lying on a different toroidal plane), and an in-vessel structure, containing 9 collimators, for plasma edge coverage. The RNC detection system will work in a harsh environment (neutron flux up to 10⁸-10⁹ n/cm²·s, magnetic field >0.5 T for in-vessel detectors), should provide both counting and spectrometric information, and should be flexible enough to cover the high neutron-flux dynamic range expected during the different ITER operation phases. ENEA has been involved in several activities related to RNC design and optimization [2,3]. In the present paper the up-to-date design and the neutron emissivity reconstruction capabilities of the RNC are described. Different options for detectors suitable for spectrometry and counting (e.g., scintillators and diamonds) are discussed, focusing on the implications for overall RNC performance. The increase in RNC capabilities offered by new digital data acquisition systems is also addressed.

  17. 4π FOV compact Compton camera for nuclear material investigations

    NASA Astrophysics Data System (ADS)

    Lee, Wonho; Lee, Taewoong

    2011-10-01

    A compact Compton camera with a 4π field of view (FOV) was manufactured using design parameters optimized with an effective choice of gamma-ray interaction order determined from a Monte Carlo simulation. The camera consists of six CsI(Na) planar scintillators with a pixelized structure, coupled to position-sensitive photomultiplier tubes (H8500) whose multiple anodes are connected to custom-made circuits. The size of the scintillator and of each pixel was 4.4×4.4×0.5 cm and 0.2×0.2×0.5 cm, respectively. The total size of each detection module was only 5×5×6 cm, and the distance between the detector modules was approximately 10 cm to maximize the camera performance, as calculated by the simulation. The camera is therefore quite portable for examining nuclear materials in areas such as harbors or nuclear power plants. The non-uniformity of the multi-anode PMTs was corrected using a novel readout circuit. Amplitude information of the signals from the electronics attached to the scintillator-coupled multi-anode PMTs was collected using a data acquisition board (cDAQ-9178), and the timing information was sent to an FPGA (SPARTAN3E). The FPGA picked the rising edges of the timing signals and compared the edges of the signals from the six detection modules to select only coincident signals from Compton pairs. The output of the FPGA triggered the DAQ board to send the effective Compton events to a computer. The Compton image was reconstructed, and the performance of the 4π FOV compact camera was examined.

  18. Automatic Camera Orientation and Structure Recovery with Samantha

    NASA Astrophysics Data System (ADS)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

    SAMANTHA is software capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom-up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, by producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.

  19. A higher-speed compressive sensing camera through multi-diode design

    NASA Astrophysics Data System (ADS)

    Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore

    2013-05-01

    Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
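
    Each photodiode in such a system implements the single-pixel measurement model y = Φx, one inner product per DMD pattern, with the image recovered by sparse optimization. A minimal sketch with ±1 patterns and an ISTA solver follows; the sizes, the canonical-basis sparsity assumption, and the solver choice are illustrative, not InView's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 100, 8  # pixels, measurements, nonzeros (toy sizes)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # DMD-like patterns
y = Phi @ x  # sequential single-pixel measurements

# ISTA for min ||y - Phi x||^2 / 2 + lam * ||x||_1
lam = 0.01
L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
xh = np.zeros(n)
for _ in range(500):
    g = xh + Phi.T @ (y - Phi @ xh) / L   # gradient step
    xh = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
print("relative recovery error:", np.linalg.norm(xh - x) / np.linalg.norm(x))
```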

  20. An Unplanned Path

    ERIC Educational Resources Information Center

    McGarvey, Lynn M.; Sterenberg, Gladys Y.; Long, Julie S.

    2013-01-01

    The authors elucidate what they saw as three important challenges to overcome along the path to becoming elementary school mathematics teacher leaders: marginal interest in math, low self-confidence, and teaching in isolation. To illustrate how these challenges were mitigated, they focus on the stories of two elementary school teachers--Laura and…

  1. Gas path seal

    NASA Technical Reports Server (NTRS)

    Bill, R. C.; Johnson, R. D. (Inventor)

    1979-01-01

    A gas path seal suitable for use with a turbine engine or compressor is described. A shroud, wearable or abradable by the abrasion of the rotor blades of the turbine or compressor, shrouds the rotor blades. A compliant backing surrounds the shroud. The backing is a yieldingly deformable porous material covered with a thin ductile layer. A mounting fixture surrounds the backing.

  2. Noise evaluation of Compton camera imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

    Compton cameras emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high-intensity beams can produce particle accumulation in the camera, which leads to an increase in random coincidences, i.e., events that gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. The complete chain of
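
    For the double-event case described above, the Compton cone follows from the two interaction positions and the deposited energies through standard Compton kinematics; a small sketch (the positions and energies below are made-up inputs):

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone(pos1, pos2, e1, e0):
    """Event cone for backprojection: apex at the first interaction
    pos1, axis pointing from the absorption point pos2 through pos1,
    half-angle from the Compton relation. e1 = energy deposited at
    the first interaction, e0 = incident photon energy (keV)."""
    axis = (pos1 - pos2) / np.linalg.norm(pos1 - pos2)
    e_scattered = e0 - e1
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e0)
    return axis, np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A 662 keV photon depositing 200 keV in the scatterer:
axis, half_angle = compton_cone(np.array([0.0, 0.0, 0.0]),
                                np.array([0.0, 0.0, -10.0]), 200.0, 662.0)
print(axis, f"{half_angle:.1f} deg")
```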

  3. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  4. The wide field/planetary camera

    NASA Technical Reports Server (NTRS)

    Westphal, J. A.; Baum, W. A.; Code, A. D.; Currie, D. G.; Danielson, G. E.; Gunn, J. E.; Kelsall, T. F.; Kristian, J. A.; Lynds, C. R.; Seidelmann, P. K.

    1982-01-01

    A wide suite of potential astronomical and solar system scientific studies using the wide field/planetary camera on the Space Telescope is described. The expected performance of the camera as it approaches final assembly and testing is also detailed.

  5. Compact infrared cryogenic wafer-level camera: design and experimental validation.

    PubMed

    de la Barrière, Florence; Druart, Guillaume; Guérineau, Nicolas; Lasfargues, Gilles; Fendler, Manuel; Lhermet, Nicolas; Taboury, Jean

    2012-03-10

    We present a compact infrared cryogenic multichannel camera with a wide field of view of 120°. By merging the optics with the detector, the concept is compatible with both cryogenic constraints and wafer-level fabrication. The design strategy of such a camera is described, as well as its fabrication and integration process. Its characterization has been carried out in terms of the modulation transfer function and the noise equivalent temperature difference (NETD). The optical system is diffraction-limited. By cooling the optics, we achieve a very low NETD of 15 mK compared with traditional infrared cameras. A postprocessing algorithm that reconstructs a well-sampled image from the set of undersampled raw subimages produced by the camera is proposed and validated on experimental images.
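
    A common way to rebuild a well-sampled image from shifted, undersampled sub-images is to interlace them on a finer grid; the sketch below assumes known integer offsets on the upsampled grid and is a simplified stand-in for the paper's actual algorithm:

```python
import numpy as np

def shift_and_add(subimages, shifts, factor):
    """Interlace undersampled sub-images onto a grid `factor` times
    finer, assuming each sub-image is offset by a known (dy, dx)
    fraction of the fine grid. Cells hit by no sample stay zero."""
    h, w = subimages[0].shape
    out = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(out)
    for img, (dy, dx) in zip(subimages, shifts):
        out[dy::factor, dx::factor] += img
        hits[dy::factor, dx::factor] += 1
    return out / np.maximum(hits, 1)

# Four 2x-undersampled sub-images with half-pixel phase offsets:
subs = [np.ones((8, 8)) * k for k in range(4)]
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(shift_and_add(subs, shifts, 2).shape)  # (16, 16)
```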

  6. Imaging multi-energy gamma-ray fields with a Compton scatter camera

    NASA Astrophysics Data System (ADS)

    Martin, J. B.; Dogan, N.; Gormley, J. E.; Knoll, G. F.; O'Donnell, M.; Wehe, D. K.

    1994-08-01

    Multi-energy gamma-ray fields have been imaged with a ring Compton scatter camera (RCC). The RCC is intended for industrial applications, where there is a need to image multiple gamma-ray lines from spatially extended sources. To our knowledge, the ability of a Compton scatter camera to perform this task had not previously been demonstrated. Gamma rays with different incident energies are distinguished based on the total energy deposited in the camera elements. For multiple gamma-ray lines, separate images are generated for each line energy. Random coincidences and other interfering interactions have been investigated. Camera response has been characterized for energies from 0.511 to 2.75 MeV. Different gamma-ray lines from extended sources have been measured and images reconstructed using both direct and iterative algorithms.

  7. 3D point cloud registration based on the assistant camera and Harris-SIFT

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Yu, HongYang

    2016-07-01

    3D (three-dimensional) point cloud registration is a hot topic in the field of 3D reconstruction, but most registration methods are neither real-time nor effective. This paper proposes a point cloud registration method for 3D reconstruction based on Harris-SIFT and an assistant camera. The assistant camera is used to pinpoint the mobile 3D reconstruction device. Feature points in the images are detected using the Harris operator, the main orientation of each feature point is calculated, and finally the feature point descriptors are generated after rotating the coordinates of the descriptors relative to the feature points' main orientations. Experimental results demonstrate the effectiveness of the proposed method.
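
    One plausible reading of "Harris-SIFT" is Harris corners described and matched with SIFT descriptors; a hedged OpenCV sketch of that combination follows (file names are placeholders, and the paper's exact pipeline, including its custom orientation step, may differ):

```python
import cv2
import numpy as np

def harris_keypoints(gray, thresh=0.01):
    """Threshold the Harris corner response into cv2.KeyPoint objects."""
    r = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(r > thresh * r.max())
    return [cv2.KeyPoint(float(x), float(y), 3) for x, y in zip(xs, ys)]

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.compute(img1, harris_keypoints(img1))
kp2, des2 = sift.compute(img2, harris_keypoints(img2))

# Ratio-test matching; the matched pairs would then feed a rigid
# transform estimate aligning the two point clouds.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good), "tentative correspondences")
```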

  8. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
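
    A first feel for how the set-up drives 3d accuracy comes from error propagation of depth-from-disparity in a rectified camera pair; a small sketch with assumed values (the focal length, baseline, and disparity noise are illustrative, not from the paper):

```python
def depth(f_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, disparity_px, sigma_d_px):
    """First-order propagation: dz = (z^2 / (f * B)) * sigma_d."""
    return f_px * baseline_m / disparity_px**2 * sigma_d_px

f, B, sigma_d = 1200.0, 0.25, 0.2  # assumed set-up and matching noise
for d in (5.0, 20.0, 80.0):
    z = depth(f, B, d)
    print(f"d={d:5.1f} px  z={z:6.2f} m  dz={depth_error(f, B, d, sigma_d):.3f} m")
```

    The quadratic growth of dz with z is what makes camera baseline and calibration precision so critical for distant individuals in the group.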

  9. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.

    1990-09-01

    This monthly report summarizes the technical progress and project status for the Hanford Environmental Dose Reconstruction (HEDR) Project being conducted at the Pacific Northwest Laboratory (PNL) under the direction of a Technical Steering Panel (TSP). The TSP is composed of experts in numerous technical fields related to this project and represents the interests of the public. The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms, environmental transport, environmental monitoring data, demographics, agriculture, food habits, environmental pathways and dose estimates. 3 figs.

  10. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  11. Flash photography by digital still camera

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yoshitaka

    2001-04-01

    Recently, the number of commercially produced digital still cameras has increased rapidly. However, the detailed performance of digital still cameras had not been evaluated. One purpose of this paper is to devise a method for evaluating the performance of a new camera. Another purpose is to show that high-quality scientific photographs, including photographs of high-speed phenomena, can be taken with a camera on the market.

  12. Electronographic cameras for space astronomy.

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We are also developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  13. GRAVITY acquisition camera: characterization results

    NASA Astrophysics Data System (ADS)

    Anugu, Narsireddy; Garcia, Paulo; Amorim, Antonio; Wiezorrek, Erich; Wieprecht, Ekkehard; Eisenhauer, Frank; Ott, Thomas; Pfuhl, Oliver; Gordo, Paulo; Perrin, Guy; Brandner, Wolfgang; Straubmeier, Christian; Perraut, Karine

    2016-08-01

    The GRAVITY acquisition camera implements four optical functions to track multiple beams of the Very Large Telescope Interferometer (VLTI): (a) a pupil tracker: a 2×2 lenslet images four pupil reference lasers mounted on the spiders of the telescope secondary mirror; (b) a field tracker: images the science object; (c) a pupil imager: reimages the telescope pupil; (d) an aberration tracker: images a Shack-Hartmann. The beam stabilization parameters are estimated from the acquisition camera detector image every 0.7 s with dedicated data reduction software. The measured parameters are used for: (a) alignment of GRAVITY with the VLTI; (b) active pupil and field stabilization; (c) defocus correction and engineering purposes. The instrument is now successfully operational on-sky in closed loop. The relevant data reduction and on-sky characterization results are reported.

  14. Combustion pinhole-camera system

    DOEpatents

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described that utilizes a sealed optical-purge assembly to provide optical access into a coal combustor or other energy conversion reactor. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  15. A 10-μm infrared camera.

    PubMed

    Arens, J F; Jernigan, J G; Peck, M C; Dobson, C A; Kilk, E; Lacy, J; Gaalema, S

    1987-09-15

    An IR camera has been built at the University of California at Berkeley for astronomical observations. The camera has been used primarily for high angular resolution imaging at mid-IR wavelengths. It has been tested at the University of Arizona 61- and 90-in. telescopes near Tucson and the NASA Infrared Telescope Facility on Mauna Kea, HI. In the observations the system has been used as an imager with interference coated and Fabry-Perot filters. These measurements have demonstrated a sensitivity consistent with photon shot noise, showing that the system is limited by the radiation from the telescope and atmosphere. Measurements of read noise, crosstalk, and hysteresis have been made in our laboratory.

  16. The OCA CCD Camera Controller

    DTIC Science & Technology

    1996-01-01

    Final report, December 1996, for EOARD contract #SPC-93-4007: the OCA CCD camera controller. The report covers the requirements analysis, a description of the physical implementation of a multi-CCD camera, and appendices containing the controller schematics (Appendix 1) and data sheets of the major components (Appendix 2).

  17. The PS1 Gigapixel Camera

    NASA Astrophysics Data System (ADS)

    Tonry, John L.; Isani, S.; Onaka, P.

    2007-12-01

    The world's largest and most advanced digital camera has been installed on the Pan-STARRS-1 (PS1) telescope on Haleakala, Maui. Built at the University of Hawaii at Manoa's Institute for Astronomy (IfA) in Honolulu, the gigapixel camera will capture images that will be used to scan the skies for killer asteroids, and to create the most comprehensive catalog of stars and galaxies ever produced. The CCD sensors at the heart of the camera were developed in collaboration with Lincoln Laboratory of the Massachusetts Institute of Technology. The image area, which is about 40 cm across, contains 60 identical silicon chips, each of which contains 64 independent imaging circuits. Each of these imaging circuits contains approximately 600 × 600 pixels, for a total of about 1.4 gigapixels in the focal plane. The CCDs themselves employ the innovative technology called "orthogonal transfer." Splitting the image area into about 4,000 separate regions in this way has three advantages: data can be recorded more quickly, saturation of the image by a very bright star is confined to a small region, and any defects in the chips affect only a small part of the image area. The CCD camera is controlled by an ultrafast 480-channel control system developed at the IfA. The individual CCD cells are grouped in 8 × 8 arrays on a single silicon chip called an orthogonal transfer array (OTA), which measures about 5 cm square. There are a total of 60 OTAs in the focal plane of each telescope.

  18. The Uses of a Polarimetric Camera

    DTIC Science & Technology

    2008-09-01

    [List-of-figures excerpt: Figure 18, image of angle of polarization (from Bossa Nova Tech, 2007); Figure 19, the Salsa camera (from Bossa Nova Tech); Figure 21, diagram of the inner workings of the SALSA camera (from Bossa Nova Tech, 2007); Figure 22, Salsa camera with computer setup looking south toward California Pacific Highway 1.]

  19. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  20. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  1. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  2. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 1 2014-01-01 2014-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  3. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  4. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  5. Blind camera fingerprinting and image clustering.

    PubMed

    Bloy, Greg J

    2008-03-01

    Previous studies have shown how to "fingerprint" a digital camera given a set of images known to come from the camera. A clustering technique is proposed to construct such fingerprints from a mixed set of images, enabling identification of each image's source camera without any prior knowledge of source.

  6. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  7. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drops. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration (on arbitrary scenes). In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.

  8. Coaxial fundus camera for opthalmology

    NASA Astrophysics Data System (ADS)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which needs to meet low-light illumination of the human retina, high resolution at the retina, and a reflection-free image. These constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a fully coaxial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with autofocus and zoom built in, added to a 175 mm focal-length doublet corrected for infinity, making the system easy to operate and very compact.

  9. Automatic tracking sensor camera system

    NASA Astrophysics Data System (ADS)

    Tsuda, Takao; Kato, Daiichiro; Ishikawa, Akio; Inoue, Seiki

    2001-04-01

    We are developing a sensor camera system for automatically tracking and determining the positions of subjects moving in three dimensions. The system is intended to operate even within areas as large as soccer fields. The system measures the 3D coordinates of the object while driving the pan and tilt movements of the camera heads and the degree of zoom of the lenses. Its principal feature is that it automatically zooms in as the object moves farther away and zooms out as the object moves closer, keeping the imaged size of the object fixed within the frame. This makes stable detection by image processing possible. We are planning to use the system to detect the position of a soccer ball during a soccer game. In this paper, we describe the configuration of the automatic tracking sensor camera system under development. We then give an analysis of the movements of the ball within images of games, the results of experiments on the image processing method used to detect the ball, and the results of other experiments to verify the accuracy of an experimental system. These results show that the system is sufficiently accurate in terms of obtaining positions in three dimensions.

  10. A calibration technology for multi-camera system with various focal lengths

    NASA Astrophysics Data System (ADS)

    Yang, Ruihua; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2016-01-01

    Calibration is the basis of three-dimensional (3D) reconstruction in machine vision. Nowadays, the most widely used calibration approach in computer vision is the technique for binocular stereo measurement. However, binocular stereo vision has a limited field of view, which makes it difficult to measure large-scale mechanical components synchronously. Thus, enlarging the field of view is urgently needed for large-scale measurement. With the application of multi-camera systems, calibration for cameras with different focal lengths is required. In this paper, a method aiming at the calibration problems of multi-camera systems with different focal lengths is proposed. An imaging model for a multi-camera system with various focal lengths is analyzed. The Harris corner detector is applied to determine the relationship between a single camera and the checkerboard. Finally, the external parameters of the different cameras can be obtained by linking them through the checkerboard. The calibration results indicate that the calculation method used in this work can calibrate multi-camera systems with various focal lengths.
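
    A minimal sketch of the link-via-checkerboard idea using OpenCV (an assumption; the paper's own implementation is not public): each camera's pose relative to a common checkerboard is estimated with solvePnP, and the relative extrinsics between any two cameras follow by chaining the two poses.

        import numpy as np
        import cv2

        def board_pose(gray, K, dist, pattern=(9, 6), square=0.025):
            """Pose of a checkerboard in one camera, via corner detection + PnP."""
            ok, corners = cv2.findChessboardCorners(gray, pattern)
            assert ok, "checkerboard not found"
            objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
            ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
            R, _ = cv2.Rodrigues(rvec)
            return R, tvec

        def relative_extrinsics(Ri, ti, Rj, tj):
            """Transform mapping camera-i coordinates into camera-j coordinates."""
            R_ji = Rj @ Ri.T            # Xj = R_ji @ Xi + t_ji
            t_ji = tj - R_ji @ ti
            return R_ji, t_ji

    With the board visible to both cameras at once, the two board poses cancel out of the chain, which is what allows cameras with very different focal lengths to be linked without a shared stereo calibration chart.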

  11. Recursive least squares approach to calculate motion parameters for a moving camera

    NASA Astrophysics Data System (ADS)

    Chang, Samuel H.; Fuller, Joseph; Farsaie, Ali; Elkins, Les

    2003-10-01

    The increase in quality and the decrease in price of digital camera equipment have led to growing interest in reconstructing 3-dimensional objects from sequences of 2-dimensional images. The accuracy of the models obtained depends on two sets of parameter estimates. The first is the set of lens parameters: focal length, principal point, and distortion parameters. The second is the set of motion parameters that allows the comparison of a moving camera's desired location to a theoretical location. In this paper, we address the latter problem, i.e. the estimation of the set of 3-D motion parameters from data obtained with a moving camera. We propose a method that uses Recursive Least Squares for camera motion parameter estimation with observation noise. We accomplish this by calculating hidden information through camera projection and minimizing the estimation error. We then show how a filter based on the motion parameter estimates may be designed to correct for the errors in the camera motion. The validity of the approach is illustrated by the presentation of experimental results obtained using the methods described in the paper.
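
    The record does not spell out its estimator, but a standard recursive least squares update of a parameter vector from noisy observations looks like the following minimal numpy sketch (variable names are illustrative only):

        import numpy as np

        class RecursiveLeastSquares:
            """Standard RLS: theta estimates y ≈ phi @ theta from noisy samples."""
            def __init__(self, n_params, forgetting=1.0, p0=1e3):
                self.theta = np.zeros(n_params)
                self.P = np.eye(n_params) * p0
                self.lam = forgetting

            def update(self, phi, y):
                phi = np.asarray(phi, float)
                Pphi = self.P @ phi
                k = Pphi / (self.lam + phi @ Pphi)         # gain vector
                self.theta += k * (y - phi @ self.theta)   # correct by innovation
                self.P = (self.P - np.outer(k, Pphi)) / self.lam
                return self.theta

    A forgetting factor below 1.0 lets the estimate track slowly varying motion parameters rather than averaging over the entire sequence.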

  12. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated. They include a photo gate sensor and a photo diode sensor. The system includes the camera assembly, a driver interface assembly, a frame grabber board with an integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320 × 240 with 16-micron pixel pitch was developed for compatibility with 0.3 inch CCTV optics. With 1.2-micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing performance superior to CMOS and cost and power consumption lower than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  13. New Reconstruction Accuracy Metric for 3D PIV

    NASA Astrophysics Data System (ADS)

    Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    Reconstruction for 3D PIV typically relies on recombining images captured from different viewpoints via multiple cameras/apertures. Ideally, the quality of reconstruction dictates the accuracy of the derived velocity field. A reconstruction quality parameter Q is commonly used as a measure of the accuracy of reconstruction algorithms. By definition, a high Q value requires intensity peak levels and shapes in the reconstructed and reference volumes to be matched. We show that accurate velocity fields rely only on the peak locations in the volumes and not on intensity peak levels and shapes. In synthetic aperture (SA) PIV reconstructions, the intensity peak shapes and heights vary with the number of cameras and with spatial/temporal particle intensity variation, respectively. This lowers Q but not the accuracy of the derived velocity field. We introduce a new velocity vector correlation factor Qv as a metric to assess the accuracy of 3D PIV techniques, which provides a better indication of algorithm accuracy. For SAPIV, the number of cameras required for a high Qv is lower than that for a high Q. We discuss Qv in the context of 3D PIV and also present a preliminary comparison of the performance of TomoPIV and SAPIV based on Qv.
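
    One plausible formulation of such a velocity vector correlation factor (an assumption; the authors' exact definition may differ) is a normalized inner product between the reconstructed and reference velocity fields:

        import numpy as np

        def qv(u_rec, u_ref):
            """Velocity-field correlation: 1.0 for identical fields (illustrative).
            u_rec, u_ref: arrays of shape (..., 3), one vector per grid point."""
            num = np.sum(u_rec * u_ref)
            den = np.sqrt(np.sum(u_rec ** 2) * np.sum(u_ref ** 2))
            return num / den

    Because this quantity depends only on the derived vectors, it is insensitive to the intensity peak heights and shapes that depress the conventional Q.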

  14. Simulation and control of narcissus phenomenon using nonsequential ray tracing. I. Staring camera in 3-5 microm waveband.

    PubMed

    Akram, M Nadeem

    2010-02-20

    A nonsequential ray tracing technique is used to simulate the narcissus phenomenon in infrared (IR) imaging cameras having cooled detectors. Imaging cameras based on two-dimensional focal plane array detectors are simulated. In a companion article, line-scan imaging cameras based on one-dimensional linear detector arrays are simulated. Diffractive phase surfaces commonly used in modern IR cameras are modeled including multiple diffraction orders in the narcissus retroreflection path to correctly simulate the stray light return signal. Practical optical design examples along with their performance curves are given to elucidate the modeling technique. Optical methods to minimize the narcissus return signal are thoroughly explained, and modeling results are presented. It is shown that the nonsequential ray tracing technique is an effective method to accurately calculate the narcissus return signal in complex IR cameras having diffractive surfaces.

  15. Photogrammetric Reconstruction with Bayesian Information

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Vettore, A.

    2016-06-01

    Nowadays photogrammetry and laser scanning methods are the most widespread surveying techniques. Laser scanning methods usually allow more accurate results than photogrammetry, but their use has some issues, e.g. related to the high cost of the instrumentation and the typical need for highly qualified personnel to acquire experimental data in the field. In contrast, photogrammetric reconstruction can be achieved by means of low-cost devices and by persons without specific training. Furthermore, the recent diffusion of smart devices (e.g. smartphones) embedded with imaging and positioning sensors (i.e. a standard camera, GNSS receiver, and inertial measurement unit) is opening the possibility of integrating more information into the photogrammetric reconstruction procedure, in order to increase its computational efficiency, robustness, and accuracy. In accordance with the above observations, this paper examines and validates new possibilities for the integration of information provided by the inertial measurement unit (IMU) into the photogrammetric reconstruction procedure and, more specifically, into the procedure for solving the feature matching and bundle adjustment problems.
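
    As an illustration of one way such IMU information can enter the optimization (a sketch under assumed conventions, not the authors' code), a camera pose refinement can carry a soft attitude prior from the IMU alongside the reprojection residuals:

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def residuals(params, X, uv, K, rvec_imu, w_imu):
            rvec, t = params[:3], params[3:]
            Xc = Rotation.from_rotvec(rvec).apply(X) + t    # world -> camera
            proj = Xc @ K.T
            uv_hat = proj[:, :2] / proj[:, 2:3]             # pinhole projection
            reproj = (uv_hat - uv).ravel()
            prior = w_imu * (rvec - rvec_imu)               # soft IMU attitude prior
            return np.concatenate([reproj, prior])

        # usage: refine an initial pose x0 = [rvec, t] against n 3D-2D matches
        # sol = least_squares(residuals, x0, args=(X, uv, K, rvec_imu, 2.0))

    The weight w_imu trades off trust in the IMU against the image measurements; in a full bundle adjustment the same prior term would simply be appended to the stacked residual vector.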

  16. Spectrometry with consumer-quality CMOS cameras.

    PubMed

    Scheeline, Alexander

    2015-01-01

    Many modern spectrometric instruments use diode arrays, charge-coupled arrays, or CMOS cameras for detection and measurement. As portable or point-of-use instruments are desirable, one would expect the cameras in cellular telephones and tablet computers to be the basis of numerous instruments. However, no mass market for such devices has yet developed. The difficulties in using megapixel CMOS cameras for scientific measurements are discussed, and promising avenues for instrument development reviewed. Inexpensive alternatives to use of the built-in camera are also mentioned, as the long-term question is whether it is better to overcome the constraints of CMOS cameras or to bypass them.

  17. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to see through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  18. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to 'see' through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  19. Entanglement by Path Identity.

    PubMed

    Krenn, Mario; Hochrainer, Armin; Lahiri, Mayukh; Zeilinger, Anton

    2017-02-24

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional systems. The two ingredients are (i) superposition of photon pairs with different origins and (ii) aligning photons such that their paths are identical. We explain the experimentally feasible creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces-starting only from nonentangled photon pairs. For two photons, arbitrary high-dimensional entanglement can be created. The idea of generating entanglement by path identity could also apply to quantum entities other than photons. We discovered the technique by analyzing the output of a computer algorithm. This shows that computer designed quantum experiments can be inspirations for new techniques.

  20. Entanglement by Path Identity

    NASA Astrophysics Data System (ADS)

    Krenn, Mario; Hochrainer, Armin; Lahiri, Mayukh; Zeilinger, Anton

    2017-02-01

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional systems. The two ingredients are (i) superposition of photon pairs with different origins and (ii) aligning photons such that their paths are identical. We explain the experimentally feasible creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces—starting only from nonentangled photon pairs. For two photons, arbitrary high-dimensional entanglement can be created. The idea of generating entanglement by path identity could also apply to quantum entities other than photons. We discovered the technique by analyzing the output of a computer algorithm. This shows that computer designed quantum experiments can be inspirations for new techniques.

  1. Nonadiabatic transition path sampling

    NASA Astrophysics Data System (ADS)

    Sherman, M. C.; Corcelli, S. A.

    2016-07-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  2. Detection of Critical Camera Configurations for Structure from Motion

    NASA Astrophysics Data System (ADS)

    Michelini, M.; Mayer, H.

    2014-03-01

    This paper deals with the detection of critical, i.e., poor or degenerate, camera configurations with a poor or undefined intersection geometry between views. This is the basis for a calibrated Structure from Motion (SfM) approach employing image triplets for complex, unordered image sets, e.g., obtained by combining terrestrial images and images from small Unmanned Aerial Systems (UAS). Poor intersection geometry results from a small ratio between the baseline length and the depth of the scene. If there is no baseline between views, the intersection geometry becomes undefined. Our approach can detect image pairs without or with a very weak baseline (motion degeneracy). For the detection we have developed various metrics and evaluated them by means of extensive experiments with about 1500 image pairs. The metrics are based on properties of the reconstructed 3D points, such as the roundness of the error ellipsoid. The detection of weak baselines is formulated as a classification problem using the metrics as features. Machine learning techniques are applied to improve the classification. By taking into account the critical camera configurations during the iterative composition of the image set, a complete, metric 3D reconstruction of the whole scene can be achieved even in such cases. We sketch our approach for the orientation of unordered image sets and finally demonstrate that the approach is able to produce very accurate and reliable orientations.
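
    For instance, the roundness of a triangulated point's error ellipsoid can be computed from the eigenvalues of its 3 × 3 covariance (a minimal sketch; the paper evaluates several such metrics and its exact normalization is not reproduced here):

        import numpy as np

        def roundness(cov):
            """Ratio of shortest to longest ellipsoid axis; near 0 for weak baselines."""
            w = np.linalg.eigvalsh(cov)      # eigenvalues in ascending order
            return np.sqrt(w[0] / w[-1])

    With a vanishing baseline the ellipsoid degenerates into a needle along the viewing ray, so the ratio collapses toward zero, which is what makes it usable as a classification feature.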

  3. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially, and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.

  4. PathMaster

    PubMed Central

    Mattie, Mark E.; Staib, Lawrence; Stratmann, Eric; Tagare, Hemant D.; Duncan, James; Miller, Perry L.

    2000-01-01

    Objective: Currently, when cytopathology images are archived, they are typically stored with a limited text-based description of their content. Such a description inherently fails to quantify the properties of an image and refers to an extremely small fraction of its information content. This paper describes a method for automatically indexing images of individual cells and their associated diagnoses by computationally derived cell descriptors. This methodology may serve to better index data contained in digital image databases, thereby enabling cytologists and pathologists to cross-reference cells of unknown etiology or nature. Design: The indexing method, implemented in a program called PathMaster, uses a series of computer-based feature extraction routines. Descriptors of individual cell characteristics generated by these routines are employed as indexes of cell morphology, texture, color, and spatial orientation. Measurements: The indexing fidelity of the program was tested after populating its database with images of 152 lymphocytes/lymphoma cells captured from lymph node touch preparations stained with hematoxylin and eosin. Images of “unknown” lymphoid cells, previously unprocessed, were then submitted for feature extraction and diagnostic cross-referencing analysis. Results: PathMaster listed the correct diagnosis as its first differential in 94 percent of recognition trials. In the remaining 6 percent of trials, PathMaster listed the correct diagnosis within the first three “differentials.” Conclusion: PathMaster is a pilot cell image indexing program/search engine that creates an indexed reference of images. Use of such a reference may provide assistance in the diagnostic/prognostic process by furnishing a prioritized list of possible identifications for a cell of uncertain etiology. PMID:10887168

  5. PATHS groundwater hydrologic model

    SciTech Connect

    Nelson, R.W.; Schur, J.A.

    1980-04-01

    A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model and the preliminary evaluation capability prepared for WISAP, including the enhancements made as a result of the authors' experience using the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS, written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, as well as program listings and a test case listing. Appendix D is a definition of terms.

  6. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on these, discuss the applicability of cameras for monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.

  7. Characterization of the Series 1000 Camera System

    SciTech Connect

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact, network-addressable, scientific-grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and a power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electrons of read noise at a 1 MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs, and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and a performance characterization are reported.

  8. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched for and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method on raw data from Lytro. The experiments show that our method is more automatic than previously published methods.
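
    The center-detection step can be illustrated with basic digital morphology (a sketch with an assumed global threshold; the paper's actual recognition pipeline is more elaborate): bright microlens spots on the white image are segmented and their intensity-weighted centroids extracted.

        import numpy as np
        from scipy import ndimage

        def microlens_centers(white_img, rel_thresh=0.5):
            """Segment bright microlens images and return their centroids (row, col)."""
            mask = white_img > rel_thresh * white_img.max()
            labels, n = ndimage.label(mask)                 # connected components
            centers = ndimage.center_of_mass(white_img, labels, range(1, n + 1))
            return np.asarray(centers)

    Sorting these centroids by their mutual offsets then recovers the lattice arrangement, which is the "relative position relationships" step described above.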

  9. Research on evaluation method of CMOS camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoqiang; Han, Weiqiang; Cui, Lanfang

    2014-09-01

    In some professional imaging applications, we need to test key parameters of a CMOS camera and evaluate the performance of the device. To address this requirement, this paper proposes a complete test method to evaluate CMOS cameras. Considering that CMOS cameras exhibit significant fixed-pattern noise, the method uses a per-pixel 'photon transfer curve' approach to measure the gain and the read noise of the camera. The advantage of this method is that it effectively removes the error introduced by response nonlinearity. The cause of the photoelectric response nonlinearity of CMOS cameras is then analyzed theoretically, and the calculation formula for the response nonlinearity is derived. Finally, we use the proposed test method to test a CMOS camera of 2560 × 2048 pixels and analyze the validity and feasibility of the method.
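
    A minimal per-pixel photon transfer curve sketch (assuming flat-field frame stacks at several illumination levels and a shot-noise-limited sensor; unit conventions for gain vary between references):

        import numpy as np

        def ptc_per_pixel(stacks):
            """stacks: list of (n_frames, H, W) arrays, one per illumination level.
            Fits var = mean/K + sigma_r^2 per pixel; returns gain K (e-/DN)
            and read noise (DN rms)."""
            means = np.stack([s.mean(axis=0) for s in stacks])           # (L, H, W)
            varis = np.stack([s.var(axis=0, ddof=1) for s in stacks])
            mx, vy = means.mean(axis=0), varis.mean(axis=0)
            slope = ((means - mx) * (varis - vy)).sum(0) / ((means - mx) ** 2).sum(0)
            intercept = vy - slope * mx
            gain = 1.0 / slope
            read_noise = np.sqrt(np.clip(intercept, 0, None))
            return gain, read_noise

    Fitting each pixel separately is what sidesteps fixed-pattern noise: temporal statistics are taken along the frame axis, so pixel-to-pixel response differences never enter the fit.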

  10. AWiFS camera for Resourcesat

    NASA Astrophysics Data System (ADS)

    Dave, Himanshu; Dewan, Chirag; Paul, Sandip; Sarkar, S. S.; Pandya, Himanshu; Joshi, S. R.; Mishra, Ashish; Detroja, Manoj

    2006-12-01

    Remote sensors have been developed and used extensively worldwide on aircraft and space platforms. India has developed and launched many sensors into space to survey natural resources. The AWiFS is one such camera, launched onboard the Resourcesat-1 satellite by ISRO in 2003. It is a medium resolution camera with a 5-day revisit, designed for studies related to forestry, vegetation, soil, snow, and disaster warning. The camera provides 56 m (nadir) resolution from an 817 km altitude in three visible bands and one SWIR band. This paper deals with the configuration features of the AWiFS camera of Resourcesat-1, its onboard performance, and the highlights of the camera being developed for Resourcesat-2. The AWiFS is realized with two identical cameras, viz. AWiFS-A and AWiFS-B, which together cover a large field of view of 48°. Each camera consists of independent collecting optics and associated 6000-element detectors and electronics catering to 4 bands. The visible bands use linear silicon CCDs with 10 µm × 7 µm elements, while the SWIR band uses 13 µm staggered InGaAs linear active pixels. The camera electronics are custom designed for each detector based on detector and system requirements. The camera covers the total dynamic range up to 100% albedo with a single gain setting and 12-bit digitization, of which the 10 MSBs are transmitted. The camera saturation radiance of each band can also be selected by telecommand. The camera provides a very high SNR of about 700 near saturation. The camera components are housed in specially designed Invar structures. The AWiFS camera onboard Resourcesat-1 is providing excellent imagery, and the data are routinely used worldwide. The AWiFS for Resourcesat-2 is being developed with the overall performance specifications remaining the same. The camera electronics are miniaturized, with reductions in hardware packages, size, and weight to one third.

  11. Design of a Compton camera for 3D prompt-γ imaging during ion beam therapy

    NASA Astrophysics Data System (ADS)

    Roellinghoff, F.; Richard, M.-H.; Chevallier, M.; Constanzo, J.; Dauvergne, D.; Freud, N.; Henriquet, P.; Le Foulher, F.; Létang, J. M.; Montarou, G.; Ray, C.; Testa, E.; Testa, M.; Walenta, A. H.

    2011-08-01

    We investigate, by means of Geant4 simulations, a real-time method to control the position of the Bragg peak during ion therapy, based on a Compton camera in combination with a beam tagging device (hodoscope) in order to detect the prompt gamma emitted during nuclear fragmentation. The proposed set-up consists of a stack of 2 mm thick silicon strip detectors and a LYSO absorber detector. The γ emission points are reconstructed analytically by intersecting the ion trajectories given by the beam hodoscope and the Compton cones given by the camera. The camera response to a polychromatic point source in air is analyzed with regard to both spatial resolution and detection efficiency. Various geometrical configurations of the camera have been tested. In the proposed configuration, for a typical polychromatic photon point source, the spatial resolution of the camera is about 8.3 mm FWHM and the detection efficiency 2.5×10⁻⁴ (reconstructable photons/emitted photons in 4π). Finally, the clinical applicability of our system is considered and possible starting points for further developments of a prototype are discussed.
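
    The analytic reconstruction step reduces to intersecting each Compton cone with the ion trajectory, i.e., solving a quadratic along the line (a geometry sketch under assumed sign conventions, not the authors' Geant4 code):

        import numpy as np

        def cone_line_intersections(apex, axis, theta, p0, v):
            """Points where the line p(s) = p0 + s*v meets a cone of half-angle
            theta, apex `apex`, axis `axis` (nappe convention setup-dependent)."""
            d = axis / np.linalg.norm(axis)
            c2 = np.cos(theta) ** 2
            w = p0 - apex
            a = np.dot(v, d) ** 2 - c2 * np.dot(v, v)
            b = 2.0 * (np.dot(w, d) * np.dot(v, d) - c2 * np.dot(w, v))
            c = np.dot(w, d) ** 2 - c2 * np.dot(w, w)
            disc = b * b - 4.0 * a * c
            if disc < 0 or abs(a) < 1e-12:
                return []                                    # no robust intersection
            s = np.array([(-b - np.sqrt(disc)) / (2 * a),
                          (-b + np.sqrt(disc)) / (2 * a)])
            pts = p0 + np.outer(s, v)
            return [p for p in pts if np.dot(p - apex, d) >= 0]   # keep one nappe

    Here the apex is the first scatter position, theta is the Compton scattering angle inferred from the deposited energies, and the line is the ion trajectory tagged by the hodoscope; zero, one, or two candidate emission points result.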

  12. Indoor Calibration for Stereoscopic Camera STC, A New Method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2014-10-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of planetary surfaces from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the Mercury surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo couples: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand new concept to minimize mass and volume and to allow push-frame imaging. This model made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized to validate an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on one side, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other side, it allows replicating different viewing angles for the considered targets. Neglecting for the sake of simplicity the curvature of Mercury, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  13. 3D precision measurements of meter sized surfaces using low cost illumination and camera techniques

    NASA Astrophysics Data System (ADS)

    Ekberg, Peter; Daemi, Bita; Mattsson, Lars

    2017-04-01

    Using dedicated stereo camera systems and structured light is a well-known method for measuring the 3D shape of large surfaces. However, the problem is not trivial when high accuracy, in the range of a few tens of microns, is needed. Many error sources need to be handled carefully in order to obtain high-quality results. In this study, we present a measurement method based on low-cost camera and illumination solutions combined with high-precision image analysis and a new approach to camera calibration and 3D reconstruction. The setup consists of two ordinary digital cameras and a Gobo projector as a structured light source. A matrix of dots is projected onto the target area. The two cameras capture the images of the projected pattern on the object. The images are processed by advanced subpixel resolution algorithms prior to the application of the 3D reconstruction technique. The strength of the method lies in a different approach to calibration, 3D reconstruction, and high-precision image analysis algorithms. Using a 10 mm pitch pattern of light dots, the method is capable of reconstructing the 3D shape of surfaces. The precision (1σ repeatability) of the measurements is <10 µm over a volume of 60 × 50 × 10 cm³ at a hardware cost of ~2% of that of available advanced measurement techniques. The expanded uncertainty (95% confidence level) is estimated to be 83 µm, with the largest uncertainty contribution coming from the absolute length of the metal ruler used as reference.

  14. Preliminary LSF and MTF determination for the stereo camera of the BepiColombo mission

    NASA Astrophysics Data System (ADS)

    Simioni, Emanuele; Da Deppo, Vania; Naletto, Giampiero; Borrelli, Donato; Dami, Michele; Ficai Veltroni, Iacopo; Tommasi, Leonardo; Cremonese, Gabriele

    2014-08-01

    In the context of a stereo camera, measuring the image quality allows one to define the accuracy of the 3D reconstruction. In fact, depending on the precision of the camera position data, on the kind of reconstruction algorithm, and on the adopted camera model, it determines the vertical accuracy of the reconstructed terrain model. The aim of this work is to describe the results and the method implemented to determine the Line Spread Function (LSF) of the Stereoscopic Channel (STC) of the SIMBIO-SYS imaging system for the BepiColombo mission. BepiColombo is cornerstone mission n. 5 of the European Space Agency, dedicated to the exploration of the innermost planet of the Solar System, Mercury, and it is expected to be launched in 2016. STC is a double push-frame single-detector camera composed of two identical sub-channels looking at ±21° with respect to the nadir direction. STC has been designed so as to have many optical elements common to both sub-channels. The image focal plane is also common to the sub-channels, and this permits the use of a single detector for the acquisition of the two images, i.e. one for each viewing direction. Considering the novelty of the design, conceived to sustain a harsh environment and to be as compact as possible, the STC unit is very complex. To obtain the most accurate 3D reconstruction of the Mercury surface, a camera model as precise as possible is needed, and an ad-hoc calibration set-up has been designed to calibrate the instrument both from the usual geometrical and radiometrical points of view and, more specifically, for the instrument's stereo capability. In this context, the LSF estimation was performed with a new method applying a particular oversampling approach to the curve fitting, determining first the transfer function of the entire calibration system and finally the optical properties of the instrument itself.

  15. Three-Dimensional Reconstruction Optical System Using Shadows Triangulation

    NASA Astrophysics Data System (ADS)

    Barba, J. Leiner; Vargas, Q. Lorena; Torres, M. Cesar; Mattos, V. Lorenzo

    2008-04-01

    In this work a three-dimensional reconstruction system is developed using the Shades3D tool of the MATLAB programming language and low-cost materials, such as a webcam, a stick, a weak structured-lighting system composed of a desk lamp, and an observation plane on which the object is located. The reconstruction is obtained through a triangulation process executed after acquiring a sequence of images of the scene with a shadow projected onto the object; additionally, an image filtering process is applied to retain only the part of the scene to be reconstructed. Beforehand, it is necessary to perform a calibration process to determine the camera's internal geometric and optical characteristics (intrinsic parameters) and the 3D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters). The lamp and the stick are used to produce a shadow which scans the object; in this technique, it is not necessary to know the position of the light source; instead, the triangulation uses the shadow plane produced by the intersection between the stick and the illumination pattern. The webcam captures all images with the shadow scanning the object, and the Shades3D tool processes all the information, taking into account the captured images and calibration parameters. Likewise, this technique is evaluated in the reconstruction of parts of the human body and its application to the detection of external abnormalities and the fabrication of prostheses or implants.
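
    The triangulation itself is a ray-plane intersection (a minimal sketch with assumed calibrated quantities): each shadowed pixel is back-projected to a ray, which is then intersected with the shadow plane estimated for that frame.

        import numpy as np

        def backproject(pixel, K, R):
            """World-frame ray direction through a pixel of a calibrated camera.
            K: intrinsics, R: world-to-camera rotation."""
            u, v = pixel
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
            return R.T @ ray_cam          # rotate the ray into the world frame

        def shadow_point(C, ray, n, d):
            """Intersect the ray X = C + t*ray with the shadow plane n.X = d."""
            t = (d - n @ C) / (n @ ray)
            return C + t * ray

    The camera center C, rotation R, and intrinsics K come from the calibration step described above; the plane (n, d) is re-estimated for every frame as the shadow sweeps across the scene.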

  16. One frame subnanosecond spectroscopy camera

    NASA Astrophysics Data System (ADS)

    Silkis, E. G.; Titov, V. D.; Fel'Dman, G. G.; Zhilkina, V. M.; Petrokovich, O. A.; Syrtsev, V. N.

    1991-04-01

    The recording of ultraweak spectra is presently undertaken by a high-speed multichannel spectrum camera (HSMSC) with subnanosecond-range time resolution in its photon-counting mode. This HSMSC's photodetector is a one-frame streak tube equipped with a grid shutter, which is connected via fiber-optic contact to a linear CCD. The gain furnished by the streak tube, based on a microchannel plate, is sufficiently high for recording single-photoelectron signals. The HSMSC is compact and easy to handle.

  17. Digital laser scanning fundus camera.

    PubMed

    Plesch, A; Klingbeil, U; Bille, J

    1987-04-15

    Imaging and documentation of the human retina for clinical diagnostics are conventionally achieved by classical optical methods. We designed a digital laser scanning fundus camera. The optoelectronic instrument is based on scanning laser illumination of the retina and a modified video imaging procedure. It is coupled to a digital image buffer and a microcomputer for image storage and processing. Aside from its high sensitivity, the LSF incorporates new ophthalmic imaging methods such as polarization differential contrast. We give design considerations as well as a description of the instrument and its performance.

  18. Video cameras on wild birds.

    PubMed

    Rutz, Christian; Bluff, Lucas A; Weir, Alex A S; Kacelnik, Alex

    2007-11-02

    New Caledonian crows (Corvus moneduloides) are renowned for using tools for extractive foraging, but the ecological context of this unusual behavior is largely unknown. We developed miniaturized, animal-borne video cameras to record the undisturbed behavior and foraging ecology of wild, free-ranging crows. Our video recordings enabled an estimate of the species' natural foraging efficiency and revealed that tool use, and choice of tool materials, are more diverse than previously thought. Video tracking has potential for studying the behavior and ecology of many other bird species that are shy or live in inaccessible habitats.

  19. Hanford Environmental Dose Reconstruction Project: Monthly Report

    SciTech Connect

    Finch, S.M.

    1990-07-01

    This monthly report summarizes the technical progress and project status for the Hanford Environmental Dose Reconstruction (HEDR) Project being conducted at the Pacific Northwest Laboratory (PNL) under the direction of a Technical Steering Panel (TSP). The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demographics, Agriculture, Food Habits, and Environmental Pathways and Dose Estimates. 3 figs.

  20. Proton computed tomography images with algebraic reconstruction

    NASA Astrophysics Data System (ADS)

    Bruzzi, M.; Civinini, C.; Scaringella, M.; Bonanno, D.; Brianzi, M.; Carpinelli, M.; Cirrone, G. A. P.; Cuttone, G.; Presti, D. Lo; Maccioni, G.; Pallotta, S.; Randazzo, N.; Romano, F.; Sipala, V.; Talamonti, C.; Vanzi, E.

    2017-02-01

    A prototype proton Computed Tomography (pCT) system for hadron therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with r.m.s. density resolutions down to 1% and spatial resolutions <1 mm, achieved within processing times of about 15 minutes for a 512×512-pixel image, prove that this technique will be beneficial if used instead of X-ray CT in hadron therapy.
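
    For reference, one SART iteration on a sparse path-length system (a generic sketch; the BI-SART block scheme, most-likely-path weighting, and GPU details of the paper are omitted) looks like:

        import numpy as np

        def sart_step(A, x, b, lam=0.5, eps=1e-12):
            """A: (rays x voxels) path-length matrix, dense or scipy.sparse;
            b: measured line integrals; x: current image estimate."""
            row_sum = np.asarray(A.sum(axis=1)).ravel() + eps   # per-ray path lengths
            col_sum = np.asarray(A.sum(axis=0)).ravel() + eps   # per-voxel weights
            resid = (b - A @ x) / row_sum                       # normalized residuals
            return x + lam * (A.T @ resid) / col_sum            # relaxed update

    In pCT the rows of A are built from the curved most-likely paths rather than straight rays, which is the main difference from the X-ray CT formulation.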

  1. Optical boundary reconstruction of tokamak plasmas for feedback control of plasma position and shape

    NASA Astrophysics Data System (ADS)

    Hommen, G.; de Baar, M.; Nuij, P.; McArdle, G.; Akers, R.; Steinbuch, M.

    2010-11-01

    A new diagnostic is developed to reconstruct the plasma boundary using visible wavelength images. Exploiting the plasma's edge localized and toroidally symmetric emission profile, a new coordinate transform is presented to reconstruct the plasma boundary from a poloidal view image. The plasma boundary reconstruction is implemented in MATLAB and applied to camera images of Mega-Ampere Spherical Tokamak discharges. The optically reconstructed plasma boundaries are compared to magnetic reconstructions from the offline reconstruction code EFIT, showing very good qualitative and quantitative agreement. Average errors are within 2 cm and correlation is high. In the current software implementation, plasma boundary reconstruction from a single image takes 3 ms. The applicability and system requirements of the new optical boundary reconstruction, called OFIT, for use in both feedback control of plasma position and shape and in offline reconstruction tools are discussed.

  2. Optical boundary reconstruction of tokamak plasmas for feedback control of plasma position and shape.

    PubMed

    Hommen, G; de Baar, M; Nuij, P; McArdle, G; Akers, R; Steinbuch, M

    2010-11-01

    A new diagnostic is developed to reconstruct the plasma boundary using visible wavelength images. Exploiting the plasma's edge localized and toroidally symmetric emission profile, a new coordinate transform is presented to reconstruct the plasma boundary from a poloidal view image. The plasma boundary reconstruction is implemented in MATLAB and applied to camera images of Mega-Ampere Spherical Tokamak discharges. The optically reconstructed plasma boundaries are compared to magnetic reconstructions from the offline reconstruction code EFIT, showing very good qualitative and quantitative agreement. Average errors are within 2 cm and correlation is high. In the current software implementation, plasma boundary reconstruction from a single image takes 3 ms. The applicability and system requirements of the new optical boundary reconstruction, called OFIT, for use in both feedback control of plasma position and shape and in offline reconstruction tools are discussed.

  3. Optical boundary reconstruction of tokamak plasmas for feedback control of plasma position and shape

    SciTech Connect

    Hommen, G.; Baar, M. de; Nuij, P.; Steinbuch, M.; McArdle, G.; Akers, R.

    2010-11-15

    A new diagnostic is developed to reconstruct the plasma boundary using visible wavelength images. Exploiting the plasma's edge localized and toroidally symmetric emission profile, a new coordinate transform is presented to reconstruct the plasma boundary from a poloidal view image. The plasma boundary reconstruction is implemented in MATLAB and applied to camera images of Mega-Ampere Spherical Tokamak discharges. The optically reconstructed plasma boundaries are compared to magnetic reconstructions from the offline reconstruction code EFIT, showing very good qualitative and quantitative agreement. Average errors are within 2 cm and correlation is high. In the current software implementation, plasma boundary reconstruction from a single image takes 3 ms. The applicability and system requirements of the new optical boundary reconstruction, called OFIT, for use in both feedback control of plasma position and shape and in offline reconstruction tools are discussed.

  4. Observations of the Perseids 2012 using SPOSH cameras

    NASA Astrophysics Data System (ADS)

    Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.

    2012-09-01

    The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from the Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer to monitor Perseid activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract, and it is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024×1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°×120° field of view (168° over the diagonal). Figure 1: A meteor captured by the SPOSH cameras simultaneously during the 2011 observing campaign in Greece; the horizon, including surrounding mountains, can be seen in the image corners as a result of the large FOV of the camera. The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August new Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitudes for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will give us the possibility to study the poorly observed post-maximum branch of the Perseid stream and compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories

  5. Pinhole Camera For Viewing Electron Beam Materials Processing

    NASA Astrophysics Data System (ADS)

    Rushford, M. C.; Kuzmenko, P. J.

    1986-10-01

    A very rugged, compact (4×4×10 inches), gas-purged "pinhole camera" has been developed for viewing electron beam materials processing (e.g. melting or vaporizing metal). The video image is computer processed, providing dimensional and temperature measurements of objects within the field of view, using an IBM PC. The pinhole camera concept is similar to a TRW optics system for viewing into a coal combustor through a 2 mm hole. Gas is purged through the hole to repel particulates from optical surfaces. In our system, light from the molten metal passes through the 2 mm pinhole, reflects off an aluminum-coated glass substrate, and passes through a window into a vacuum-tight container holding the camera and optics at atmospheric pressure. The mirror filters out X-rays, which pass through the Al layer and are absorbed in the glass mirror substrate. Since metallic coatings are usually reflective, the image quality is not severely degraded by small amounts of vapor that overcome the gas purge to reach the mirror; coating thicknesses of up to 2 microns can be tolerated. The mirror is the only element needing occasional servicing. We used a telescope eyepiece as a convenient optical design, but with the traditional optical path reversed: the eyepiece images a scene through a small entrance aperture onto an image plane where a CCD camera is placed. Since the iris of the eyepiece is fixed and the scene intensity varies, it was necessary to employ a variable neutral-density filter for brightness control. Devices used for this purpose include a PLZT light valve from Motorola, mechanically rotated linear polarizer sheets, and nematic liquid-crystal light valves. These were placed after the mirror and entrance aperture but before the lens to operate as a voltage-variable neutral-density filter. The molten metal surface temperature being viewed varies from 4000 to 1200 degrees Kelvin. The resultant intensity change (at 488 nm with 10 nm bandwidth) is seven orders of magnitude. This

  6. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
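
    The 136-year rollover quoted above is consistent with a 32-bit seconds counter (an assumption; the article does not state the time code's word size):

        seconds = 2 ** 32
        years = seconds / (365.25 * 24 * 3600)
        print(f"{years:.1f}")   # ~136.1 years before the time code repeats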

  7. Light field panorama by a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Xue, Zhou; Baboulaz, Loic; Prandoni, Paolo; Vetterli, Martin

    2013-03-01

    The consumer-grade plenoptic camera Lytro draws a lot of interest from both the academic and industrial worlds. However, its low resolution in both the spatial and angular domains prevents it from being used for fine and detailed light field acquisition. This paper proposes to use a plenoptic camera as an image scanner and to perform light field stitching to increase the size of the acquired light field data. We consider a simplified plenoptic camera model comprising a pinhole camera moving behind a thin lens. Based on this model, we describe how to perform light field acquisition and stitching under two different scenarios: by camera translation, or by camera translation and rotation. In both cases, we assume the camera motion to be known. In the case of camera translation, we show how the acquired light fields should be resampled to increase the spatial range and ultimately obtain a wider field of view. In the case of camera translation and rotation, the camera motion is calculated such that the light fields can be directly stitched and extended in the angular domain. Simulation results verify our approach and demonstrate the potential of the motion model for further light field applications such as registration and super-resolution.

  8. The Pan-STARRS Gigapixel Camera

    NASA Astrophysics Data System (ADS)

    Tonry, J.; Onaka, P.; Luppino, G.; Isani, S.

    The Pan-STARRS project will undertake repeated surveys of the sky to find "Killer Asteroids" and everything else that moves or blinks, and to build an unprecedentedly deep and accurate "static sky". The key enabling technology is a new generation of large-format cameras that offer an order of magnitude improvement in size, speed, and cost compared to existing instruments. In this talk, we provide an overview of the camera research and development effort being undertaken by the Institute for Astronomy Camera Group in partnership with MIT Lincoln Laboratory. The main components of the camera subsystem will be identified and briefly described as an introduction to the more specialized talks presented elsewhere at this conference. We will focus on the development process followed at the IfA utilizing the orthogonal transfer CCD in building cameras of various sizes, from a single-OTA "mcam", to a 16-OTA "Test Camera", to the final 64-OTA 1.4-billion-pixel camera (Gigapixel Camera #1 or GPC1) to be used for PS1 survey operations. We also show the design of a deployable Shack-Hartmann device residing in the camera and other auxiliary instrumentation used to support camera operations.

  9. Computational cameras: convergence of optics and processing.

    PubMed

    Zhou, Changyin; Nayar, Shree K

    2011-12-01

    A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
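
    The light-field formulation of the survey can be made concrete: if the optics are summarized as a per-ray modulation, the captured image is the projection of the modulated 4D light field onto the 2D sensor. A minimal sketch with hypothetical arrays:

      import numpy as np

      # L[u, v, s, t]: 4D light field (angular u, v; spatial s, t)
      # m[u, v, s, t]: per-ray modulation applied by the optics
      # (all ones for a conventional camera; a coded pattern otherwise)
      L = np.random.rand(8, 8, 64, 64)
      m = np.ones_like(L)

      image = (m * L).sum(axis=(0, 1))   # project angular dims onto sensor
      print(image.shape)                 # (64, 64)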

  10. A Unifying Theory for Camera Calibration.

    PubMed

    Ramalingam, SriKumar; Sturm, Peter

    2016-07-19

    This paper proposes a unified theory for calibrating a wide variety of camera models such as pinhole, fisheye, catadioptric, and multi-camera networks. We model any camera as a set of image pixels and their associated camera rays in space. Every pixel measures the light traveling along a (half-) ray in 3-space, associated with that pixel. By this definition, calibration simply refers to the computation of the mapping between pixels and the associated 3D rays. Such a mapping can be computed using images of calibration grids, which are objects with known 3D geometry, taken from unknown positions. This general camera model allows the representation of non-central cameras; we also consider two special subclasses, namely central and axial cameras. In a central camera, all rays intersect in a single point, whereas the rays are completely arbitrary in a non-central one. Axial cameras are an intermediate case: the camera rays intersect a single line. In this work, we show the theory for calibrating central, axial and non-central models using calibration grids, which can be either three-dimensional or planar.
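
    For the central (pinhole) subclass, the pixel-to-ray mapping has a closed form: every pixel maps to a ray through the single camera center. A minimal sketch, assuming known intrinsics K:

      import numpy as np

      def pixel_ray(K: np.ndarray, u: float, v: float) -> np.ndarray:
          """Return the unit direction of the 3D ray seen by pixel (u, v)
          of a central camera with intrinsic matrix K. All rays pass
          through the camera center, so the ray origin is implicit."""
          d = np.linalg.inv(K) @ np.array([u, v, 1.0])
          return d / np.linalg.norm(d)

      K = np.array([[800.0,   0.0, 320.0],
                    [  0.0, 800.0, 240.0],
                    [  0.0,   0.0,   1.0]])
      # Principal point maps to the optical axis [0, 0, 1]:
      print(pixel_ray(K, 320.0, 240.0))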

  11. Optimising camera traps for monitoring small mammals.

    PubMed

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  12. Tracking hurricane paths

    NASA Technical Reports Server (NTRS)

    Prabhakaran, Nagarajan; Rishe, Naphtali; Athauda, Rukshan

    1997-01-01

    The South East coastal region experiences hurricane threat for almost six months of every year. To improve the accuracy of hurricane forecasts, meteorologists need the storm paths of both the present and the past. A hurricane path can be established if we can identify the correct position of the storm at different times, from its birth to its end. We propose a method based on both spatial and temporal image correlations to locate the position of a storm from satellite images. During the hurricane season, the satellite images of the Atlantic ocean near the equator are examined for hurricane presence. This is accomplished in two steps. In the first step, only segments with more than a particular value of cloud cover are selected for analysis. Next, we apply image processing algorithms to test for the presence of a hurricane eye in the segment. If the eye is found, the coordinate of the eye is recorded along with the time stamp of the segment. If the eye is not found, we examine adjacent segments for the existence of a hurricane eye. It is probable that more than one hurricane eye could be found in different segments of the same period. Hence, the above process is repeated until the entire potential area for hurricane birth is exhausted. The subsequent/previous position of each hurricane eye is then searched for in the appropriate adjacent segments of the next/previous period to mark the hurricane path. The temporal coherence and spatial coherence of the images are taken into account by our scheme in determining the segments and the associated periods required for analysis.
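
    The linking of per-period eye detections into a path can be sketched as a simple nearest-neighbour association across adjacent periods. The detection coordinates below are hypothetical, standing in for the output of the segment screening and eye test described above:

      def link_eye_detections(detections, max_step):
          """Link per-period hurricane-eye detections into paths.
          `detections` maps period index -> list of (x, y) eye positions.
          Detections in adjacent periods join the same path if they lie
          within `max_step` (Manhattan distance) of each other."""
          paths = []
          for t in sorted(detections):
              for (x, y) in detections[t]:
                  for path in paths:
                      tp, (xp, yp) = path[-1]
                      if tp == t - 1 and abs(xp - x) + abs(yp - y) <= max_step:
                          path.append((t, (x, y)))
                          break
                  else:
                      paths.append([(t, (x, y))])   # start a new path
          return paths

      dets = {0: [(10, 10)], 1: [(12, 11)], 2: [(14, 13), (40, 40)]}
      print(link_eye_detections(dets, max_step=6))
      # one 3-step path plus a separate single detection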

  13. Critical Path Web Site

    NASA Technical Reports Server (NTRS)

    Robinson, Judith L.; Charles, John B.; Rummel, John A. (Technical Monitor)

    2000-01-01

    Approximately three years ago, the Agency's lead center for the human elements of spaceflight (the Johnson Space Center), along with the National Space Biomedical Research Institute (NSBRI) (which has the lead role in developing countermeasures), initiated an activity to identify the most critical risks confronting extended human spaceflight. Two salient factors influenced this activity: first, what information is needed to enable a "go/no go" decision to embark on extended human spaceflight missions; and second, what knowledge and capabilities are needed to address known and potential health, safety and performance risks associated with such missions. A unique approach was used to first define and assess those risks, and then to prioritize them. This activity was called the Critical Path Roadmap (CPR) and it represents an opportunity to develop and implement a focused and evolving program of research and technology designed from a "risk reduction" perspective to prevent or minimize the risks to humans exposed to the space environment. The Critical Path Roadmap provides the foundation needed to ensure that human spaceflight, now and in the future, is as safe, productive and healthy as possible (within the constraints imposed on any particular mission) regardless of mission duration or destination. As a tool, the Critical Path Roadmap enables the decision maker to select from among the demonstrated or potential risks those that are to be mitigated, and the completeness of that mitigation. The primary audience for the CPR Web Site is the members of the scientific community who are interested in the research and technology efforts required for ensuring safe and productive human spaceflight. They may already be informed about the various space life sciences research programs or they may be newcomers. Providing the CPR content to potential investigators increases the probability of their delivering effective risk mitigations. Others who will use the CPR Web Site and its content

  15. The Zwicky Transient Facility Camera

    NASA Astrophysics Data System (ADS)

    Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.

    2016-08-01

    The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.

  16. Kinect Fusion improvement using depth camera calibration

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing meshed polygonal models of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera's interior and exterior orientation parameters, the Fusion libraries are corrected and a new reconstruction software is created to produce more accurate models.

  17. JAVA PathFinder

    NASA Technical Reports Server (NTRS)

    Mehhtz, Peter

    2005-01-01

    JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations such as deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step it took to reach the defect.

  18. Portage and Path Dependence*

    PubMed Central

    Bleakley, Hoyt; Lin, Jeffrey

    2012-01-01

    We examine portage sites in the U.S. South, Mid-Atlantic, and Midwest, including those on the fall line, a geomorphological feature in the southeastern U.S. marking the final rapids on rivers before the ocean. Historically, waterborne transport of goods required portage around the falls at these points, while some falls provided water power during early industrialization. These factors attracted commerce and manufacturing. Although these original advantages have long since been made obsolete, we document the continuing importance of these portage sites over time. We interpret these results as path dependence and contrast explanations based on sunk costs interacting with decreasing versus increasing returns to scale. PMID:23935217

  19. Reconstruction for proton computed tomography by tracing proton trajectories: a Monte Carlo study.

    PubMed

    Li, Tianfang; Liang, Zhengrong; Singanallur, Jayalakshmi V; Satogata, Todd J; Williams, David C; Schulte, Reinhard W

    2006-03-01

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
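
    The cubic spline path can be written down directly as a cubic Hermite curve interpolating the measured entry and exit positions and directions; a minimal 2D sketch with hypothetical entry and exit values:

      import numpy as np

      def cubic_spline_path(p0, d0, p1, d1, n=100):
          """Cubic Hermite curve from entry position p0 (direction d0) to
          exit position p1 (direction d1), a simple stand-in for the CSP
          estimate. Directions are scaled by the chord length so the
          curve is well shaped."""
          p0, d0, p1, d1 = map(np.asarray, (p0, d0, p1, d1))
          scale = np.linalg.norm(p1 - p0)
          m0 = scale * d0 / np.linalg.norm(d0)
          m1 = scale * d1 / np.linalg.norm(d1)
          t = np.linspace(0.0, 1.0, n)[:, None]
          h00 = 2 * t**3 - 3 * t**2 + 1      # Hermite basis functions
          h10 = t**3 - 2 * t**2 + t
          h01 = -2 * t**3 + 3 * t**2
          h11 = t**3 - t**2
          return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

      # Proton entering horizontally, exiting slightly deflected.
      path = cubic_spline_path([0, 0], [1, 0], [10, 1], [1, 0.3])
      print(path[0], path[-1])   # endpoints match entry and exit positions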

  20. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    SciTech Connect

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.; Satogata, Todd J.; Williams, David C.; Schulte, Reinhard W.

    2006-03-15

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.

  1. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    The indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its light conditions and architecture. Light entering through large windows, in combination with the shape of the apse, makes it difficult to find proper conditions for photographic capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in good enough condition for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since the apse is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  2. Multi-sensor 3D volumetric reconstruction using CUDA

    NASA Astrophysics Data System (ADS)

    Aliakbarpour, Hadi; Almeida, Luis; Menezes, Paulo; Dias, Jorge

    2011-12-01

    This paper presents a full-body volumetric reconstruction of a person in a scene using a sensor network, where some of the sensors can be mobile. The network is composed of couples of a camera and an inertial sensor (IS). Taking advantage of the ISs, the 3D reconstruction is performed without assuming a planar ground. Moreover, the IS in each couple is used to define a virtual camera whose image plane is horizontal and aligned with the Earth's cardinal directions. The IS is furthermore used to define a set of inertial planes in the scene. The image plane of each virtual camera is projected onto this set of parallel, horizontal inertial planes using adapted homography functions. A parallel processing architecture is proposed in order to perform real-time human volumetric reconstruction. The real-time characteristic is obtained by implementing the reconstruction algorithm on a graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). In order to show the effectiveness of the proposed algorithm, a variety of gestures of a person acting in the scene are reconstructed and demonstrated. Analyses have been carried out to measure the performance of the algorithm in terms of processing time. The proposed framework has potential to be used in applications such as smart rooms, human behaviour analysis and 3D teleconferencing.

  3. Laboratory Calibration and Characterization of Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1989-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of non-perpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

  4. Laboratory calibration and characterization of video cameras

    NASA Astrophysics Data System (ADS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1990-08-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.
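
    The lens distortion determined in such calibrations is usually dominated by radial terms; the sketch below shows the common two-coefficient polynomial model (the coefficients are hypothetical), mapping ideal to distorted image coordinates:

      import numpy as np

      def radial_distort(xy, c, k1, k2):
          """Apply a two-coefficient radial distortion model about the
          principal point c: x_d = c + (x - c) * (1 + k1*r^2 + k2*r^4)."""
          xy, c = np.asarray(xy, float), np.asarray(c, float)
          d = xy - c
          r2 = (d ** 2).sum(axis=-1, keepdims=True)
          return c + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

      pts = np.array([[100.0, 80.0], [320.0, 240.0]])
      # The principal point itself is unmoved by radial distortion.
      print(radial_distort(pts, c=[320.0, 240.0], k1=-2e-7, k2=0.0))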

  5. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  6. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.

  7. Hanford Environmental Dose Reconstruction Project monthly report

    SciTech Connect

    Finch, S.M.

    1991-10-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; environmental pathways and dose estimates.

  8. Hanford Environmental Dose Reconstruction Project Monthly Report

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-03-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  9. Hanford Environmental Dose Reconstruction Project. Monthly report

    SciTech Connect

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  10. Interface Reconstruction with Directional Walking

    SciTech Connect

    Yao, J

    2009-05-22

    Young's interface reconstruction on a three-dimensional arbitrary mesh is, in general, rather tedious to implement compared to the case of a regular mesh. The main difficulty comes from the construction of a planar facet that bounds a certain volume inside a cell. Unlike the five basic configurations with a Cartesian mesh, there can be a great number of different configurations in the case of a general mesh. We present a simple method that can derive the topology/geometry of the intersection of arbitrary planar objects in a uniform way. The method is based on directional walking on the surface of the objects, and links the intersection points along the paths of the walking, naturally defining the intersection of the objects. The method works in both two and three dimensions. The method does not take advantage of convexity, so decomposition of an object is not necessary. Therefore, the solution with this method will have a reduced number of edges and less data storage compared with methods that use shape decomposition. The treatment is general for arbitrary polyhedra, and no look-up tables are needed. The same operation can easily be extended to curved geometry. The implementation of this new algorithm shall allow interface reconstruction on an arbitrary mesh to be as simple as it is on a regular mesh. Furthermore, we exactly compute the integral of the partial cell volume bounded by a quadratic interface. Therefore, interface reconstruction with higher than second-order accuracy can be achieved on an arbitrary mesh.
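
    The core geometric step, extracting the part of a cell that lies on one side of a planar interface, can be illustrated in 2D by clipping a polygon against a half-plane. The sketch below is a standard clipping loop, not the paper's directional-walking bookkeeping:

      def clip_halfplane(poly, n, d):
          """Keep the part of polygon `poly` (list of (x, y) vertices, in
          order) where n . p <= d: the 2D analogue of cutting a cell by a
          planar interface."""
          out = []
          for i, p in enumerate(poly):
              q = poly[(i + 1) % len(poly)]
              fp = n[0] * p[0] + n[1] * p[1] - d
              fq = n[0] * q[0] + n[1] * q[1] - d
              if fp <= 0:
                  out.append(p)                   # vertex is on the kept side
              if (fp < 0) != (fq < 0) and fp != fq:
                  t = fp / (fp - fq)              # edge crosses the interface
                  out.append((p[0] + t * (q[0] - p[0]),
                              p[1] + t * (q[1] - p[1])))
          return out

      square = [(0, 0), (1, 0), (1, 1), (0, 1)]
      print(clip_halfplane(square, n=(1, 0), d=0.5))   # left half of square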

  11. Environment reconstruction for robot navigation

    NASA Astrophysics Data System (ADS)

    Bohn, Shawn J.; Thornton, Erin N.

    1994-07-01

    The United States Department of Energy is facing a large task in characterizing and remediating waste tanks and their contents. Because of the hazardous materials inside the waste tanks, all of the work must be done remotely. The purpose of this paper is to show how to reconstruct an enclosed environment from various scans of a Laser Range Finder. The reconstructed environment can then be used by a robot for path planning, and by an operator to monitor the progress of the waste remediation process. Environment reconstruction consists of two tasks: image processing and laser sculpting. The image processing task focuses first on reducing the quantity of low-confidence data and on smoothing random fluctuations in the data. Then the processed range data must be converted into an XYZ Cartesian coordinate space, a process for which we examined two methods. The first method is a geometrical transform of the LRF data. The second uses an artificial neural network to transform the data to XYZ coordinates. Once an XYZ data set is computed, laser sculpting can be performed. Laser sculpting employs a hierarchical tree structure formally called an octree. The octree structure allows efficient storage of volumetric data and the ability to fuse multiple data sets. Our research has allowed us to examine the difficulties of fusing multiple LRF scans into an octree and to develop algorithms for converting an octree structure into a representation of polygon surfaces.

  12. Environment reconstruction for robot navigation

    SciTech Connect

    Bohn, S.; Thornton, E.

    1994-04-01

    The United States Department of Energy (DOE) is facing a large task in characterizing and remediating waste tanks and their contents. Because of the hazardous materials inside the waste tanks, all of the work must be done remotely. The purpose of this paper is to show how to reconstruct an enclosed environment from various scans of a Laser Range Finder (LRF). The reconstructed environment can then be used by a robot for path planning, and by an operator to monitor the progress of the waste remediation process. Environment reconstruction consists of two tasks: image processing and laser sculpting. The image processing task focuses first on reducing the quantity of low-confidence data and on smoothing random fluctuations in the data. Then the processed range data must be converted into an XYZ Cartesian coordinate space, a process for which we examined two methods. The first method is a geometrical transform of the LRF data. The second uses an artificial neural network to transform the data to XYZ coordinates. Once an XYZ data set is computed, laser sculpting can be performed. Laser sculpting employs a hierarchical tree structure formally called an octree. The octree structure allows efficient storage of volumetric data and the ability to fuse multiple data sets. Our research has allowed us to examine the difficulties of fusing multiple LRF scans into an octree and to develop algorithms for converting an octree structure into a representation of polygon surfaces.
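
    The geometrical transform mentioned in both abstracts is the conversion of each range sample and its pan/tilt angles into Cartesian coordinates; a minimal sketch (the angle conventions are an assumption, since they vary by scanner):

      import numpy as np

      def lrf_to_xyz(r, pan, tilt):
          """Convert laser-range-finder samples (range r, pan and tilt in
          radians) to XYZ. Assumes pan about the vertical axis and tilt
          measured up from the horizontal plane."""
          x = r * np.cos(tilt) * np.cos(pan)
          y = r * np.cos(tilt) * np.sin(pan)
          z = r * np.sin(tilt)
          return np.stack([x, y, z], axis=-1)

      print(lrf_to_xyz(np.array([2.0]), np.array([0.0]), np.array([np.pi / 2])))
      # -> approximately [0, 0, 2]: a point straight overhead at range 2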

  13. Validation of a 2D multispectral camera: application to dermatology/cosmetology on a population covering five skin phototypes

    NASA Astrophysics Data System (ADS)

    Jolivot, Romuald; Nugroho, Hermawan; Vabres, Pierre; Ahmad Fadzil, M. H.; Marzani, Franck

    2011-07-01

    This paper presents the validation of a new multispectral camera specifically developed for dermatological applications, based on healthy participants from five different Skin PhotoTypes (SPT). The multispectral system provides images of the skin reflectance at different spectral bands, coupled with a neural-network-based algorithm that reconstructs a hyperspectral cube of cutaneous data from a multispectral image. The flexibility of the neural-network-based algorithm allows reconstruction over different wavelength ranges. The hyperspectral cube provides both high spectral and high spatial information. The study population involves 150 healthy participants, classified by skin phototype according to the Fitzpatrick scale; the population covers five of the six types. Each participant is acquired at three body locations: two skin areas exposed to the sun (hand, face) and one area not exposed to the sun (lower back), and each is reconstructed over three different wavelength ranges. The validation is performed by comparing data acquired with a commercial spectrophotometer against the reconstructed spectrum obtained by averaging the hyperspectral cube. The comparison is computed between 430 and 740 nm due to the limits of the spectrophotometer used. The results reveal that the multispectral camera is able to reconstruct the hyperspectral cube with a goodness-of-fit coefficient greater than 0.997 for the average of all SPTs at each location. The study reveals that the multispectral camera provides accurate reconstruction of the hyperspectral cube, which can be used for analysis of the skin reflectance spectrum.
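
    The goodness-of-fit coefficient reported above is, in its usual definition, the normalized inner product of the measured and reconstructed spectra; a sketch assuming that definition, with a synthetic spectrum:

      import numpy as np

      def goodness_of_fit(s_ref, s_rec):
          """Goodness-of-fit coefficient (GFC) between a reference and a
          reconstructed spectrum: |<s_ref, s_rec>| / (||s_ref|| ||s_rec||).
          Equals 1.0 for spectra identical up to a scale factor."""
          s_ref = np.asarray(s_ref, float)
          s_rec = np.asarray(s_rec, float)
          return abs(s_ref @ s_rec) / (np.linalg.norm(s_ref) *
                                       np.linalg.norm(s_rec))

      wl = np.arange(430, 741, 10)   # 430-740 nm, as in the study
      s_ref = 0.3 + 0.4 * np.exp(-((wl - 560) / 60.0) ** 2)
      print(goodness_of_fit(s_ref, 1.05 * s_ref))   # scaled copy -> 1.0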

  14. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  15. Stationary Camera Aims And Zooms Electronically

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steven D.

    1994-01-01

    Microprocessors select, correct, and orient portions of a hemispherical field of view. The video camera pans, tilts, zooms, and rotates images of objects in the field of view, all without moving parts. Used for surveillance in areas where movement of a camera would be conspicuous or constrained by obstructions. Also used for closeup tracking of multiple objects in the field of view, or to break the image into sectors for simultaneous viewing, thereby replacing several cameras.

  16. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  17. Measuring SO2 ship emissions with an ultra-violet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2013-11-01

    Over the last few years fast-sampling ultra-violet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical fluxes ~1-10 kg s⁻¹) and natural sources (e.g. volcanoes; typical fluxes ~10-100 kg s⁻¹). Generally, measurements have been made from sources rich in SO2 with high concentrations and fluxes. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and fluxes of SO2 (typical fluxes ~0.01-0.1 kg s⁻¹) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the fluxes and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated, and fluxes determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases by the presence of particulates in some ship emissions and by the restriction to single-filter UV imagery, a requirement for fast sampling (>10 Hz) from a single camera. Typical accuracies ranged from 10-30% in path concentration and 10-40% in flux estimation. Despite the ease of use and the ability to determine SO2 fluxes with the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method of monitoring ship emissions for regulatory purposes.
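
    The flux retrieval described above reduces to integrating the SO2 column (path concentration) along a transect across the plume and multiplying by the plume speed; a worked sketch with hypothetical numbers:

      import numpy as np

      # Hypothetical SO2 column densities (kg/m^2) sampled along a transect
      # perpendicular to the plume axis, one value per image column.
      column = np.array([0.0, 0.5e-3, 1.2e-3, 1.5e-3, 0.9e-3, 0.2e-3, 0.0])
      pixel_width_m = 2.0       # ground size of one pixel across the plume
      plume_speed_ms = 4.0      # from image-based tracking or wind data

      # Flux (kg/s) = plume speed * integral of column density across plume
      flux = plume_speed_ms * np.sum(column) * pixel_width_m
      print(f"{flux:.3f} kg/s")   # ~0.034 kg/s, in the quoted ship range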

  18. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer-grade digital cameras, and have concluded that consumer-grade digital cameras are expected to become useful photogrammetric devices for various close-range application fields. On the other hand, mobile phone cameras with 10 megapixels have appeared on the Japanese market. In these circumstances, we face the epoch-making question of whether mobile phone cameras can take the place of consumer-grade digital cameras in close-range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close-range photogrammetry, this paper compares mobile phone cameras and consumer-grade digital cameras with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close-range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer-grade digital cameras and to develop the market in digital photogrammetric fields.

  19. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  20. Omnidirectional underwater camera design and calibration.

    PubMed

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-03-12

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS, in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.

  1. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS, in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
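
    The ray tracing at the heart of the FOV simulator rests on Snell's law applied at each housing interface; a minimal 2D sketch for a single flat port (the indices and geometry are illustrative, not the paper's housing design):

      import numpy as np

      def refract(d, n, n1, n2):
          """Refract unit direction d at an interface with unit normal n
          (pointing toward the incident side), from index n1 into n2.
          Returns None on total internal reflection (vector Snell's law)."""
          d, n = np.asarray(d, float), np.asarray(n, float)
          cos_i = -d @ n
          r = n1 / n2
          k = 1.0 - r * r * (1.0 - cos_i * cos_i)
          if k < 0.0:
              return None                    # total internal reflection
          return r * d + (r * cos_i - np.sqrt(k)) * n

      # Air -> acrylic -> water through a flat port, 30 deg off the normal.
      d = np.array([np.sin(np.radians(30)), -np.cos(np.radians(30))])
      n = np.array([0.0, 1.0])               # port normal, toward the camera
      d1 = refract(d, n, 1.000, 1.49)        # into the acrylic window
      d2 = refract(d1, n, 1.49, 1.333)       # into the water
      print(np.degrees(np.arcsin(d2[0])))    # ~22 deg: the FOV narrows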

  2. LROC - Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. Of particular note is the variety of impact-induced morphologies preserved in a near pristine state in and around kilometer-scale and larger young Copernican age impact craters that include: abundant evidence of impact melt of a variety of rheological properties, including coherent flows with surface textures and planimetric properties reflecting supersolidus (e.g., liquid melt) emplacement, blocks delicately perched on

  3. Gesture recognition on smart cameras

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Chevobbe, Stephane; Darouich, Mehdi

    2013-02-01

    Gesture recognition is a feature of human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several methods of gesture recognition have been developed in recent years. However, most real-time methods are designed to operate on a personal computer with ample computing resources and memory. In this paper, we analyze relevant methods found in the literature in order to investigate the ability of smart cameras to execute gesture recognition algorithms. We elaborate two hand gesture recognition pipelines. The first method is based on invariant moment extraction and the second on fingertip detection. The hand detection method used for both pipelines is based on skin colour segmentation. The results obtained show that the un-optimized versions of the invariant moments method and the fingertip detection method can reach 10 fps on an embedded processor and use about 200 kB of memory.
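
    Skin colour segmentation of the kind used here for hand detection is often a per-pixel threshold in a suitable colour space; a minimal sketch with fixed HSV bounds (the bounds are illustrative, not the paper's values):

      import numpy as np

      def skin_mask_hsv(hsv: np.ndarray) -> np.ndarray:
          """Per-pixel skin mask from an HSV image (H in [0,180), S and V
          in [0,255], the OpenCV convention). The thresholds below are
          illustrative only; a real pipeline would tune them to its
          camera and lighting."""
          h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
          return (h < 25) & (s > 40) & (s < 220) & (v > 60)

      hsv = np.zeros((4, 4, 3), np.uint8)
      hsv[1, 1] = (12, 120, 160)     # a skin-like pixel
      print(skin_mask_hsv(hsv)[1, 1], skin_mask_hsv(hsv)[0, 0])  # True False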

  4. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  5. Explosive Transient Camera (ETC) Program

    NASA Technical Reports Server (NTRS)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  6. Camera processing with chromatic aberration.

    PubMed

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
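
    Lateral chromatic aberration, the magnification error, can be reduced after capture by radially rescaling the red and blue channels to match green. The sketch below shows that simple correction only (the scale factors are hypothetical), not the paper's demosaicking scheme:

      import numpy as np
      from scipy.ndimage import map_coordinates

      def rescale_channel(ch, scale, center):
          """Resample one colour channel with a radial magnification
          `scale` about `center`, compensating the per-channel
          magnification error of lateral chromatic aberration."""
          ys, xs = np.indices(ch.shape, dtype=float)
          cy, cx = center
          src_y = cy + (ys - cy) * scale   # sample from the scaled position
          src_x = cx + (xs - cx) * scale
          return map_coordinates(ch, [src_y, src_x], order=1, mode='nearest')

      rgb = np.random.rand(64, 64, 3)
      center = (31.5, 31.5)
      corrected = np.dstack([
          rescale_channel(rgb[..., 0], 1.002, center),   # shrink red a bit
          rgb[..., 1],                                   # green as reference
          rescale_channel(rgb[..., 2], 0.998, center),   # stretch blue a bit
      ])
      print(corrected.shape)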

  7. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  8. HRSC: High resolution stereo camera

    USGS Publications Warehouse

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  9. MapMaker and PathTracer for tracking carbon in genome-scale metabolic models.

    PubMed

    Tervo, Christopher J; Reed, Jennifer L

    2016-05-01

    Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites.
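
    Path finding over carbon transfer maps reduces to a graph search in which an edge runs from reactant to product only where carbon is transferred; a minimal breadth-first sketch over a toy map (the map is hypothetical, not iJO1366 data):

      from collections import deque

      # Toy carbon transfer map: metabolite -> metabolites that receive its
      # carbon via some reaction (hypothetical; MapMaker would derive this
      # from formulae and stoichiometry, and PathTracer would search it
      # together with COBRA constraints).
      ctm = {
          "glucose": ["g6p"],
          "g6p": ["f6p"],
          "f6p": ["fbp"],
          "fbp": ["dhap", "g3p"],
          "g3p": ["pep"],
          "pep": ["pyruvate"],
      }

      def carbon_path(start, goal):
          """Shortest carbon-tracing path from start to goal, by BFS."""
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in ctm.get(path[-1], []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return None

      print(carbon_path("glucose", "pyruvate"))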

  10. Preliminary results of technique for electron density profile reconstruction from weakly oblique sounding data

    NASA Astrophysics Data System (ADS)

    Kim, Anton G.; Kotovich, Galina V.

    2008-02-01

    In this work the technique for reconstruction of height profile of electron density N(h) from oblique sounding data was applied to weakly oblique sounding data. During the calculations it was supposed that height-frequency characteristics (HFC), obtained at the short path (the path length is ~126 km), is equal to distance-frequency characteristics (DFC), which can be recalculated into HFC of path mid-point. Recalculating of DFC into HFC was made according to modified Smith method in frames of spherically symmetric ionosphere without consideration of Earth's magnetic field. The profile N(h) was reconstructed from recalculated HFC according to Huang-Reinisch method, which is widely used in world digisonde network. Results of comparison between reconstructed N(h)-profiles with profiles obtained according to observations data of FMCW-ionosonde of ISTP, obtained at weakly oblique sounding path Usolie-Tory, and Digisonde DPS-4 in Irkutsk, near the path mid-point, are presented.

  11. 757 Path Loss Measurements

    NASA Technical Reports Server (NTRS)

    Horton, Kent; Huffman, Mitch; Eppic, Brian; White, Harrison

    2005-01-01

    Path loss measurements were obtained on three GPS-equipped 757 aircraft. Systems measured were Marker Beacon, LOC, VOR, VHF (3), Glide Slope, ATC (2), DME (2), TCAS, and GPS. These data provide the basis for assessing the EMI (electromagnetic interference) safety margins of comm/nav (communication and navigation) systems with respect to portable electronic device emissions. These Portable Electronic Devices (PEDs) include all devices operated in or around the aircraft by crews, passengers, and servicing personnel, as well as the general public in airport terminals. EMI assessment capability is an important step in determining whether one system-wide PED EMI policy is appropriate. These data may also be used for comparison with theoretical analysis and computer modeling data sponsored by NASA Langley Research Center and others.

  12. Fundamental study on identification of CMOS cameras

    NASA Astrophysics Data System (ADS)

    Kurosawa, Kenji; Saitoh, Naoki

    2003-08-01

    In this study we discuss individual camera identification for CMOS cameras, because CMOS (complementary metal-oxide-semiconductor) image sensors have begun to displace CCDs (charge-coupled devices) in recent years. Whether or not a given image was taken with a given CMOS camera can be determined by detecting the imager's intrinsic, unique fixed pattern noise (FPN), just as in the individual CCD camera identification method previously proposed by the authors. Both dark and bright pictures taken with CMOS cameras can be identified by the method, because not only dark current in the photodetectors but also the MOS-FET amplifier incorporated in each pixel produces pixel-to-pixel nonuniformity in sensitivity: the per-pixel amplifier degrades the quality of bright images through the nonuniformity of its gain. Two CMOS cameras were evaluated in our experiments: the WebCamGoPlus (Creative), a low-priced web camera, and the EOS D30 (Canon), a professional model. Images of a white plate were recorded with each camera at plate luminances of 0 cd/m2 and 150 cd/m2. Many recorded frames were integrated to reduce the random noise component. Characteristic dot patterns were observed in the images from both cameras: bright dots in the dark images and dark dots in the bright images. The results show that the camera identification method is also effective for CMOS cameras.
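
    The general procedure, frame averaging followed by pattern correlation, is easy to prototype. The sketch below follows the idea of the paper in a generic way (it is not the authors' code): average many frames of a uniform scene to suppress temporal noise, then correlate the residual fixed pattern with a reference pattern from the candidate camera. The simulated sensor data are stand-ins.

```python
# Generic sketch of FPN-based camera identification: average frames, then
# correlate residual patterns. Simulated data; not the authors' pipeline.
import numpy as np

def fpn_pattern(frames):
    """Average frames to suppress temporal noise, then remove the global
    mean so only pixel-to-pixel nonuniformity (the fingerprint) remains."""
    mean_frame = np.mean(frames, axis=0)
    return mean_frame - mean_frame.mean()

def same_camera_score(frames_a, frames_b):
    """Normalized cross-correlation of two FPN fingerprints; values near 1
    suggest the same sensor."""
    pa, pb = fpn_pattern(frames_a), fpn_pattern(frames_b)
    return float(np.sum(pa * pb) / (np.linalg.norm(pa) * np.linalg.norm(pb)))

# Simulated example: two stacks of dark frames from the "same" sensor.
rng = np.random.default_rng(0)
fpn = rng.normal(0, 2.0, (64, 64))                 # fixed per-pixel offsets
stack1 = fpn + rng.normal(0, 5.0, (100, 64, 64))   # FPN + temporal noise
stack2 = fpn + rng.normal(0, 5.0, (100, 64, 64))
print(same_camera_score(stack1, stack2))           # close to 1: same sensor
```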

  13. Trajectory association across multiple airborne cameras.

    PubMed

    Sheikh, Yaser Ajmal; Shah, Mubarak

    2008-02-01

    A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a geometrically motivated likelihood function for evaluating a hypothesized association between observations in multiple cameras. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. that transitive closure is maintained between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models; quantitative performance is also reported through simulation.
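
    For two cameras, the k-dimensional matching reduces to the classic assignment problem, which the sketch below solves with the Hungarian algorithm. The affine-fit residual used as a cost is a stand-in for the paper's geometric likelihood, and the trajectories are assumed time-aligned; everything here is illustrative rather than the authors' formulation.

```python
# Two-camera trajectory association as an assignment problem. The cost is a
# stand-in (affine-fit residual), not the paper's likelihood function.
import numpy as np
from scipy.optimize import linear_sum_assignment

def association_cost(traj_a, traj_b):
    """Least-squares fit of an affine map from trajectory A to trajectory B,
    returning the residual error. Trajectories: arrays of shape (n, 2)."""
    A = np.hstack([traj_a, np.ones((len(traj_a), 1))])
    coeffs, *_ = np.linalg.lstsq(A, traj_b, rcond=None)
    return float(np.sum((A @ coeffs - traj_b) ** 2))

def associate(trajs_cam1, trajs_cam2):
    """Optimal one-to-one association between the two cameras' tracks."""
    cost = np.array([[association_cost(a, b) for b in trajs_cam2]
                     for a in trajs_cam1])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: camera 2 sees shifted, permuted copies of the same motions.
rng = np.random.default_rng(0)
motions = [rng.normal(size=(30, 2)).cumsum(axis=0) for _ in range(3)]
cam1 = motions
cam2 = [motions[i] + np.array([5.0, -2.0]) for i in (2, 0, 1)]
print(associate(cam1, cam2))  # recovers the permutation: (0,1),(1,2),(2,0)
```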

  14. Observations of the Perseids 2013 using SPOSH cameras

    NASA Astrophysics Data System (ADS)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle with a mass of approximately 1 g or larger with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to that of their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with those of different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and originating in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer to observe the activity of the Perseid meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which was developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. It is equipped with a highly sensitive back-illuminated CCD chip with a resolution of 1024x1024 pixels. The custom-made fish-eye lens offers a 120°x120° field of view (168° over the diagonal), making monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites at high altitude on the Greek Peloponnese peninsula. The baseline of ∼50km

  15. Single Image Camera Calibration in Close Range Photogrammetry for Solder Joint Analysis

    NASA Astrophysics Data System (ADS)

    Heinemann, D.; Knabner, S.; Baumgarten, D.

    2016-06-01

    Printed circuit boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct function of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close range photogrammetry allows determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured which allows for single image camera calibration.
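
    A single view can determine the intrinsics only if the target is non-coplanar, which is presumably why a special three-dimensional target was built. The sketch below shows the generic single-image workflow using OpenCV (not the authors' target or software): known 3D feature coordinates, their detected pixel locations, and a calibration call. The target geometry, image points, and initial guess are all hypothetical; OpenCV requires an initial intrinsic guess for non-planar rigs.

```python
# Generic single-image calibration with a non-coplanar 3D target (OpenCV).
# All coordinates below are hypothetical placeholders.
import numpy as np
import cv2

# Known 3D feature coordinates of the target (non-coplanar: two z-levels).
object_points = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [30, 30, 0],
                          [0, 0, 15], [30, 0, 15], [0, 30, 15], [30, 30, 15]],
                         dtype=np.float32)
# Detected pixel locations of those features in the single image.
image_points = np.array([[100, 120], [400, 118], [102, 420], [398, 422],
                         [140, 160], [360, 158], [142, 380], [358, 382]],
                        dtype=np.float32)

# Non-planar rigs require an initial intrinsic guess in OpenCV.
K_init = np.array([[500.0, 0.0, 320.0],
                   [0.0, 500.0, 240.0],
                   [0.0, 0.0, 1.0]])
flags = (cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_ZERO_TANGENT_DIST
         | cv2.CALIB_FIX_K1 | cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [object_points], [image_points], (640, 480), K_init, None, flags=flags)
print("reprojection RMS (px):", rms)
print("estimated intrinsics:\n", K)
```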

  16. Interactive cutting path analysis programs

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.; Williams, D. S.; Colley, S. R.

    1975-01-01

    The operation of numerically controlled machine tools is interactively simulated. Four programs were developed to graphically display the cutting paths for a Monarch lathe, a Cintimatic mill, and a Strippit sheet metal punch, and the wiring path for a Standard wire wrap machine. These programs run on an IMLAC PDS-ID graphic display system under the DOS-3 disk operating system. The cutting path analysis programs accept input via both paper tape and disk file.

  17. (Almost) Featureless Stereo: Calibration and Dense 3D Reconstruction Using Whole Image Operations

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Morris, R. D.; Maluf, D. A.; Cheeseman, P.

    2001-01-01

    The conventional approach to shape from stereo is via feature extraction and correspondences. This results in estimates of the camera parameters and a typically sparse estimate of the surface. Given a set of calibrated images, a dense surface reconstruction is possible by minimizing the error between the observed image and the image rendered from the estimated surface, with respect to the surface model parameters. Given an uncalibrated image and an estimated surface, the camera parameters can be estimated by minimizing the error between the observed and rendered images as a function of the camera parameters. We use a very small dense set of matched features to provide camera parameter estimates for the initial dense surface estimate. We then re-estimate the camera parameters as described above, and then re-estimate the surface. This process is iterated. While it cannot be proven to converge, we have found that around three iterations result in excellent surface and camera parameter estimates.
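
    The alternation itself is easy to demonstrate on a toy problem. The sketch below is a schematic stand-in, not the paper's renderer: a linear operator plays the role of image formation, a single scalar gain plays the role of the camera parameters, and each step solves its least-squares subproblem with the other variable held fixed.

```python
# Schematic toy of alternating camera/surface re-estimation against a
# photometric error. The linear "renderer" is a stand-in for a real
# image-formation model; all data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
surface_true = rng.normal(size=20)        # toy surface parameters
cam_true = 1.7                            # toy "camera parameter" (a gain)
B = rng.normal(size=(50, 20))             # fixed stand-in render operator
observed = cam_true * (B @ surface_true) + rng.normal(0, 0.01, 50)

surface = np.linalg.lstsq(B, observed, rcond=None)[0]  # initial surface
cam = 1.0
for _ in range(3):                        # ~3 iterations suffice per paper
    # Camera step: best gain given the current surface (closed form).
    rendered = B @ surface
    cam = float(rendered @ observed / (rendered @ rendered))
    # Surface step: least squares given the current camera parameter.
    surface = np.linalg.lstsq(cam * B, observed, rcond=None)[0]

print("camera parameter estimate:", cam)  # approaches 1.7
```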

  18. Camera self-calibration from translation by referring to a known camera.

    PubMed

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation required by the method is much easier to perform than the strict motions assumed in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which have important applications, especially for multicamera and zooming camera systems.
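
    The final decomposition step has a compact numerical form. Since the infinite homography satisfies H = K_new R K_ref^(-1), it follows that (H K_ref)(H K_ref)^T = K_new K_new^T, and an upper-triangular factorization recovers the unknown intrinsics. The sketch below verifies this identity on synthetic numbers; it illustrates only this standard decomposition, not the paper's full depth-recovery pipeline.

```python
# Recover unknown intrinsics from an infinite homography and a calibrated
# reference camera via triangular factorization. Numbers are synthetic.
import numpy as np

def intrinsics_from_infinite_homography(H, K_ref):
    M = (H @ K_ref) @ (H @ K_ref).T
    M /= M[2, 2]                       # H is only defined up to scale
    J = np.fliplr(np.eye(3))           # exchange matrix: flip rows/cols
    L = np.linalg.cholesky(J @ M @ J)  # lower factor of the flipped matrix
    K_new = J @ L @ J                  # upper-triangular factor of M
    return K_new / K_new[2, 2]

# Synthetic check: build H from ground truth and recover K_new.
K_ref = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
K_new_true = np.array([[650.0, 0, 300], [0, 640.0, 250], [0, 0, 1]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
H = K_new_true @ R @ np.linalg.inv(K_ref)
print(intrinsics_from_infinite_homography(H, K_ref))  # ~= K_new_true
```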

  19. Computational Techniques in Radio Neutrino Event Reconstruction

    NASA Astrophysics Data System (ADS)

    Beydler, M.; ARA Collaboration

    2016-03-01

    The Askaryan Radio Array (ARA) is a high-energy cosmic neutrino detector constructed with stations of radio antennas buried in the ice at the South Pole. Event reconstruction relies on the analysis of the arrival times of the transient radio signals generated by neutrinos interacting within a few kilometers of the detector. Because of its depth dependence, the index of refraction in the ice complicates the interferometric directional reconstruction of possible neutrino events. Currently, there is an ongoing endeavor to enhance the programs used for the time-consuming computations of the curved paths of the transient wave signals in the ice as well as the interferometric beamforming. We have implemented a fast, multi-dimensional spline table lookup of the wave arrival times in order to enable raytrace-based directional reconstructions. Additionally, we have applied parallel computing across multiple Graphics Processing Units (GPUs) in order to perform the beamforming calculations quickly.
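
    The table-lookup idea can be illustrated in a few lines: ray-trace arrival times once on a grid of source positions, then answer reconstruction queries by interpolation instead of re-tracing. The sketch below is a generic stand-in using scipy's grid interpolator with a straight-ray placeholder for the travel-time function; it is not the ARA collaboration's code, and the grid and constants are illustrative.

```python
# Precomputed arrival-time table with interpolated lookup, as a stand-in
# for raytraced times in depth-dependent ice. Illustrative values only.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid of hypothetical source positions relative to an antenna (meters).
r = np.linspace(10.0, 3000.0, 60)        # horizontal distance
z = np.linspace(-2800.0, -100.0, 60)     # depth

def traced_time(r, z, c=0.17):
    """Placeholder travel time: straight ray at ~0.17 m/ns (light in ice).
    A real table would come from ray tracing through the ice profile."""
    return np.sqrt(r**2 + z**2) / c

R, Z = np.meshgrid(r, z, indexing="ij")
table = traced_time(R, Z)                # the expensive step, done once

lookup = RegularGridInterpolator((r, z), table)   # fast table lookup
print(lookup([[1500.0, -1000.0]]))       # interpolated arrival time, ns
```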

  20. Be Foil "Filter Knee Imaging" NSTX Plasma with Fast Soft X-ray Camera

    SciTech Connect

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-08-08

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of an m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  1. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As the crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions in the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronization and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in a measurement application. The results demonstrated the feasibility of the sensor and showed considerable accuracy in 3D coordinate reconstruction.

  2. Multi-Criteria Path Finding

    NASA Astrophysics Data System (ADS)

    Mohammadi, E.; Hunter, A.

    2012-07-01

    Path finding solutions are becoming a major part of many GIS applications, including location-based services and web-based GIS services. Most traditional path finding solutions are based on shortest path algorithms that minimize the cost of travel from one point to another. These algorithms make use of a cost criterion that is usually an attribute of the edges in the graph network. Providing a single shortest path limits the user's flexibility when choosing a possible route, especially when more than one parameter is used to calculate cost (e.g., when length, number of traffic lights, and number of turns are combined into the network cost). K-shortest-path solutions overcome this problem by providing the second, third, and Kth shortest paths. These algorithms are efficient as long as the graph's edge weights do not change dynamically and no other parameters affect edge weights. In this paper we go beyond finding shortest paths based on a single cost value and enumerate all possible paths, disregarding any parameter that may affect total cost. After finding all possible paths, we can rank the results by any parameter or combination of parameters without a substantial increase in time complexity.
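
    The enumerate-then-rank idea is straightforward to prototype. The sketch below uses networkx with illustrative edge attributes ("length", "lights"); it shows how the same enumerated path set can be re-ranked under different criteria without recomputing paths, though it is a toy rather than the authors' implementation (exhaustive enumeration scales poorly on large networks).

```python
# Enumerate all simple paths once, then rank by any weighted combination of
# criteria. Graph and attribute names are illustrative.
import networkx as nx

G = nx.DiGraph()
G.add_edge("A", "B", length=4, lights=1)
G.add_edge("B", "D", length=3, lights=2)
G.add_edge("A", "C", length=6, lights=0)
G.add_edge("C", "D", length=2, lights=0)

def ranked_paths(G, source, target, weights):
    """Score every simple path by a weighted sum of edge attributes."""
    def score(path):
        return sum(w * G[u][v][attr]
                   for u, v in zip(path, path[1:])
                   for attr, w in weights.items())
    paths = list(nx.all_simple_paths(G, source, target))
    return sorted(paths, key=score)

# Rank by length alone, then re-rank trading off traffic lights,
# without re-enumerating the paths.
print(ranked_paths(G, "A", "D", {"length": 1.0}))                 # A-B-D first
print(ranked_paths(G, "A", "D", {"length": 1.0, "lights": 2.0}))  # A-C-D first
```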

  3. An introduction to critical paths.

    PubMed

    Coffey, Richard J; Richards, Janet S; Remmert, Carl S; LeRoy, Sarah S; Schoville, Rhonda R; Baldwin, Phyllis J

    2005-01-01

    A critical path defines the optimal sequencing and timing of interventions by physicians, nurses, and other staff for a particular diagnosis or procedure. Critical paths are developed through collaborative efforts of physicians, nurses, pharmacists, and others to improve the quality and value of patient care. They are designed to minimize delays and resource utilization and to maximize quality of care. Critical paths have been shown to reduce variation in the care provided, facilitate expected outcomes, reduce delays, reduce length of stay, and improve cost-effectiveness. The approach and goals of critical paths are consistent with those of total quality management (TQM) and can be an important part of an organization's TQM process.

  4. Neuromagnetic source reconstruction

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.; Leahy, R.M.

    1994-12-31

    In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum-norm-based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
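
    The minimum-norm estimate under discussion has a compact closed form: with lead field L, measurements b, and regularization lam, the estimate s = L^T (L L^T + lam*I)^(-1) b is exactly the Bayesian MAP solution under the Gaussian source prior the review highlights. The sketch below demonstrates it on synthetic stand-in data (the lead-field matrix and dimensions are illustrative).

```python
# Regularized minimum-norm source estimate on synthetic MEG-like data.
# The lead-field matrix here is a random stand-in, not a head model.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.normal(size=(n_sensors, n_sources))   # stand-in lead field
s_true = np.zeros(n_sources)
s_true[42] = 1.0                              # one focal source
b = L @ s_true + rng.normal(0, 0.05, n_sensors)

lam = 1.0  # regularization; plays the role of the Gaussian prior width
s_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
# The estimate peaks near the true source but is spatially smeared --
# the hallmark of the implicit Gaussian (minimum-norm) prior.
print("peak source index:", int(np.argmax(np.abs(s_mne))))
```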

  5. Unbalanced quantized multiple description video transmission using path diversity

    NASA Astrophysics Data System (ADS)

    Ekmekci, Sila; Sikora, Thomas

    2003-05-01

    Multiple description coding is a forward error correction scheme in which two or more descriptions of the source are sent to the receiver over different channels. If only one channel is received, the signal can be reconstructed with distortion D1 or D2. On the other hand, if both channels are received, the combined information is used to achieve a lower distortion D0. Our approach is based on multiple state video coding, with the novelty that we achieve a flexible unbalanced rate across the two streams by varying the quantization step size while keeping the original frame rate constant. The total bitrate R_T, which is to be allocated between the two streams, is fixed. If the assigned bitrates are not balanced, there will be PSNR variations between neighboring frames after reconstruction. Our goal is to find the rate allocation that maximizes the average reconstructed frame PSNR while minimizing the PSNR variations, given the total bitrate R_T and the packet loss probabilities p1 and p2 over the two paths. The reconstruction algorithm is also taken into account in the optimization process. The paper reports results presenting optimal system designs for balanced as well as unbalanced path conditions.
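
    The rate-allocation search can be illustrated with a toy model: split a fixed total bitrate R_T between the two descriptions to minimize expected distortion given the loss probabilities p1 and p2. The exponential rate-distortion model below is a standard textbook stand-in, not the paper's measured curves, and the search is a simple grid scan.

```python
# Toy two-description rate allocation under path loss probabilities.
# D(R) = 2^(-2R) is a standard stand-in rate-distortion model.
import numpy as np

def expected_distortion(r1, r2, p1, p2):
    d = lambda r: 2.0 ** (-2.0 * r)
    return ((1 - p1) * (1 - p2) * d(r1 + r2)   # both received: central D0
            + p2 * (1 - p1) * d(r1)            # only description 1 arrives
            + p1 * (1 - p2) * d(r2)            # only description 2 arrives
            + p1 * p2 * 1.0)                   # both lost: unit variance

def best_split(r_total, p1, p2, steps=400):
    r1 = np.linspace(0.0, r_total, steps)
    costs = [expected_distortion(x, r_total - x, p1, p2) for x in r1]
    return float(r1[int(np.argmin(costs))])

# A lossier second path pushes the optimum toward an unbalanced split
# that gives more rate to the description on the reliable path.
print(best_split(4.0, p1=0.05, p2=0.30))  # r1 well above R_T / 2
```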

  6. New camera tube improves ultrasonic inspection system

    NASA Technical Reports Server (NTRS)

    Berger, H.; Collis, W. J.; Jacobs, J. E.

    1968-01-01

    Electron multiplier, incorporated into the camera tube of an ultrasonic imaging system, improves resolution, effectively shields low level circuits, and provides a high level signal input to the television camera. It is effective for inspection of metallic materials for bonds, voids, and homogeneity.

  7. Making a room-sized camera obscura

    NASA Astrophysics Data System (ADS)

    Flynt, Halima; Ruiz, Michael J.

    2015-01-01

    We describe how to convert a room into a camera obscura as a project for introductory geometrical optics. The view for our camera obscura is a busy street scene set against a beautiful mountain skyline. We include a short video with project instructions, ray diagrams and delightful moving images of cars driving on the road outside.

  8. Solid state replacement of rotating mirror cameras

    NASA Astrophysics Data System (ADS)

    Frank, Alan M.; Bartolick, Joseph M.

    2007-01-01

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed 'In-situ Storage Image Sensor' or 'ISIS', by Prof. Goji Etoh, has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. Though there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, it is clear that the ISIS architecture has the potential to approach their performance.

  9. Single chip camera active pixel sensor

    NASA Technical Reports Server (NTRS)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  10. Solid State Replacement of Rotating Mirror Cameras

    SciTech Connect

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed 'In-situ Storage Image Sensor' or 'ISIS', by Prof. Goji Etoh, has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. Though there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, it is clear that the ISIS architecture has the potential to approach their performance.

  11. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  12. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  13. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  14. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  15. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  16. Controlled Impact Demonstration (CID) tail camera video

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Controlled Impact Demonstration (CID) was a joint research project by NASA and the FAA to test a survivable aircraft impact using a remotely piloted Boeing 720 aircraft. The tail camera movie is one shot running 27 seconds. It shows the impact from the perspective of a camera mounted high on the vertical stabilizer, looking forward over the fuselage and wings.

  17. AIM: Ames Imaging Module Spacecraft Camera

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low-power, low-cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of spacecraft cameras costing upwards of $1M, it does so at a fraction of the mass, power and cost budget.

  18. Creating and Using a Camera Obscura

    ERIC Educational Resources Information Center

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…

  19. Cameras Monitor Spacecraft Integrity to Prevent Failures

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  20. Thermal Cameras in School Laboratory Activities

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal cameras offer real-time visual access to otherwise invisible thermal phenomena, which are conceptually demanding for learners during traditional teaching. We present three studies in which students conducted laboratory activities that employ thermal cameras to teach challenging thermal concepts in grades 4, 7 and 10-12. Visualization of…