Sample records for angle camera images

  1. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

In this paper, we investigated publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even in the few datasets that include off-angle iris images, the frontal and off-angle images are not captured at the same time. Comparison of the frontal and off-angle iris images shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. We therefore developed an iris image acquisition platform using two cameras, where one camera captures the frontal iris image and the other captures the iris image from off-angle. Based on a comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup by between 0.001 and 0.05. These results show that a two-camera setup is necessary for accurate off-angle iris recognition research, since it allows the challenging issues to be distinguished from one another.
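
    For context, iris codes are conventionally compared with a fractional Hamming distance computed over the bits left unoccluded in both images. The sketch below is a minimal illustration of that metric, not the authors' implementation; the code length and the number of flipped bits are arbitrary stand-ins.

```python
# Minimal sketch: masked, normalized Hamming distance between binary iris codes.
import numpy as np

def iris_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits among the bits valid in both codes."""
    valid = mask_a & mask_b                  # bits unoccluded in both images
    if valid.sum() == 0:
        return 1.0                           # no comparable bits at all
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048).astype(bool)    # hypothetical 2048-bit iris code
b = a.copy()
b[:100] ^= True                              # flip 100 bits to mimic off-angle distortion
m = np.ones(2048, dtype=bool)                # no occlusion in this toy example
print(iris_hamming_distance(a, b, m, m))     # ~0.049
```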

  2. The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.

    ERIC Educational Resources Information Center

    McCain, Thomas A.; Wakshlag, Jacob J.

The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight videotaped television newscasts which were subsequently presented to eight…

  3. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. The traffic sign recognition is developed within an originally proposed dual-focal active camera system, in which a telephoto camera serves as an assistant to a wide-angle camera. The telephoto camera can capture a high-resolution image of an object of interest in the field of view of the wide-angle camera, providing enough information for recognition when the resolution of the traffic sign in the wide-angle image is too low. In the proposed system, traffic sign detection and classification are processed separately on the different images from the wide-angle camera and the telephoto camera. In addition, to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution cascade detector is trained and used to locate traffic signs at low resolution in the image from the wide-angle camera. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide-angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
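
    The abstract does not give the exact form of the lighting-invariant color transformation; the sketch below illustrates the general idea with a simple chromaticity-style normalization, which cancels overall illumination intensity and emphasizes pixels where red (a common traffic sign color) dominates. The function name and details are illustrative assumptions, not the paper's transformation.

```python
# Minimal sketch: intensity-normalized "red dominance" map for sign detection.
import numpy as np

def red_emphasis(rgb):
    """Chromaticity-style transform; per-pixel brightness cancels out."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-6      # per-pixel brightness
    r, g, b = np.moveaxis(rgb / s, -1, 0)           # normalized chromaticities
    return np.clip(r - np.maximum(g, b), 0.0, 1.0)  # high where red dominates

# A detector would then be trained on this map rather than on raw RGB.
```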

  4. A position and attitude vision measurement system for wind tunnel slender model

    NASA Astrophysics Data System (ADS)

    Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi

    2014-11-01

A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one placed to the side of the model, and the other placed where it can look up at the model. Simple symbols are set on the model. The main idea of the system is image matching between projection images of the 3D digital model and the images captured by the cameras. First, the pitch angle, roll angle, and centroid position of the model are estimated by recognizing the symbols in the images captured by the side camera. Then, based on the estimated attitude information and a series of candidate yaw angles, a series of projection images of the 3D digital model is generated. Finally, these projection images are matched against the image captured by the upward-looking camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments show that the maximum attitude measurement error is less than 0.05°, which meets the demands of wind tunnel testing.
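
    The yaw search described above amounts to rendering the 3D digital model at a series of candidate yaw angles and keeping the rendering that best matches the upward-looking camera image. A minimal sketch follows, using normalized cross-correlation as an assumed similarity measure and synthetic stand-in images in place of the actual renderer.

```python
# Minimal sketch: pick the yaw whose rendered projection best matches the image.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# hypothetical: projections[i] is the model rendered at yaw_candidates[i] degrees
yaw_candidates = np.arange(-10.0, 10.5, 0.5)
rng = np.random.default_rng(1)
projections = [rng.random((64, 64)) for _ in yaw_candidates]
observed = projections[17] + 0.05 * rng.random((64, 64))   # stand-in camera image

best = max(range(len(yaw_candidates)), key=lambda i: ncc(projections[i], observed))
print("estimated yaw:", yaw_candidates[best])               # -> -1.5 (index 17)
```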

  5. ARC-1990-AC79-7127

    NASA Image and Video Library

    1990-02-14

Range: 4 billion miles from Earth, at 32 degrees above the ecliptic. P-36057C This color image of the Sun, Earth, and Venus is one of the first, and maybe only, images that show our solar system from such a vantage point. The image is a portion of a wide-angle image containing the Sun and the region of space where the Earth and Venus were at the time, with narrow-angle frames centered on each planet. The wide-angle image was taken with the camera's darkest filter, a methane absorption band, and the shortest possible exposure, one two-hundredth of a second, to avoid saturating the camera's vidicon tube with scattered sunlight. The Sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system, yet it is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The image of the Sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright, burned-out image with multiple reflections from the optics of the camera. The rays around the Sun are a diffraction pattern of the calibration lamp, which is mounted in front of the wide-angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaicked into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce the color image. The violet, green, and blue filters were used, with exposure times of 0.72, 0.48, and 0.72 seconds for Earth, and 0.36, 0.24, and 0.36 seconds for Venus. The images also show long linear streaks resulting from scattering of sunlight off parts of the camera and its sun shade.

  6. Reconditioning of Cassini Narrow-Angle Camera

    NASA Image and Video Library

    2002-07-23

These five images of single stars, taken at different times with the narrow-angle camera on NASA's Cassini spacecraft, show the effects of haze collecting on the camera optics, followed by the successful removal of the haze by warming treatments.

  7. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  8. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
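
    The least-squares fit described above is essentially a resection from angular directions: solve for the camera position whose predicted bearings to known features best match the directions measured in the panorama. A minimal 2D sketch with made-up feature coordinates is below; it is not the authors' code.

```python
# Minimal sketch: 2D camera position from bearings to known features.
import numpy as np
from scipy.optimize import least_squares

# hypothetical feature coordinates (meters) identified in the orthoimage
features = np.array([[120.0, 40.0], [-80.0, 95.0], [30.0, -150.0], [-60.0, -70.0]])
true_pos = np.array([3.0, -2.0])
bearings = np.arctan2(features[:, 1] - true_pos[1], features[:, 0] - true_pos[0])
bearings += np.deg2rad(0.1) * np.random.default_rng(2).standard_normal(len(features))

def residuals(p):
    pred = np.arctan2(features[:, 1] - p[1], features[:, 0] - p[0])
    d = pred - bearings
    return np.arctan2(np.sin(d), np.cos(d))    # wrap differences to [-pi, pi]

sol = least_squares(residuals, x0=np.zeros(2))
print("estimated camera position:", sol.x)     # close to (3.0, -2.0)
```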

  9. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  10. A wide-angle camera module for disposable endoscopy

    NASA Astrophysics Data System (ADS)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope was implemented to perform pre-clinical animal testing, in which the esophagus of an adult beagle dog was observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  11. Non-contact measurement of rotation angle with solo camera

    NASA Astrophysics Data System (ADS)

    Gan, Xiaochuan; Sun, Anbin; Ye, Xin; Ma, Liqun

    2015-02-01

To measure the rotation angle of an object around its axis, a non-contact rotation angle measurement method based on a single camera is proposed. The intrinsic parameters of the camera were calibrated with a chessboard according to planar calibration theory. The translation matrix and rotation matrix between the object coordinate frame and the camera coordinate frame were calculated from the relationship between the corner positions on the object and their coordinates in the image. The rotation angle between the measured object and the camera could then be resolved from the rotation matrix. A precise angle dividing table (PADT) was chosen as the reference to verify the angle measurement error of this method. Test results indicated that the rotation angle measurement error of this method did not exceed +/- 0.01 degree.
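
    A minimal sketch of this kind of pipeline using OpenCV's standard chessboard and pose routines is shown below. The camera matrix, board geometry, and the choice of rotation axis are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: rotation angle from a chessboard pose (OpenCV).
import cv2
import numpy as np

# assumed intrinsics from a prior calibration; a 9x6 board with 20 mm squares
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
pattern = (9, 6)
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = 20.0 * np.mgrid[0:9, 0:6].T.reshape(-1, 2)   # board coords, mm

def rotation_angle_deg(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if not ok:
        return None
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)                 # object-to-camera rotation matrix
    # in-plane (z-axis) component of the rotation, as one Euler-style angle
    return float(np.degrees(np.arctan2(R[1, 0], R[0, 0])))
```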

  12. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2016-04-01

Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb; these are important for imaging shadowed craters, studying ~1 meter size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.

  13. Investigation into the use of photoanthropometry in facial image comparison.

    PubMed

    Moreton, Reuben; Morley, Johanna

    2011-10-10

Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high resolution photographs, taken at different horizontal and vertical camera angles in a controlled environment. Results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. Results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from the camera and image resolution in poor quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    PubMed

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate interface for selecting an ROI, surgeons can also obtain a detailed local view, as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE algorithm is used to track the features of the camera images as the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased; however, up to 12 separated regions with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
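
    A minimal sketch of the feature-tracking step, using OpenCV's AKAZE implementation (the accelerated KAZE algorithm named above) with brute-force Hamming matching, is shown below. Summarizing the matches by their median displacement is an assumed simplification, not the authors' full orientation estimate.

```python
# Minimal sketch: AKAZE feature matching between consecutive mini-camera frames.
import cv2
import numpy as np

def median_feature_motion(prev_frame, cur_frame):
    """Robust (dx, dy) displacement of tracked features between two frames."""
    akaze = cv2.AKAZE_create()
    k1, d1 = akaze.detectAndCompute(prev_frame, None)
    k2, d2 = akaze.detectAndCompute(cur_frame, None)
    if d1 is None or d2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # binary descriptors
    matches = matcher.match(d1, d2)
    if not matches:
        return None
    moves = np.array([np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt)
                      for m in matches])
    return np.median(moves, axis=0)            # median resists mismatches
```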

  15. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  16. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  17. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
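
    The abstract mentions a wavelength-dependent radial distortion model for the WAC; below is a minimal sketch of what such a model can look like: a one-term Brown-style radial correction whose coefficient varies by band. The wavelengths and coefficient values are illustrative assumptions, not the actual LROC WAC calibration.

```python
# Minimal sketch: radial distortion correction with a per-band coefficient.
import numpy as np

# hypothetical k1 values keyed by band wavelength (meters); not LROC WAC values
k1_by_band = {415e-9: -2.1e-4, 566e-9: -1.9e-4, 604e-9: -1.8e-4}

def correct_radial(xy, wavelength):
    """Apply a one-term radial correction in normalized image coordinates."""
    k1 = k1_by_band[wavelength]
    r2 = (xy ** 2).sum(axis=-1, keepdims=True)   # squared radial distance
    return xy * (1.0 + k1 * r2)                  # sign/direction convention assumed

pts = np.array([[0.4, 0.1], [-0.3, 0.25]])
print(correct_radial(pts, 566e-9))
```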

  18. Multi-Angle View of the Canary Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

A multi-angle view of the Canary Islands in a dust storm, 29 February 2000. At left is a true-color image taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. This image was captured by the MISR camera looking at a 70.5-degree angle to the surface, ahead of the spacecraft. The middle image was taken by the MISR downward-looking (nadir) camera, and the right image is from the aftward 70.5-degree camera. The images are reproduced using the same radiometric scale, so variations in brightness, color, and contrast represent true variations in surface and atmospheric reflectance with angle. Windblown dust from the Sahara Desert is apparent in all three images, and is much brighter in the oblique views. This illustrates how MISR's oblique imaging capability makes the instrument a sensitive detector of dust and other particles in the atmosphere. Data for all channels are presented in a Space Oblique Mercator map projection to facilitate their co-registration. The images are about 400 km (250 miles) wide, with a spatial resolution of about 1.1 kilometers (1,200 yards). North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  19. Voyager spacecraft images of Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Birnbaum, M. M.

    1982-01-01

The Voyager imaging system is described, noting that it is made up of a narrow-angle and a wide-angle TV camera, each in turn consisting of optics, a filter wheel and shutter assembly, a vidicon tube, and an electronics subsystem. The narrow-angle camera has a focal length of 1500 mm; its field of view is 0.42 deg and its focal ratio is f/8.5. For the wide-angle camera, the focal length is 200 mm, the field of view 3.2 deg, and the focal ratio f/3.5. Images are exposed by each camera through one of eight filters in the filter wheel on the photoconductive surface of a magnetically focused and deflected vidicon having a diameter of 25 mm. The vidicon storage surface (target) is a selenium-sulfur film having an active area of 11.14 x 11.14 mm; it holds a frame consisting of 800 lines with 800 picture elements per line. Pictures of Jupiter, Saturn, and their moons are presented, with short descriptions given of the area being viewed.

  20. Near-infrared light-guided miniaturized indirect ophthalmoscopy for nonmydriatic wide-field fundus photography.

    PubMed

    Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng

    2018-06-01

    A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.

  1. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and an integrated calibration method for the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, which piles three consecutive stereo images and applies OTF calibration using ground control points. The laser scanner boresight angle calibration uses a combined manual and automatic method with ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systematic precision of the two sensors. By analyzing the measured values between ground control points and their corresponding image points in sequential images, we conclude that object positions derived from the camera and images agree to within about 15 cm relative error and 20 cm absolute error. By comparing the differences between ground control points and their corresponding laser point clouds, the error is less than 20 cm. From the results of these experiments, the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy, high-density road spatial data.

  2. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  3. Colors of active regions on comet 67P

    NASA Astrophysics Data System (ADS)

    Oklay, N.; Vincent, J.-B.; Sierks, H.; Besse, S.; Fornasier, S.; Barucci, M. A.; Lara, L.; Scholten, F.; Preusker, F.; Lazzarin, M.; Pajola, M.; La Forgia, F.

    2015-10-01

The OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) scientific imager (Keller et al. 2007) has been successfully delivering images of comet 67P/Churyumov-Gerasimenko from both its wide angle camera (WAC) and narrow angle camera (NAC) since ESA's Rosetta spacecraft arrived at the comet. Both cameras are equipped with filters covering the wavelength range of about 200 nm to 1000 nm. The comet nucleus has been mapped with different combinations of the filters at resolutions up to 15 cm/px. Besides revealing the surface morphology in great detail (Thomas et al. 2015), such high resolution images provided us a means to unambiguously link some activity in the coma to a series of pits on the nucleus surface (Vincent et al. 2015).

  4. Solar System Portrait - View of the Sun, Earth and Venus

    NASA Image and Video Library

    1996-09-13

    This color image of the sun, Earth and Venus was taken by the Voyager 1 spacecraft Feb. 14, 1990, when it was approximately 32 degrees above the plane of the ecliptic and at a slant-range distance of approximately 4 billion miles. It is the first -- and may be the only -- time that we will ever see our solar system from such a vantage point. The image is a portion of a wide-angle image containing the sun and the region of space where the Earth and Venus were at the time with two narrow-angle pictures centered on each planet. The wide-angle was taken with the camera's darkest filter (a methane absorption band), and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system but is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright burned out image with multiple reflections from the optics in the camera. The "rays" around the sun are a diffraction pattern of the calibration lamp which is mounted in front of the wide angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaiced into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce a color image. The violet, green and blue filters were used; exposure times were, for the Earth image, 0.72, 0.48 and 0.72 seconds, and for the Venus frame, 0.36, 0.24 and 0.36, respectively. Although the planetary pictures were taken with the narrow-angle camera (1500 mm focal length) and were not pointed directly at the sun, they show the effects of the glare from the nearby sun, in the form of long linear streaks resulting from the scattering of sunlight off parts of the camera and its sun shade. From Voyager's great distance both Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. Detailed analysis also suggests that Voyager detected the moon as well, but it is too faint to be seen without special processing. Venus was only 0.11 pixel in diameter. The faint colored structure in both planetary frames results from sunlight scattered in the optics. http://photojournal.jpl.nasa.gov/catalog/PIA00450

  5. Solar System Portrait - View of the Sun, Earth and Venus

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This color image of the sun, Earth and Venus was taken by the Voyager 1 spacecraft Feb. 14, 1990, when it was approximately 32 degrees above the plane of the ecliptic and at a slant-range distance of approximately 4 billion miles. It is the first -- and may be the only -- time that we will ever see our solar system from such a vantage point. The image is a portion of a wide-angle image containing the sun and the region of space where the Earth and Venus were at the time with two narrow-angle pictures centered on each planet. The wide-angle was taken with the camera's darkest filter (a methane absorption band), and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system but is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright burned out image with multiple reflections from the optics in the camera. The 'rays' around the sun are a diffraction pattern of the calibration lamp which is mounted in front of the wide angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaiced into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce a color image. The violet, green and blue filters were used; exposure times were, for the Earth image, 0.72, 0.48 and 0.72 seconds, and for the Venus frame, 0.36, 0.24 and 0.36, respectively. Although the planetary pictures were taken with the narrow-angle camera (1500 mm focal length) and were not pointed directly at the sun, they show the effects of the glare from the nearby sun, in the form of long linear streaks resulting from the scattering of sunlight off parts of the camera and its sun shade. From Voyager's great distance both Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. Detailed analysis also suggests that Voyager detected the moon as well, but it is too faint to be seen without special processing. Venus was only 0.11 pixel in diameter. The faint colored structure in both planetary frames results from sunlight scattered in the optics.

  6. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    NASA Astrophysics Data System (ADS)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high-spatial quality product while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and J. L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens. 70.6 (2004): 657-661. [3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
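
    A minimal sketch of IHS-style fusion, using an HSV decomposition as a common stand-in for the IHS transform, is shown below: upsample the MS image, swap in a histogram-matched Pan band as the intensity, and convert back. This is a generic illustration of the IHS branch only, not the paper's hybrid IHS-Wavelet algorithm; inputs are assumed to be floats in [0, 1].

```python
# Minimal sketch: HSV-based pan-sharpening of an RGB multispectral image.
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.exposure import match_histograms
from skimage.transform import resize

def ihs_style_fuse(ms_rgb, pan):
    """Replace the intensity of the upsampled MS image with the Pan band."""
    ms_up = resize(ms_rgb, pan.shape + (3,), anti_aliasing=True)  # to Pan grid
    hsv = rgb2hsv(ms_up)
    # match Pan statistics to the original intensity to limit spectral shift
    hsv[..., 2] = match_histograms(pan, hsv[..., 2])
    return hsv2rgb(hsv)
```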

  7. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    NASA Astrophysics Data System (ADS)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The elements affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the imaging platform prototype.
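
    A minimal sketch of silhouette-based reconstruction as voxel carving under an assumed orthographic camera is given below: a voxel survives only if its projection falls inside every silhouette. This is a generic illustration of the visual-hull idea, not the study's implementation.

```python
# Minimal sketch: orthographic visual hull from binary silhouettes.
import numpy as np

def carve(silhouettes, yaw_degrees, grid=64):
    """Keep voxels whose projection lies inside every silhouette mask."""
    lin = np.linspace(-1.0, 1.0, grid)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    keep = np.ones(X.shape, dtype=bool)
    for sil, a in zip(silhouettes, np.deg2rad(yaw_degrees)):
        u = np.cos(a) * X + np.sin(a) * Y        # image horizontal coordinate
        h, w = sil.shape
        col = np.clip(((u + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
        row = np.clip(((1 - (Z + 1) / 2) * (h - 1)).astype(int), 0, h - 1)
        keep &= sil[row, col]                    # carve away voxels outside
    return keep
```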

  8. Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera

    NASA Astrophysics Data System (ADS)

    Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu

    2016-09-01

We perform an achromatic imaging experiment with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under visible and infrared light. As a result, the removal of chromatic aberration by the WFC system was successfully confirmed. Moreover, we fabricated a demonstration set assuming the use of a night-vision camera in an automobile and showed the effect of the WFC system.

  9. Two Perspectives on Forest Fire

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.

  10. LROC Stereo Observations

    NASA Astrophysics Data System (ADS)

    Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.

    2009-09-01

The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles, providing the opportunity to acquire 'photometric stereo' in addition to traditional 'geometric stereo' data. Geometric stereo refers to images acquired by LROC with two observations at different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different. If shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20 degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right) providing a stereo convergence angle up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography of near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near identical emission angles, but with varying solar azimuth and incidence angles. These types of images can be processed via various methods to derive single pixel resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.
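
    As a rough illustration of why the convergence angle matters for geometric stereo, the sketch below applies a standard photogrammetric rule of thumb: expected vertical precision is the ground sample distance times the image-matching precision, divided by the parallax-to-height ratio of the pair. The 0.3-pixel matching precision is an assumed typical value, not a figure from this abstract.

```python
# Minimal sketch: rule-of-thumb height precision for a stereo pair.
import numpy as np

def expected_height_precision(gsd_m, e1_deg, e2_deg, match_px=0.3,
                              opposite_sides=False):
    """GSD * matching precision / parallax-to-height ratio of the pair."""
    t1, t2 = np.tan(np.radians([e1_deg, e2_deg]))
    p2h = (t1 + t2) if opposite_sides else abs(t1 - t2)
    return gsd_m * match_px / p2h

# nadir image paired with a 20-degree off-pointed image, 0.5 m/pixel NAC
print(expected_height_precision(0.5, 20.0, 0.0))   # ~0.41 m
```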

  11. Multi-Angle Snowflake Camera Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkurko, Konstantin; Garrett, T.; Gaustad, K

The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on the OpenCV image processing library and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.

  12. Solar System Portrait - 60 Frame Mosaic

    NASA Image and Video Library

    1996-09-13

    The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever portrait of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. http://photojournal.jpl.nasa.gov/catalog/PIA00451

  13. Solar System Portrait - 60 Frame Mosaic

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever 'portrait' of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun.

  14. Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.

    PubMed

    Song, Kai-Tai; Tai, Jen-Chao

    2006-10-01

    Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
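
    The paper derives its parameters from parallel lane markings; the sketch below shows only a generic first step, intersecting the lane-marking lines in homogeneous coordinates to find their vanishing point. The closing tilt formula is a common textbook model whose sign conventions vary; neither it nor the function names are taken from this paper.

```python
# Minimal sketch: vanishing point of lane markings via homogeneous lines.
import numpy as np

def vanishing_point(lines):
    """Each line is a pair of (x, y) image points on one lane marking."""
    homog = []
    for p1, p2 in lines:
        a = np.array([p1[0], p1[1], 1.0])
        b = np.array([p2[0], p2[1], 1.0])
        homog.append(np.cross(a, b))             # homogeneous line through both
    # least-squares intersection: null vector of the stacked line matrix
    _, _, vt = np.linalg.svd(np.asarray(homog))
    vp = vt[-1]
    return vp[:2] / vp[2]                        # assumes a finite intersection

# one common model (sign conventions vary): with principal point (cx, cy) and
# focal length f, tilt ~= arctan((cy - vy) / f) for a camera looking down-road
```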

  15. Note: Simple hysteresis parameter inspector for camera module with liquid lens

    NASA Astrophysics Data System (ADS)

    Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung

    2010-05-01

A method to inspect the hysteresis parameter is presented in this article. The hysteresis of the whole camera module with a liquid lens can be measured, rather than that of a single lens only. Because the variation in focal length influences image quality, we propose utilizing the sharpness of images captured by the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact angle characteristic of the liquid lens. It can therefore be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection takes only 20 s to complete. Compared with other instruments, this inspection method is thus more suitable for integration into mass production lines for online quality assurance.
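
    The abstract does not specify the sharpness measure; one common, simple choice is the variance of the Laplacian, sketched below. Sweeping the liquid-lens control signal up and then down and comparing the two resulting sharpness curves would expose the hysteresis loop; that protocol is an assumption here, not a detail from the note.

```python
# Minimal sketch: variance-of-Laplacian sharpness score for a captured frame.
import cv2

def sharpness(image_bgr):
    """Higher variance of the Laplacian indicates a sharper (in-focus) image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```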

  16. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

cameras were installed around the test pan and an underwater GoPro® video camera recorded the fire from below the layer of fuel. 3.2.2. Camera Images ... Distribution A: Approved for public release; distribution unlimited. 3.2.3. Video Images. A GoPro® video camera with a wide-angle lens recorded the tests ... camera and the GoPro® video camera were not used for fire suppression experiments. 3.3.2. Test Pans. Two ¼-in thick stainless steel test pans were

  17. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    PubMed Central

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454

  18. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    NASA Astrophysics Data System (ADS)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  19. Alpha and Omega

    NASA Image and Video Library

    2017-11-27

    These two images illustrate just how far Cassini traveled to get to Saturn. On the left is one of the earliest images Cassini took of the ringed planet, captured during the long voyage from the inner solar system. On the right is one of Cassini's final images of Saturn, showing the site where the spacecraft would enter the atmosphere on the following day. In the left image, taken in 2001, about six months after the spacecraft passed Jupiter for a gravity assist flyby, the best view of Saturn using the spacecraft's high-resolution (narrow-angle) camera was on the order of what could be seen using the Earth-orbiting Hubble Space Telescope. At the end of the mission (at right), from close to Saturn, even the lower resolution (wide-angle) camera could capture just a tiny part of the planet. The left image looks toward Saturn from 20 degrees below the ring plane and was taken on July 13, 2001 in wavelengths of infrared light centered at 727 nanometers using the Cassini spacecraft narrow-angle camera. The view at right is centered on a point 6 degrees north of the equator and was taken in visible light using the wide-angle camera on Sept. 14, 2017. The view on the left was acquired at a distance of approximately 317 million miles (510 million kilometers) from Saturn. Image scale is about 1,900 miles (3,100 kilometers) per pixel. The view at right was acquired at a distance of approximately 360,000 miles (579,000 kilometers) from Saturn. Image scale is 22 miles (35 kilometers) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21353

  20. Modelling of the outburst on July 29th, 2015 observed with OSIRIS in the southern hemisphere of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Gicquel, Adeline; Vincent, Jean-Baptiste; Sierks, Holger; Rose, Martin; Agarwal, Jessica; Deller, Jakob; Guettler, Carsten; Hoefner, Sebastian; Hofmann, Marc; Hu, Xuanyu; Kovacs, Gabor; Oklay Vincent, Nilda; Shi, Xian; Tubiana, Cecilia; Barbieri, Cesare; Lamy, Phylippe; Rodrigo, Rafael; Koschny, Detlef; Rickman, Hans; OSIRIS Team

    2016-10-01

    Images of the nucleus and the coma (gas and dust) of comet 67P/Churyumov-Gerasimenko have been acquired by the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) camera system since March 2014, using both the wide angle camera (WAC) and the narrow angle camera (NAC). We use the NAC to study the bright outburst observed on July 29th, 2015 in the southern hemisphere. The NAC covers the wavelength range 250-1000 nm with a combination of 12 filters, and its high spatial resolution is needed to localize the source point of the outburst on the surface of the nucleus. At the time of the observations, the heliocentric distance was 1.25 AU and the distance between the spacecraft and the comet was 126 km. We aim to understand the physics leading to such outgassing: is the jet associated with the outburst controlled by the micro-topography, or by suddenly exposed ice? We use the Direct Simulation Monte Carlo (DSMC) method to study the gas flow close to the nucleus. The goal of the DSMC code is to reproduce the opening angle of the jet and to constrain the outgassing ratio between the outburst source and the local region. The results of this model will be compared to the images obtained with the NAC.

  1. Radiometric stability of the Multi-angle Imaging SpectroRadiometer (MISR) following 15 years on-orbit

    NASA Astrophysics Data System (ADS)

    Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu

    2014-09-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has successfully operated on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure camera response changes. Once every two months, the two Spectralon panels are deployed to direct solar light into the cameras. Six sets of photodiodes measure the illumination level, and these measurements are compared to MISR raw digital numbers to determine the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.
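
    The detector-based calibration described above reduces, at its core, to regressing raw camera output against the photodiode-measured radiance. A minimal sketch in Python of how such a gain coefficient could be derived and applied; the numbers are hypothetical illustrations, not MISR data:

    ```python
    import numpy as np

    # Hypothetical calibration data: radiances measured by the on-board
    # photodiodes and the camera's dark-subtracted raw digital numbers
    # recorded while viewing the deployed Spectralon panel.
    radiance = np.array([12.5, 25.1, 37.4, 50.2, 62.8])
    dn = np.array([1510.0, 3025.0, 4490.0, 6050.0, 7540.0])

    # Fit DN = G * L through the origin; G plays the role of the
    # radiometric gain coefficient used in Level 1 processing.
    gain = np.sum(dn * radiance) / np.sum(radiance ** 2)

    # Calibrated radiance for a new scene measurement:
    scene_dn = 5000.0
    scene_radiance = scene_dn / gain
    print(f"gain = {gain:.2f} DN per radiance unit, "
          f"scene radiance = {scene_radiance:.2f}")
    ```

    Repeating this fit every calibration cycle is what compensates for the gradual response degradation noted above.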

  2. Polarimetric Thermal Imaging

    DTIC Science & Technology

    2007-03-01

    front of a large area blackbody as background. The viewing angle, defined as the angle between the surface normal and the camera line of sight, was varied by ... and polarization angle were derived from the Stokes parameters. The dependence of these polarization characteristics on viewing angle was investigated
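
    The snippet above mentions deriving the degree of polarization and polarization angle from the Stokes parameters; the standard relations are DoLP = sqrt(S1² + S2²)/S0 and AoP = ½·arctan(S2/S1). A minimal sketch with hypothetical intensities measured behind a polarizer at four orientations:

    ```python
    import numpy as np

    def polarization_from_stokes(S0, S1, S2):
        """Degree of linear polarization and polarization angle
        from the first three Stokes parameters."""
        dolp = np.sqrt(S1**2 + S2**2) / S0
        aop = 0.5 * np.arctan2(S2, S1)  # radians, measured from the S1 axis
        return dolp, aop

    # Stokes parameters from intensities behind a polarizer at
    # 0, 45, 90 and 135 degrees (hypothetical measurements):
    I0, I45, I90, I135 = 1.00, 0.80, 0.40, 0.60
    S0 = 0.5 * (I0 + I45 + I90 + I135)
    S1 = I0 - I90
    S2 = I45 - I135
    print(polarization_from_stokes(S0, S1, S2))
    ```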

  3. Automatic helmet-wearing detection for law enforcement using CCTV cameras

    NASA Astrophysics Data System (ADS)

    Wonghabut, P.; Kumphong, J.; Satiennam, T.; Ung-arunyawee, R.; Leelapatra, W.

    2018-04-01

    The objective of this research is to develop an application for enforcing helmet wearing using CCTV cameras. The developed application aims to assist law enforcement by police, eventually changing risk behaviours and consequently reducing the number of accidents and their severity. Conceptually, the application software, implemented in C++ with the OpenCV library, uses two CCTV cameras with different angles of view. Video frames recorded by the wide-angle CCTV camera are used to detect motorcyclists. If a motorcyclist without a helmet is found, the zoomed (narrow-angle) CCTV camera is activated to capture images of the violating motorcyclist and the motorcycle license plate in real time. Captured images are managed by a MySQL database for ticket issuing. The results show that the developed program is able to detect 81% of motorcyclists across various motorcycle types during daytime and night-time. The validation results reveal that the program achieves 74% accuracy in detecting motorcyclists without helmets.
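
    A rough sketch of the two-camera trigger logic described above, written in Python with OpenCV rather than the authors' C++ implementation; the cascade files, camera indices, and detection parameters are hypothetical placeholders:

    ```python
    import cv2

    # Hypothetical trained cascades; the paper's own detectors are not public.
    motorcyclist_cascade = cv2.CascadeClassifier("motorcyclist_cascade.xml")
    no_helmet_cascade = cv2.CascadeClassifier("no_helmet_cascade.xml")

    wide_cam = cv2.VideoCapture(0)    # wide-angle CCTV stream
    narrow_cam = cv2.VideoCapture(1)  # zoomed (narrow-angle) CCTV stream

    while True:
        ok, frame = wide_cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in motorcyclist_cascade.detectMultiScale(gray, 1.1, 4):
            head = gray[y:y + h // 3, x:x + w]  # look for a bare head in the upper third
            if len(no_helmet_cascade.detectMultiScale(head, 1.1, 4)) > 0:
                ok2, evidence = narrow_cam.read()  # close-up of rider and plate
                if ok2:
                    cv2.imwrite("violation.jpg", evidence)  # stored for ticketing
    ```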

  4. Olympus Mons in Color

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Sections of MOC images P024_01 and P024_02, shown here in color composite form, were acquired with the low resolution red and blue wide angle cameras over a 5 minute period starting when Mars Global Surveyor was at its closest point to the planet at the beginning of its 24th orbit (around 4:00 AM PDT on October 20, 1997). To make this image, a third component (green) was synthesized from the red and blue images. During the imaging period, the camera was pointed straight down towards the martian surface, 176 km (109 miles) below the spacecraft. During the time it took to acquire the image, the spacecraft rose to an altitude of 310 km (193 miles). Owing to camera scanning rate and data volume constraints, the image was acquired at a resolution of roughly 1 km (0.609 mile) per pixel. The image shown here covers an area from 12° to 26° N latitude and 126° to 138° W longitude. The image is oriented with north to the top.

    As has been noted in other MOC releases, Olympus Mons is the largest of the major Tharsis volcanoes, rising 25 km (15.5 miles) and stretching over nearly 550 km (340 miles) east-west. The summit caldera, a composite of as many as seven roughly circular collapse depressions, is 66 by 83 km (41 by 52 miles) across. Also seen in this image are water-ice clouds that accumulate around and above the volcano during the late afternoon (at the time the image was acquired, the summit was at 5:30 PM local solar time). To understand the value of orbital observations, compare this image with the two taken during approach (PIA00929 and PIA00936), which are representative of the best resolution obtainable from Earth.

    Through Monday, October 28, the MOC had acquired a total of 132 images, most of which were at low sun elevation angles. Of these images, 74 were taken with the high resolution narrow angle camera and 58 with the low resolution wide angle cameras. Twenty-eight narrow angle and 24 wide angle images were taken after the suspension of aerobraking. These images, including the one shown above, are among the best returned so far.

    Launched on November 7, 1996, Mars Global Surveyor entered Mars orbit on Thursday, September 11, 1997. The original mission plan called for using friction with the planet's atmosphere to reduce the orbital energy, leading to a two-year mapping mission from close, circular orbit (beginning in March 1998). Owing to difficulties with one of the two solar panels, aerobraking was suspended in mid-October and is scheduled to resume in mid-November. Many of the original objectives of the mission, and in particular those of the camera, are likely to be accomplished as the mission progresses.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  5. Esthetic smile preferences and the orientation of the maxillary occlusal plane.

    PubMed

    Kattadiyil, Mathew T; Goodacre, Charles J; Naylor, W Patrick; Maveli, Thomas C

    2012-12-01

    The anteroposterior orientation of the maxillary occlusal plane has an important role in the creation, assessment, and perception of an esthetic smile. However, the effect of the angle at which this plane is visualized (the viewing angle) in a broad smile has not been quantified. The purpose of this study was to assess the esthetic preferences of dental professionals and nondentists by using 3 viewing angles of the anteroposterior orientation of the maxillary occlusal plane. After Institutional Review Board approval, standardized digital photographic images of the smiles of 100 participants were recorded by simultaneously triggering 3 cameras set at different viewing angles. The top camera was positioned 10 degrees above the occlusal plane (camera #1, Top view); the center camera was positioned at the level of the occlusal plane (camera #2, Center view); and the bottom camera was located 10 degrees below the occlusal plane (camera #3, Bottom view). Forty-two dental professionals and 31 nondentists (persons from the general population) independently evaluated digital images of each participant's smile captured from the Top view, Center view, and Bottom view. The 73 evaluators were asked individually through a questionnaire to rank the 3 photographic images of each patient as 'most pleasing,' 'somewhat pleasing,' or 'least pleasing,' with most pleasing being the most esthetic view and the preferred orientation of the occlusal plane. The resulting esthetic preferences were statistically analyzed by using the Friedman test. In addition, the participants were asked to rank their own images from the 3 viewing angles as 'most pleasing,' 'somewhat pleasing,' and 'least pleasing.' The 73 evaluators found statistically significant differences in the esthetic preferences between the Top and Bottom views and between the Center and Bottom views (P<.001). No significant differences were found between the Top and Center views. The Top position was marginally preferred over the Center, and both were significantly preferred over the Bottom position. When the participants evaluated their own smiles, a significantly greater number (P<.001) preferred the Top view over the Center or the Bottom views. No significant differences were found in preferences based on the demographics of the evaluators when comparing age, education, gender, profession, and race. The esthetic preference for the maxillary occlusal plane was influenced by the viewing angle, with the higher (Top) and Center views preferred by both dental and nondental evaluators. The participants themselves preferred the higher view of their smile significantly more often than the center or lower angle views (P<.001). Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  6. The Wide Angle Camera of the ROSETTA Mission

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Fornasier, S.; Verani, S.; Bertini, I.; Lazzarin, M.; Rampazzi, F.; Cremonese, G.; Ragazzoni, R.; Marzari, F.; Angrilli, F.; Bianchini, G. A.; Debei, S.; Dececco, M.; Guizzo, G.; Parzianello, G.; Ramous, P.; Saggin, B.; Zaccariotto, M.; Da Deppo, V.; Naletto, G.; Nicolosi, G.; Pelizzo, M. G.; Tondello, G.; Brunello, P.; Peron, F.

    This paper aims to give a brief description of the Wide Angle Camera (WAC), built by the Centro Servizi e Attività Spaziali (CISAS) of the University of Padova for the ESA ROSETTA Mission to comet 46P/Wirtanen and asteroids 4979 Otawara and 140 Siwa. The WAC is part of the OSIRIS imaging system, which also comprises a Narrow Angle Camera (NAC) built by the Laboratoire d'Astrophysique Spatiale (LAS) of Marseille. CISAS also had the responsibility of building the shutter and the front cover mechanism for the NAC. The flight model of the WAC was delivered in December 2001 and has already been integrated on ROSETTA.

  7. Evaluation of a novel collimator for molecular breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon

    Here, this study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods: The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images, including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results: The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance. In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than in the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases, where lesion detectability was extremely low for both MBT and planar imaging. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. Conclusion: The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.

  8. Evaluation of a novel collimator for molecular breast tomosynthesis.

    PubMed

    Gilland, David R; Welch, Benjamin L; Lee, Seungjoon; Kross, Brian; Weisenberger, Andrew G

    2017-11-01

    This study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images, including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance. In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than in the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases, where lesion detectability was extremely low for both MBT and planar imaging. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging. © 2017 American Association of Physicists in Medicine.
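
    The contrast and CNR figures of merit used in this study follow the usual region-of-interest definitions: contrast is the relative lesion-to-background signal difference, and CNR divides that difference by the background noise. A minimal sketch on synthetic data (not the authors' phantom images):

    ```python
    import numpy as np

    def contrast_and_cnr(image, lesion_mask, background_mask):
        """Lesion contrast and contrast-to-noise ratio from ROI statistics."""
        lesion = image[lesion_mask].mean()
        bkg = image[background_mask].mean()
        noise = image[background_mask].std()
        contrast = (lesion - bkg) / bkg
        cnr = (lesion - bkg) / noise
        return contrast, cnr

    # Toy example: a Poisson-noise background with a brighter disk "lesion".
    rng = np.random.default_rng(0)
    img = rng.poisson(100, size=(128, 128)).astype(float)
    yy, xx = np.mgrid[:128, :128]
    lesion_mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 8 ** 2
    img[lesion_mask] += 40
    background_mask = (yy - 64) ** 2 + (xx - 64) ** 2 > 30 ** 2
    print(contrast_and_cnr(img, lesion_mask, background_mask))
    ```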

  9. Evaluation of a novel collimator for molecular breast tomosynthesis

    DOE PAGES

    Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon; ...

    2017-09-06

    Here, this study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods: The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images, including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results: The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance. In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than in the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases, where lesion detectability was extremely low for both MBT and planar imaging. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. Conclusion: The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.

  10. Miranda

    NASA Image and Video Library

    1999-08-24

    One wide-angle and eight narrow-angle camera images of Miranda, taken by NASA's Voyager 2, were combined in this view. The controlled mosaic was transformed to an orthographic view centered on the south pole.

  11. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size has significant value in understanding the behavior of bubbles in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume and surface area, among other parameters; the 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information about individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%, and the one-camera system has an error greater than 25%. The visualization of a 3D bubble rising demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in single-camera measurements.
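
    Space carving, as used above, retains only those voxels whose projections land inside the object silhouette in every calibrated view. A minimal sketch assuming boolean silhouette images and known 3×4 projection matrices (voxels projecting outside an image are treated as carved); this is an illustration of the general technique, not the authors' code:

    ```python
    import numpy as np

    def carve(voxels, silhouettes, projections):
        """Keep only voxels whose projection falls inside every silhouette.

        voxels:      (N, 3) candidate voxel centers
        silhouettes: list of boolean images, one per camera
        projections: list of 3x4 camera projection matrices
        """
        keep = np.ones(len(voxels), dtype=bool)
        homo = np.hstack([voxels, np.ones((len(voxels), 1))])
        for sil, P in zip(silhouettes, projections):
            uvw = homo @ P.T  # project voxel centers to the image plane
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            h, w = sil.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            keep &= inside                       # out of view -> carved
            keep[inside] &= sil[v[inside], u[inside]]
        return voxels[keep]
    ```

    The surviving voxels form the visual hull of the bubble; its volume follows from the voxel count times the voxel volume.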

  12. Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera

    PubMed Central

    Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing

    2018-01-01

    The geometric calibration of a spaceborne thermal-infrared camera with high spatial resolution and wide coverage sets the benchmark for assigning accurate geographic coordinates in the retrieval of land surface temperature. Imaging the Earth with linear array whiskbroom Charge-Coupled Device (CCD) arrays yields thermal-infrared images of large breadth with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model and calibrates the temporal system parameters and whiskbroom angle parameters. With the help of YG-14, China's first satellite equipped with thermal-infrared cameras of high spatial resolution, imagery of Anyang and Taiyuan is used for a geometric calibration experiment and a verification test, respectively. Results show that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels, and that the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885

  13. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    DOE PAGES

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...

    2016-11-28

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  14. Wild 2 Close Look

    NASA Image and Video Library

    2004-06-17

    This image shows the comet Wild 2, which NASA's Stardust spacecraft flew by on Jan. 2, 2004. This image is the closest short exposure of the comet, taken at an 11.4-degree phase angle, the angle between the camera, comet, and the Sun. http://photojournal.jpl.nasa.gov/catalog/PIA06285

  15. Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging

    NASA Astrophysics Data System (ADS)

    Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.

    2018-04-01

    We analyze images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft to characterize the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundred images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter, and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analyzed images from the combined quadrangles H03 and H07, a selected region covered by approximately 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, and the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approximately 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in the photogrammetric analysis; specifically, they may be responsible for erroneous long-wavelength trends in topographic models.
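
    The linear thermal model f(T) = A0 + A1 T quoted above can be fit by ordinary least squares once a per-image focal length estimate and focal-plane temperature are available. A minimal sketch with hypothetical values (the ~0.01 mm/°C slope mirrors the reported trend; these are not actual MDIS data):

    ```python
    import numpy as np

    # Hypothetical per-image results: focal-plane temperature (deg C) and
    # focal length estimated from star fields or bundle adjustment (mm).
    T = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])
    f = np.array([549.7, 549.8, 549.9, 550.0, 550.1])

    # Least-squares fit of the linear model f(T) = A0 + A1 * T.
    A1, A0 = np.polyfit(T, f, 1)
    print(f"f(T) = {A0:.4f} + {A1:.6f} * T   (mm, deg C)")

    # Correct an image taken at 25 deg C back to a 0 deg C reference:
    f_at_25 = A0 + A1 * 25.0
    f_reference = A0
    ```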

  16. Single exposure three-dimensional imaging of dusty plasma clusters.

    PubMed

    Hartmann, Peter; Donkó, István; Donkó, Zoltán

    2013-02-01

    We have worked out the details of a single-camera, single-exposure method to perform three-dimensional imaging of a finite particle cluster. The procedure is based on the plenoptic imaging principle and utilizes a commercial Lytro light field still camera. We demonstrate the capabilities of our technique on a single-layer particle cluster in a dusty plasma, where the camera is aligned and inclined at a small angle to the particle layer. The reconstruction of the third coordinate (depth) is found to be accurate, and even shadowing particles can be identified.

  17. A Fractured Pole

    NASA Image and Video Library

    2015-10-15

    NASA's Cassini spacecraft zoomed by Saturn's icy moon Enceladus on Oct. 14, 2015, capturing this stunning image of the moon's north pole. A companion view from the wide-angle camera (PIA20010) shows a zoomed out view of the same region for context. Scientists expected the north polar region of Enceladus to be heavily cratered, based on low-resolution images from the Voyager mission, but high-resolution Cassini images show a landscape of stark contrasts. Thin cracks cross over the pole -- the northernmost extent of a global system of such fractures. Before this Cassini flyby, scientists did not know if the fractures extended so far north on Enceladus. North on Enceladus is up. The image was taken in visible green light with the Cassini spacecraft narrow-angle camera. The view was acquired at a distance of approximately 4,000 miles (6,000 kilometers) from Enceladus and at a Sun-Enceladus-spacecraft, or phase, angle of 9 degrees. Image scale is 115 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19660

  18. Contact Angle Measurements Using a Simplified Experimental Setup

    ERIC Educational Resources Information Center

    Lamour, Guillaume; Hamraoui, Ahmed; Buvailo, Andrii; Xing, Yangjun; Keuleyan, Sean; Prakash, Vivek; Eftekhari-Bafrooei, Ali; Borguet, Eric

    2010-01-01

    A basic and affordable experimental apparatus is described that measures the static contact angle of a liquid drop in contact with a solid. The image of the drop is made with a simple digital camera by taking a picture that is magnified by an optical lens. The profile of the drop is then processed with ImageJ free software. The ImageJ contact…

  19. Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array

    NASA Astrophysics Data System (ADS)

    Houben, Sebastian

    2015-03-01

    The variety of vehicle-mounted sensors needed to fulfill a growing number of driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround view system of the kind used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. strongly varying angular resolution) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for this purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss the causes of the caveats shown and how to avoid them, and present first results on a prototype topview setup.
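
    One plausible shape for such a pipeline, not the author's implementation, is to first map each fisheye image onto an ideal pinhole plane and then run standard dense matching on the rectified pair. A sketch using OpenCV's fisheye model; the intrinsics, distortion coefficients, and file names are hypothetical:

    ```python
    import cv2
    import numpy as np

    # Hypothetical intrinsics for two fisheye cameras with overlapping views;
    # K is the pinhole matrix of the rectified output, D holds the fisheye
    # (equidistant-model) distortion coefficients used by cv2.fisheye.
    K = np.array([[300.0, 0.0, 640.0], [0.0, 300.0, 480.0], [0.0, 0.0, 1.0]])
    D = np.array([0.05, -0.01, 0.002, 0.0])

    def rectify(img):
        """Map a fisheye image onto an ideal pinhole image plane."""
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            K, D, np.eye(3), K, (1280, 960), cv2.CV_16SC2)
        return cv2.remap(img, map1, map2, cv2.INTER_LINEAR)

    left = rectify(cv2.imread("left_fisheye.png", cv2.IMREAD_GRAYSCALE))
    right = rectify(cv2.imread("right_fisheye.png", cv2.IMREAD_GRAYSCALE))

    # Semi-global matching on the rectified pair; disparity is inversely
    # proportional to distance once baseline and focal length are known.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
    ```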

  20. Apollo 8 Mission image, Farside of Moon

    NASA Image and Video Library

    1968-12-21

    Apollo 8, Farside of Moon. Image taken on Revolution 4. Camera Tilt Mode: Vertical Stereo. Sun Angle: 13. Original Film Magazine was labeled D. Camera Data: 70mm Hasselblad. Lens: 80mm; F-Stop: F/2.8; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White, ASA 40. Flight Date: December 21-27, 1968.

  1. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  2. Yugoslavia

    Atmospheric Science Data Center

    2013-04-17

    ... These Multi-angle Imaging SpectroRadiometer (MISR) nadir camera images of Yugoslavia were acquired on July 28, 2000 during ... typically bright as a result of reflection from the plants' cell walls, to the brightness in the red. In the middle "false color" image, ...

  3. Glare on the Window

    NASA Image and Video Library

    2018-03-05

    In this image, NASA's Cassini sees Saturn and its rings through a haze of Sun glare on the camera lens. If you could travel to Saturn in person and look out the window of your spacecraft when the Sun was at a certain angle, you might see a view very similar to this one. Images taken using red, green and blue spectral filters were combined to show the scene in natural color. The images were taken with Cassini's wide-angle camera on June 23, 2013, at a distance of approximately 491,200 miles (790,500 kilometers) from Saturn. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA17185

  4. 3-D Flow Visualization with a Light-field Camera

    NASA Astrophysics Data System (ADS)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer.

    (Figure captions: Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after it has been acquired. Instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.)
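
    The computational refocusing that motivates plenoptic imaging can be illustrated with the classic shift-and-add scheme: each sub-aperture view is shifted in proportion to its angular coordinate and the chosen focal depth, then averaged. A minimal sketch with integer-pixel shifts and hypothetical inputs (not the authors' reconstruction code):

    ```python
    import numpy as np

    def refocus(subaperture_images, uv_coords, alpha):
        """Shift-and-add refocusing of a light field.

        subaperture_images: (N, H, W) views, one per sub-aperture
        uv_coords:          (N, 2) angular coordinates of each view
        alpha:              relative focal depth (1.0 = original focal plane)
        """
        stack = []
        shift_scale = 1.0 - 1.0 / alpha
        for img, (u, v) in zip(subaperture_images, uv_coords):
            du, dv = shift_scale * u, shift_scale * v
            # integer-pixel shift keeps the sketch dependency-free
            stack.append(np.roll(img, (int(round(dv)), int(round(du))),
                                 axis=(0, 1)))
        return np.mean(stack, axis=0)
    ```

    Sweeping alpha moves the synthetic focal plane through the particle volume, which is the basis for recovering depth from a single exposure.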

  5. Preliminary calibration results of the wide angle camera of the imaging instrument OSIRIS for the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Da Deppo, V.; Naletto, G.; Nicolosi, P.; Zambolin, P.; De Cecco, M.; Debei, S.; Parzianello, G.; Ramous, P.; Zaccariotto, M.; Fornasier, S.; Verani, S.; Thomas, N.; Barthol, P.; Hviid, S. F.; Sebastian, I.; Meller, R.; Sierks, H.; Keller, H. U.; Barbieri, C.; Angrilli, F.; Lamy, P.; Rodrigo, R.; Rickman, H.; Wenzel, K. P.

    2017-11-01

    Rosetta is one of the cornerstone missions of the European Space Agency, with a planned rendezvous with comet 67P/Churyumov-Gerasimenko in 2014. The imaging instrument on board the satellite is OSIRIS (Optical, Spectroscopic and Infrared Remote Imaging System), a cooperation among several European institutes, which consists of two cameras: a Narrow (NAC) and a Wide Angle Camera (WAC). The WAC optical design is innovative: it adopts an all-reflecting, unvignetted and unobstructed two-mirror configuration which covers a 12° × 12° field of view with an F/5.6 aperture and gives a nominal contrast ratio of about 10⁻⁴. The flight model of this camera has been successfully integrated and tested in our laboratories, and has finally been integrated on the satellite, which is now awaiting launch in February 2004. In this paper we describe the optical characteristics of the camera and summarize the results obtained so far with the preliminary calibration data. The analysis of the optical performance of this model shows good agreement between theoretical performance and experimental results.

  6. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  7. First Results from the Wide Angle Camera of the ROSETTA Mission .

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Fornasier, S.; Bertini, I.; Angrilli, F.; Bianchini, G. A.; Debei, S.; De Cecco, M.; Parzianello, G.; Zaccariotto, M.; Da Deppo, V.; Naletto, G.

    This paper gives a brief description of the Wide Angle Camera (WAC), built by the Center of Studies and Activities for Space (CISAS) of the University of Padova for the ESA ROSETTA Mission, of the data we have obtained about the new mission targets, and of the first results achieved after the launch in March 2004. The WAC is part of the OSIRIS imaging system, built under the PI-ship of Dr. U. Keller (Max-Planck-Institute for Solar System Studies), which also comprises a Narrow Angle Camera (NAC) built by the Laboratoire d'Astrophysique Spatiale (LAS) of Marseille. CISAS also had the responsibility of building the shutter and the front door mechanism for the NAC. The images show the excellent optical quality of the WAC, exceeding the specifications in terms of encircled energy (80% in one pixel over a 12° × 12° field of view), limiting magnitude (fainter than 13th magnitude in a 30 s exposure through a wideband red filter), and amount of distortion.

  8. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
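
    The key invariant exploited here is that a pure camera rotation does not change the angle between the projection rays of two points. A minimal sketch verifying this numerically, with a hypothetical intrinsic matrix and synthetic 3D points:

    ```python
    import numpy as np

    def visual_angle(p1, p2, K):
        """Angle between the projection rays of two image points.
        p1, p2: pixel coordinates (x, y); K: 3x3 intrinsic matrix."""
        Kinv = np.linalg.inv(K)
        r1 = Kinv @ np.array([p1[0], p1[1], 1.0])
        r2 = Kinv @ np.array([p2[0], p2[1], 1.0])
        cosang = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    def project(X, K, R=np.eye(3)):
        x = K @ (R @ X)
        return x[:2] / x[2]

    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    X1, X2 = np.array([1.0, 0.2, 5.0]), np.array([-0.5, 0.4, 6.0])

    theta = np.deg2rad(5)  # rotate the camera 5 degrees about its y-axis
    R = np.array([[np.cos(theta), 0, np.sin(theta)],
                  [0, 1, 0],
                  [-np.sin(theta), 0, np.cos(theta)]])

    a0 = visual_angle(project(X1, K), project(X2, K), K)
    a1 = visual_angle(project(X1, K, R), project(X2, K, R), K)
    assert abs(a0 - a1) < 1e-9  # rotation leaves the visual angle unchanged
    ```

    Because the angles are insensitive to rotation, their changes over time can be attributed to translation alone, which is what makes the residual-function optimization well posed.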

  9. Retrieving Atmospheric Dust Loading on Mars Using Engineering Cameras and MSL's Mars Hand Lens Imager (MAHLI)

    NASA Astrophysics Data System (ADS)

    Wolfe, C. A.; Lemmon, M. T.

    2015-12-01

    Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera equipped with a neutral density filter. Direct images of the Sun not only provide the ability to measure extinction by dust and ice in the atmosphere, but also provide a variety of constraints on the Martian dust and water cycles. These observations have been used to characterize dust storms, to provide ground truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability of determining atmospheric opacity from sky images taken with cameras that have not been designed for solar imaging, such as the engineering cameras onboard Opportunity and the Mars Hand Lens Imager (MAHLI) on Curiosity. Our investigation focuses primarily on the accuracy of a method that determines optical depth values using scattering models that implement the ratio of sky radiance measurements at different elevation angles, but at the same scattering angle. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and validating orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on the comparison of direct extinction measurements from archival Navcam, Hazcam, and MAHLI camera data.
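
    The elevation-ratio method alluded to above works because, at a fixed scattering angle, the aerosol phase function cancels in the ratio of two sky radiances, leaving a function of optical depth alone that can be inverted numerically. A minimal sketch under a single-scattering, plane-parallel assumption (a deliberately simplified stand-in for the authors' full scattering model, with hypothetical geometry):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def sky_radiance(tau, mu_view, mu_sun):
        """Single-scattering, plane-parallel sky radiance (arbitrary units).
        The phase function is omitted because it cancels in the ratio of
        two measurements taken at the same scattering angle."""
        return mu_sun / (mu_sun - mu_view) * (
            np.exp(-tau / mu_sun) - np.exp(-tau / mu_view))

    def retrieve_tau(ratio, mu1, mu2, mu_sun):
        """Invert the measured radiance ratio L(mu1)/L(mu2) for tau."""
        f = lambda tau: (sky_radiance(tau, mu1, mu_sun)
                         / sky_radiance(tau, mu2, mu_sun)) - ratio
        return brentq(f, 1e-4, 5.0)

    # Synthetic check: sun at 60 deg zenith (mu_sun = 0.5), sky sampled at
    # view-zenith cosines 0.9 and 0.4; recover the optical depth used to
    # generate the "measurement".
    mu_sun, mu1, mu2 = 0.5, 0.9, 0.4
    true_tau = 0.8
    ratio = (sky_radiance(true_tau, mu1, mu_sun)
             / sky_radiance(true_tau, mu2, mu_sun))
    print(retrieve_tau(ratio, mu1, mu2, mu_sun))  # ~0.8
    ```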

  10. Russian Arctic

    Atmospheric Science Data Center

    2013-04-16

    ... faint greenish hue in the multi-angle composite. This subtle effect suggests that the nadir camera is observing more of the brighter ... energy and water at the Earth's surface, and for preserving biodiversity. The Multi-angle Imaging SpectroRadiometer observes the daylit ...

  11. Snowstorm Along the China-Mongolia-Russia Borders

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Heavy snowfall on March 12, 2004, across north China's Inner Mongolia Autonomous Region, Mongolia and Russia, caused train and highway traffic to stop for several days along the Russia-China border. This pair of images from the Multi-angle Imaging SpectroRadiometer (MISR) highlights the snow and surface properties across the region on March 13. The left-hand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The right-hand image is a multi-angle false-color view made from the red band data of the 46-degree aftward camera, the nadir camera, and the 46-degree forward camera.

    About midway between the frozen expanse of China's Hulun Nur Lake (along the right-hand edge of the images) and Russia's Torey Lakes (above image center) is a dark linear feature that corresponds with the China-Mongolia border. In the upper portion of the images, many small plumes of black smoke rise from coal and wood fires and blow toward the southeast over the frozen lakes and snow-covered grasslands. Along the upper left-hand portion of the images, in Russia's Yablonovyy mountain range and the Onon River Valley, the terrain becomes more hilly and forested. In the nadir image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the multi-angle composite, open-canopy forested areas are indicated by green hues. Since this is a multi-angle composite, the green color arises not from the color of the leaves but from the architecture of the surface cover. The green areas appear brighter at the nadir angle than at the oblique angles because more of the snow-covered surface in the gaps between the trees is visible. Color variations in the multi-angle composite also indicate angular reflectance properties for areas covered by snow and ice. The light blue color of the frozen lakes is due to the increased forward scattering of smooth ice, and light orange colors indicate rougher ice or snow, which scatters more light in the backward direction.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire Earth between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 22525. The panels cover an area of about 355 kilometers x 380 kilometers, and utilize data from blocks 50 to 52 within World Reference System-2 path 126.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  12. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules ran in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA-based machine vision system that we propose has high frame speed, low latency and a power consumption much lower than commercially available smart camera solutions.
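
    The distance-and-angle computation offloaded to the MicroBlaze is, in essence, a pinhole-model calculation once the reference points have been located. A minimal sketch of one such calculation, not the paper's actual firmware, assuming two markers a known distance apart and roughly fronto-parallel to the camera; the calibration values are hypothetical:

    ```python
    import numpy as np

    FOCAL_PX = 900.0         # focal length in pixels (hypothetical calibration)
    CX = 320.0               # principal point x-coordinate (pixels)
    POINT_SEPARATION = 0.50  # metres between the two reference markers

    def distance_and_angle(u1, u2):
        """u1, u2: x-coordinates (pixels) of the two detected markers."""
        pixel_span = abs(u2 - u1)
        # similar triangles: apparent size shrinks linearly with distance
        distance = FOCAL_PX * POINT_SEPARATION / pixel_span
        midpoint = 0.5 * (u1 + u2)
        # bearing of the marker pair off the camera's optical axis
        angle = np.arctan((midpoint - CX) / FOCAL_PX)
        return distance, np.degrees(angle)

    print(distance_and_angle(280.0, 420.0))  # -> (~3.2 m, ~1.9 degrees)
    ```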

  13. The Europa Imaging System (EIS): Investigating Europa's geology, ice shell, and current activity

    NASA Astrophysics Data System (ADS)

    Turtle, Elizabeth; Thomas, Nicolas; Fletcher, Leigh; Hayes, Alexander; Ernst, Carolyn; Collins, Geoffrey; Hansen, Candice; Kirk, Randolph L.; Nimmo, Francis; McEwen, Alfred; Hurford, Terry; Barr Mlinar, Amy; Quick, Lynnae; Patterson, Wes; Soderblom, Jason

    2016-07-01

    NASA's Europa Mission, planned for launch in 2022, will perform more than 40 flybys of Europa with altitudes at closest approach as low as 25 km. The instrument payload includes the Europa Imaging System (EIS), a camera suite designed to transform our understanding of Europa through global decameter-scale coverage, topographic and color mapping, and unprecedented sub-meter-scale imaging. EIS combines narrow-angle and wide-angle cameras to address these science goals: • Constrain the formation processes of surface features by characterizing endogenic geologic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure and potential near-surface water. • Search for evidence of recent or current activity, including potential plumes. • Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar. • Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. EIS Narrow-angle Camera (NAC): The NAC, with a 2.3° × 1.2° field of view (FOV) and a 10-μrad instantaneous FOV (IFOV), achieves 0.5-m pixel scale over a 2-km-wide swath from 50-km altitude. A 2-axis gimbal enables independent targeting, allowing very high-resolution stereo imaging to generate digital topographic models (DTMs) with 4-m spatial scale and 0.5-m vertical precision over the 2-km swath from 50-km altitude. The gimbal also makes near-global (>95%) mapping of Europa possible at ≤50-m pixel scale, as well as regional stereo imaging. The NAC will also perform high-phase-angle observations to search for potential plumes. EIS Wide-angle Camera (WAC): The WAC has a 48° × 24° FOV, with a 218-μrad IFOV, and is designed to acquire pushbroom stereo swaths along flyby ground-tracks. From an altitude of 50 km, the WAC achieves 11-m pixel scale over a 44-km-wide swath, generating DTMs with 32-m spatial scale and 4-m vertical precision. These data also support characterization of surface clutter for interpretation of radar deep and shallow sounding modes. Detectors: The cameras have identical rapid-readout, radiation-hard 4k × 2k CMOS detectors and can image in both pushbroom and framing modes. Color observations are acquired by pushbroom imaging using six broadband filters (~300-1050 nm), allowing mapping of surface units for correlation with geologic structures, topography, and compositional units from other instruments.
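
    The pixel scales and swath widths quoted for EIS follow directly from the small-angle relation scale = IFOV × altitude. A quick arithmetic check (a detector width of ~4000 pixels is assumed here from the stated 4k × 2k format):

    ```python
    def pixel_scale_and_swath(ifov_rad, pixels_across, altitude_m):
        """Ground pixel scale and swath width, small-angle approximation."""
        scale = ifov_rad * altitude_m  # metres per pixel
        swath = scale * pixels_across  # metres across the image
        return scale, swath

    # NAC: 10 urad IFOV at 50 km altitude -> 0.5 m/pixel, ~2 km swath
    print(pixel_scale_and_swath(10e-6, 4000, 50e3))
    # WAC: 218 urad IFOV at 50 km altitude -> ~11 m/pixel, ~44 km swath
    print(pixel_scale_and_swath(218e-6, 4000, 50e3))
    ```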

  14. Saturnian Snowman

    NASA Image and Video Library

    2015-10-15

    NASA's Cassini spacecraft spied this tight trio of craters as it approached Saturn's icy moon Enceladus for a close flyby on Oct. 14, 2015. The craters, located at high northern latitudes, are sliced through by thin fractures -- part of a network of similar cracks that wrap around the snow-white moon. The image was taken with the Cassini spacecraft narrow-angle camera on Oct. 14, 2015, at a distance of approximately 6,000 miles (10,000 kilometers) from Enceladus, using a spectral filter which preferentially admits wavelengths of ultraviolet light centered at 338 nanometers. Image scale is 197 feet (60 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20011

  15. Rosetta/OSIRIS - Nucleus morphology and activity of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Sierks, Holger; Barbieri, Cesare; Lamy, Philippe; Rickman, Hans; Rodrigo, Rafael; Koschny, Detlef

    2015-04-01

    ESA's Rosetta mission arrived at target comet 67P/Churyumov-Gerasimenko on August 6, 2014, after 10 years of cruise. OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) is the scientific imaging system onboard Rosetta. It comprises a Narrow Angle Camera (NAC) for nucleus surface and dust studies and a Wide Angle Camera (WAC) for wide-field coma investigations. OSIRIS imaged the nucleus and coma of the comet from arrival throughout the mapping phase, the PHILAE landing, the early escort phase, and the close fly-by. This overview paper discusses the surface morphology and activity of the nucleus as seen in gas, dust, and local jets, as well as small-scale structures in the local topography.

  16. First NAC Image Obtained in Mercury Orbit

    NASA Image and Video Library

    2017-12-08

    NASA image acquired March 29, 2011. This is the first image of Mercury taken from orbit with MESSENGER's Narrow Angle Camera (NAC). MESSENGER's camera system, the Mercury Dual Imaging System (MDIS), has two cameras: the Narrow Angle Camera and the Wide Angle Camera (WAC). Comparison of this image with MESSENGER's first WAC image of the same region shows the substantial difference between the fields of view of the two cameras. At 1.5°, the field of view of the NAC is seven times smaller than the 10.5° field of view of the WAC. This image was taken using MDIS's pivot. MDIS is mounted on a pivoting platform and is the only instrument in MESSENGER's payload capable of movement independent of the spacecraft. The other instruments are fixed in place, and most point down the spacecraft's boresight at all times, relying solely on the guidance and control system for pointing. The 90° range of motion of the pivot gives MDIS a much-needed extra degree of freedom, allowing MDIS to image the planet's surface at times when spacecraft geometry would normally prevent it from doing so. The pivot also gives MDIS additional imaging opportunities by allowing it to view more of the surface than that at which the boresight-aligned instruments are pointed at any given time. On March 17, 2011 (March 18, 2011, UTC), MESSENGER became the first spacecraft ever to orbit the planet Mercury. The mission is currently in the commissioning phase, during which spacecraft and instrument performance are verified through a series of specially designed checkout activities. In the course of the one-year primary mission, the spacecraft's seven scientific instruments and radio science investigation will unravel the history and evolution of the Solar System's innermost planet. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

  17. Oblique View of Victoria Crater

    NASA Image and Video Library

    2009-08-12

    This image of Victoria Crater in the Meridiani Planum region of Mars was taken by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter at more of a sideways angle than earlier orbital images of this crater.

  18. Reconditioning of Cassini Narrow-Angle Camera

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These five images of single stars, taken at different times with the narrow-angle camera on NASA's Cassini spacecraft, show the effects of haze collecting on the camera's optics, then successful removal of the haze by warming treatments.

    The image on the left was taken on May 25, 2001, before the haze problem occurred. It shows a star named HD339457.

    The second image from left, taken May 30, 2001, shows the effect of haze that collected on the optics when the camera cooled back down after a routine-maintenance heating to 30 degrees Celsius (86 degrees Fahrenheit). The star is Maia, one of the Pleiades.

    The third image was taken on October 26, 2001, after a weeklong decontamination treatment at minus 7 C (19 F). The star is Spica.

    The fourth image was taken of Spica January 30, 2002, after a weeklong decontamination treatment at 4 C (39 F).

    The final image, also of Spica, was taken July 9, 2002, following three additional decontamination treatments at 4 C (39 F) for two months, one month, then another month.

    Cassini, on its way toward arrival at Saturn in 2004, is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.

  19. Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr

    2008-01-01

    The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is potentially useful for in-situ planetary geology applications for the close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from an opening through which it is inserted. The MARVEL could be inserted through the same opening as that of the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed, on the same monitor as that of the conventional endoscopic image, as an inset within the conventional endoscopic image. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues that would aid in precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest was not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with manipulation of this and other surgical tools. The handgrip of the tool would be connected to a linkage similar to that of an endo-scissor, but the linkage would be configured to enable adjustment of the camera angle instead of actuation of a scissor blade. It is envisioned that the thicknesses of the tool shaft and the camera would be less than 4 mm, so that the camera-tipped tool could be swiftly inserted and withdrawn through a dime-size opening. Electronic cameras having dimensions of the order of millimeters are already commercially available, but their designs are not optimized for use in endoscopic brain surgery. The variety of potential endoscopic, thoracoscopic, and laparoscopic applications can be expected to increase as further development of electronic cameras yields further miniaturization and improvements in imaging performance.

  20. Schiaparelli Crater Rim and Interior Deposits

    NASA Technical Reports Server (NTRS)

    1998-01-01

    A portion of the rim and interior of the large impact crater Schiaparelli is seen at different resolutions in images acquired October 18, 1997 by the Mars Orbiter Camera (MOC) on Mars Global Surveyor and by the Viking Orbiter 1 twenty years earlier. The left image is a MOC wide angle camera 'context' image showing much of the eastern portion of the crater at roughly 1 km (0.6 mi) per picture element. The image is about 390 by 730 km (240 x 450 miles). Shown within the wide angle image is the outline of a portion of the best Viking image (center, 371S53), acquired at a resolution of about 240 m (790 feet) per pixel. The area covered is 144 x 144 km (89 x 89 miles). The right image is the high resolution narrow angle camera view. The area covered is very small--3.9 x 10.2 km (2.4 x 6.33 mi)--but is seen at 63 times higher resolution than the Viking image. The subdued relief and bright surface are attributed to blanketing by dust; many small craters have been completely filled in, and only the most recent (and very small) craters appear sharp and bowl-shaped. Some of the small craters are only 10-12 m (30-35 feet) across. Occasional dark streaks on steeper slopes are small debris slides that have probably occurred in the past few decades. The two prominent, narrow ridges in the center of the image may be related to the adjustment of the crater floor to age or the weight of the material filling the basin.

    Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  1. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions, and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter, the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10-image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.
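
    A minimal sketch of the averaging scheme described in the Methods (array sizes follow the abstract; the patch size and the source of the frames are placeholders, not the authors' code):

        import numpy as np

        def mean_central_range(frames, half=20):
            """Average a central (2*half x 2*half) patch over a stack of range images."""
            stack = np.asarray(frames, dtype=float)          # shape (N, 424, 512)
            cy, cx = stack.shape[1] // 2, stack.shape[2] // 2
            return stack[:, cy - half:cy + half, cx - half:cx + half].mean()

        def measured_shift(frames_before, frames_after, half=20):
            """Table shift estimated as the change in averaged central range."""
            return mean_central_range(frames_after, half) - mean_central_range(frames_before, half)

        # The error analyzed in the abstract is measured_shift(...) minus the known
        # applied shift, tabulated against N, patch size, distance, and surface angle.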

  2. Staring at Saturn

    NASA Image and Video Library

    2016-09-15

    NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 355 kilometers per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047

  3. A state observer for using a slow camera as a sensor for fast control applications

    NASA Astrophysics Data System (ADS)

    Gahleitner, Reinhard; Schagerl, Martin

    2013-03-01

    This contribution concerns a problem that often arises in vision-based control, when a camera is used as a sensor for fast control applications, or more precisely, when the sample rate of the control loop is higher than the frame rate of the camera. In control applications for mechanical axes, e.g. in robotics or automated production, a camera and some image processing can be used as a sensor to detect positions or angles. The sample time in these applications is typically in the range of a few milliseconds or less, and this demands the use of a camera with a high frame rate up to 1000 fps. The presented solution is a special state observer that can work with a slower and therefore cheaper camera to estimate the state variables at the higher sample rate of the control loop. To simplify the image processing for the determination of positions or angles and make it more robust, some LED markers are applied to the plant. Simulation and experimental results show that the concept can be used even if the plant is unstable, like the inverted pendulum.
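
    One common way to realize such an observer is a discrete Luenberger observer that predicts at the fast control rate and applies a measurement correction only when a camera frame arrives. A minimal sketch with placeholder dynamics, rates, and gains (not the authors' design):

        import numpy as np

        # Plant: x[k+1] = A x[k] + B u[k]; the camera measures y = C x, but only
        # every FRAMES_EVERY control samples (e.g., a 50 fps camera in a 1 kHz loop).
        Ts = 1e-3                                   # control sample time (assumed)
        A = np.array([[1.0, Ts], [0.0, 1.0]])       # placeholder position/velocity dynamics
        B = np.array([[0.0], [Ts]])
        C = np.array([[1.0, 0.0]])                  # camera sees position (LED marker) only
        L = np.array([[0.4], [8.0]])                # observer gain (placeholder values)
        FRAMES_EVERY = 20

        x = np.array([[1.0], [0.0]])                # true plant state (simulated here)
        x_hat = np.zeros((2, 1))                    # observer estimate, updated at 1 kHz
        for k in range(1000):
            u = np.array([[0.0]])                   # control input from the fast loop
            x = A @ x + B @ u                       # simulate the plant
            x_hat = A @ x_hat + B @ u               # predict at the fast control rate
            if k % FRAMES_EVERY == 0:               # a new camera frame is available
                y = C @ x                           # marker position from image processing
                x_hat += L @ (y - C @ x_hat)        # slow-rate measurement correction

    Between frames the estimate evolves open-loop from the model; the correction gain trades convergence speed against amplification of image-processing noise.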

  4. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
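
    The core of such a model is the ideal equisolid-angle mapping r = 2 f sin(θ/2) from sun zenith angle to radial position on the image plane. A minimal sketch (the focal length in pixels, principal point, and camera azimuth offset are placeholder parameters of the kind the calibration would estimate):

        import numpy as np

        def sun_to_pixel(zenith, azimuth, f_px=500.0, cx=960.0, cy=960.0, cam_rot=0.0):
            """Project a sun direction (radians) onto the image plane of an ideal
            equisolid-angle fisheye: r = 2 * f * sin(theta / 2)."""
            r = 2.0 * f_px * np.sin(zenith / 2.0)      # radial distance from the principal point
            u = cx + r * np.sin(azimuth - cam_rot)     # pixel column
            v = cy - r * np.cos(azimuth - cam_rot)     # pixel row (image y grows downward)
            return u, v

    A calibration of this kind adjusts f_px, the principal point, the camera rotation, and any lens-distortion terms to minimize the residual between these predictions and the sun positions detected in the images over a day.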

  5. MESSENGER Departs Mercury

    NASA Image and Video Library

    2008-01-30

    After NASA MESSENGER spacecraft completed its successful flyby of Mercury, the Narrow Angle Camera NAC, part of the Mercury Dual Imaging System MDIS, took these images of the receding planet. This is a frame from an animation.

  6. Reconstruction of truncated TCT and SPECT data from a right-angle dual-camera system for myocardial SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.

    1996-12-31

    We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms, with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and were more accurate than those using the FBP. However, the TCT images obtained with support show fewer image artifacts than without support. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest. The BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.

  7. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.

  8. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    PubMed Central

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954
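
    For reference, the collinearity equations that both versions of this record start from have the standard form (conventional photogrammetric notation; the paper's exact simplified model is not reproduced here):

        \begin{aligned}
        x - x_0 &= -f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}{r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},\\[4pt]
        y - y_0 &= -f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}{r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
        \end{aligned}

    Here (x_0, y_0, f) is the interior orientation, (X_S, Y_S, Z_S) the projection center, and r_ij the elements of the rotation matrix built from (ω, φ, κ). Fixing the tilt angle at exactly 90° zeroes several r_ij terms, which is what reduces the model; intersecting the rays from the two images then yields the ground coordinates X, Y, Z.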

  9. An effective rectification method for lenselet-based plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Cao, Yiwei; Cai, Weijia; Zheng, Wanlu; Zhou, Ping

    2016-10-01

    The lenselet-based plenoptic camera has recently drawn a lot of attention in the field of computational photography. The additional information inherent in the light field allows a wide range of applications, but some preliminary processing of the raw image is necessary before further operations. In this paper, an effective method is presented for the rotation rectification of the raw image. The rotation is caused by imperfect positioning of the micro-lens array relative to the sensor plane in commercially available Lytro plenoptic cameras. The key to our method is locating the center of each micro-lens image. Because of vignetting, the pixel values at the centers of the micro-lens images are higher than those at the peripheries. A mask is applied to probe the micro-lens image and locate the center area by finding the local maximum response. The error of the center coordinate estimate is corrected, and the angle of rotation is computed via a subsequent line fitting. The algorithm was performed on two images captured by different Lytro cameras. The angles of rotation are -0.3600° and -0.0621°, respectively, and the rectified raw image is useful and reliable for further operations, such as extraction of the sub-aperture images. The experimental results demonstrate that our method is efficient and accurate.
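
    A minimal sketch of the two steps the abstract describes: center detection by local-maximum probing and rotation recovery by line fitting (scipy assumed; the lenslet pitch and the row-selection heuristic are crude placeholders):

        import numpy as np
        from scipy.ndimage import maximum_filter

        def microlens_centers(raw, pitch=10):
            """Detect micro-lens image centers as local maxima of the raw image;
            vignetting makes each micro-lens image brightest at its center."""
            is_peak = maximum_filter(raw, size=pitch) == raw
            ys, xs = np.nonzero(is_peak & (raw > raw.mean()))   # drop maxima in dark gaps
            return xs, ys

        def grid_rotation_deg(xs, ys, row_tol=3):
            """Fit a line through one row of centers; its slope is the rotation of
            the lenslet grid relative to the sensor axes (crude row selection)."""
            row = np.abs(ys - ys[0]) < row_tol
            slope, _intercept = np.polyfit(xs[row], ys[row], 1)
            return np.degrees(np.arctan(slope))

    Rectification is then a rotation of the raw image by the negative of this angle; the paper additionally corrects the center-coordinate estimates before the fit.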

  10. MISR INteractive eXplorer

    NASA Technical Reports Server (NTRS)

    Nelson, David L.; Diner, David J.; Thompson, Charles K.; Hall, Jeffrey R.; Rheingans, Brian E.; Garay, Michael J.; Mazzoni, Dominic

    2010-01-01

    MISR (Multi-angle Imaging SpectroRadiometer) INteractive eXplorer (MINX) is an interactive visualization program that allows a user to digitize smoke, dust, or volcanic plumes in MISR multiangle images, and automatically retrieve height and wind profiles associated with those plumes. This innovation can perform 9-camera animations of MISR level-1 radiance images to study the 3D relationships of clouds and plumes. MINX also enables archiving MISR aerosol properties and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power along with the heights and winds. It can correct geometric misregistration between cameras by correlating off-nadir camera scenes with corresponding nadir scenes and then warping the images to minimize the misregistration offsets. Plots of BRF (bidirectional reflectance factor) vs. camera angle for points clicked in an image can be displayed. Users get rapid access to map views of MISR path and orbit locations and overflight dates, and past or future orbits can be identified that pass over a specified location at a specified time. Single-camera, level-1 radiance data at 1,100- or 275- meter resolution can be quickly displayed in color using a browse option. This software determines the heights and motion vectors of features above the terrain with greater precision and coverage than previous methods, based on an algorithm that takes wind direction into consideration. Human interpreters can precisely identify plumes and their extent, and wind direction. Overposting of MODIS thermal anomaly data aids in the identification of smoke plumes. The software has been used to preserve graphical and textural versions of the digitized data in a Web-based database.

  11. Spheres of Earth: An Introduction to Making Observations of Earth Using an Earth System's Science Approach. Student Guide

    NASA Technical Reports Server (NTRS)

    Graff, Paige Valderrama; Baker, Marshalyn (Editor); Graff, Trevor (Editor); Lindgren, Charlie (Editor); Mailhot, Michele (Editor); McCollum, Tim (Editor); Runco, Susan (Editor); Stefanov, William (Editor); Willis, Kim (Editor)

    2010-01-01

    Scientists from the Image Science and Analysis Laboratory (ISAL) at NASA's Johnson Space Center (JSC) work with astronauts onboard the International Space Station (ISS) who take images of Earth. Astronaut photographs, sometimes referred to as Crew Earth Observations, are taken using hand-held digital cameras onboard the ISS. These digital images allow scientists to study our Earth from the unique perspective of space. Astronauts have taken images of Earth since the 1960s. There is a database of over 900,000 astronaut photographs available at http://eol.jsc.nasa.gov . Images are requested by ISAL scientists at JSC and astronauts in space personally frame and acquire them from the Destiny Laboratory or other windows in the ISS. By having astronauts take images, they can specifically frame them according to a given request and need. For example, they can choose to use different lenses to vary the amount of area (field of view) an image will cover. Images can be taken at different times of the day, which allows different lighting conditions to bring out or highlight certain features. The viewing angle at which an image is acquired can also be varied to show the same area from different perspectives. Pointing the camera straight down gives you a nadir shot. Pointing the camera at an angle to get a view across an area would be considered an oblique shot. Being able to change these variables makes astronaut photographs a unique and useful data set. Astronaut photographs are taken from the ISS from altitudes of 300 - 400 km (185 to 250 miles). One of the current cameras being used, the Nikon D3X digital camera, can take images using a 50, 100, 250, 400 or 800mm lens. These different lenses allow for a wider or narrower field of view. The higher the focal length (800mm for example) the narrower the field of view (less area will be covered). Higher focal lengths also show greater detail of the area on the surface being imaged. There are four major systems or spheres of Earth. They are: Atmosphere, Biosphere, Hydrosphere, and Litho/Geosphere.
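
    The focal-length/field-of-view trade-off described above follows the pinhole relation FOV = 2·arctan(d / 2f). A small sketch, assuming the Nikon D3X's roughly 36 mm wide full-frame sensor:

        import math

        SENSOR_WIDTH_MM = 36.0   # approximate full-frame sensor width (assumption)

        def horizontal_fov_deg(focal_length_mm):
            """Angular field of view across the sensor width for a pinhole lens model."""
            return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

        for f in (50, 100, 250, 400, 800):
            print(f"{f} mm lens -> {horizontal_fov_deg(f):.1f} deg horizontal field of view")

    Under these assumptions a 50 mm lens covers about a 40° swath while an 800 mm lens covers under 3°, which is why the longer lenses show far less area in far more detail.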

  12. Miniature Wide-Angle Lens for Small-Pixel Electronic Camera

    NASA Technical Reports Server (NTRS)

    Mouroulils, Pantazis; Blazejewski, Edward

    2009-01-01

    A proposed wide-angle lens would be especially well suited for an electronic camera in which the focal plane is occupied by an image sensor that has small pixels. The design of the lens is intended to satisfy requirements for compactness, high image quality, and reasonably low cost, while addressing issues peculiar to the operation of small-pixel image sensors. Hence, this design is expected to enable the development of a new generation of compact, high-performance electronic cameras. The lens example shown has a 60 degree field of view and a relative aperture (f-number) of 3.2. The main issues affecting the design are also discussed.

  13. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation data, which affect space camera imaging, have been prepared, and their effects on imaging analyzed. Based on the image-motion-velocity matching error required by the TDI CCD's integration stages, the Monte Carlo statistical method is used to calculate the distribution histogram of the Earth's elevation in an image motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude, and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data. The Earth elevation map produced for aerospace electronic cameras is compressed and spliced, so that elevation data can be fetched from flash memory according to the latitude and longitude of the shooting point. When the requested point falls between two stored samples, the lookup uses linear interpolation, which better accommodates rugged mountain and hill terrain. Finally, a deviant framework and camera controller are used to test the character of deviant-angle errors, and a TDI CCD camera simulation system with a model mapping material points to imaging points is used to analyze the imaging MTF and a cross-correlation similarity measure; the simulation system accumulates the horizontal and vertical offsets by which TDI CCD imaging exceeds the corresponding pixel in order to simulate camera imaging when satellite attitude stability changes. The process is practical: it can effectively control the camera memory space while meeting the TDI CCD camera's image-motion-velocity matching and imaging precision requirements.
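
    The linear-interpolation lookup described above can be sketched as follows (a minimal bilinear illustration with a hypothetical grid layout and sample spacing, not the authors' flash-memory format):

        import numpy as np

        def elevation_at(lat, lon, grid, lat0, lon0, step):
            """Bilinearly interpolate an elevation grid (e.g., derived from SRTM) at
            the shooting point; grid[i, j] holds elevation at (lat0 + i*step, lon0 + j*step)."""
            fi = (lat - lat0) / step
            fj = (lon - lon0) / step
            i, j = int(fi), int(fj)
            di, dj = fi - i, fj - j
            return ((1 - di) * (1 - dj) * grid[i, j] + (1 - di) * dj * grid[i, j + 1]
                    + di * (1 - dj) * grid[i + 1, j] + di * dj * grid[i + 1, j + 1])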

  14. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted by analyzing all angles of light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture: a digital micromirror device (DMD) will replace the liquid crystal. This device can be qualified for harsh environments for 4D light field imaging, enabling an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified, and the lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can. Robots need near-complete stereo images for autonomous navigation, manipulation, and depth approximation. The imaging system can provide visual feedback

  15. Cartography of the Luna-21 landing site and Lunokhod-2 traverse area based on Lunar Reconnaissance Orbiter Camera images and surface archive TV-panoramas

    NASA Astrophysics Data System (ADS)

    Karachevtseva, I. P.; Kozlova, N. A.; Kokhanov, A. A.; Zubarev, A. E.; Nadezhdina, I. E.; Patratiy, V. D.; Konopikhin, A. A.; Basilevsky, A. T.; Abdrakhimov, A. M.; Oberst, J.; Haase, I.; Jolliff, B. L.; Plescia, J. B.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) system consists of a Wide Angle Camera (WAC) and a Narrow Angle Camera (NAC). NAC images (∼0.5 to 1.7 m/pixel) reveal details of the Luna-21 landing site and the Lunokhod-2 traverse area. We derived a Digital Elevation Model (DEM) and an orthomosaic for the study region using photogrammetric stereo processing techniques with NAC images. The DEM and mosaic allowed us to analyze the topography and morphology of the landing site area and to map the Lunokhod-2 rover route. The total range of topographic elevation along the traverse was found to be less than 144 m, and the rover encountered slopes of up to 20°. With the orthomosaic tied to the lunar reference frame, we derived coordinates of the Lunokhod-2 landing module and overnight stop points. We identified the exact rover route by following its tracks and determined its total length to be 39.16 km, more than was estimated during the mission (37 km); until recently this stood as a distance record for planetary robotic rovers, held for more than 40 years.

  16. Surface compositional variation on the comet 67P/Churyumov-Gerasimenko by OSIRIS data

    NASA Astrophysics Data System (ADS)

    Barucci, M. A.; Fornasier, S.; Feller, C.; Perna, D.; Hasselmann, H.; Deshapriya, J. D. P.; Fulchignoni, M.; Besse, S.; Sierks, H.; Forgia, F.; Lazzarin, M.; Pommerol, A.; Oklay, N.; Lara, L.; Scholten, F.; Preusker, F.; Leyrat, C.; Pajola, M.; Osiris-Rosetta Team

    2015-10-01

    Since the Rosetta mission arrived at comet 67P/Churyumov-Gerasimenko (67P/C-G) in July 2014, the comet nucleus has been mapped by both the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System, [1]) NAC (Narrow Angle Camera) and WAC (Wide Angle Camera), acquiring a huge quantity of images of the surface in different wavelength bands, under variable illumination conditions and spatial resolutions, and producing the most detailed maps of a comet nucleus surface at the highest spatial resolution to date. 67P/C-G's nucleus shows an irregular bi-lobed shape of complex morphology, with terrains showing intricate features [2, 3] and a surface that is heterogeneous at different scales.

  17. Curiosity Rover View of Alluring Martian Geology Ahead

    NASA Image and Video Library

    2015-08-05

    A southward-looking panorama combining images from both cameras of the Mast Camera (Mastcam) instrument on NASA's Curiosity Mars Rover shows diverse geological textures on Mount Sharp. Three years after landing on Mars, the mission is investigating this layered mountain for evidence about changes in Martian environmental conditions, from an ancient time when conditions were favorable for microbial life to the much-drier present. Gravel and sand ripples fill the foreground, typical of terrains that Curiosity traversed to reach Mount Sharp from its landing site. Outcrops in the midfield are of two types: dust-covered, smooth bedrock that forms the base of the mountain, and sandstone ridges that shed boulders as they erode. Rounded buttes in the distance contain sulfate minerals, perhaps indicating a change in the availability of water when they formed. Some of the layering patterns on higher levels of Mount Sharp in the background are tilted at different angles than others, evidence of complicated relationships still to be deciphered. The scene spans from southeastward at left to southwestward at right. The component images were taken on April 10 and 11, 2015, the 952nd and 953rd Martian days (or sols) since the rover's landing on Mars on Aug. 6, 2012, UTC (Aug. 5, PDT). Images in the central part of the panorama are from Mastcam's right-eye camera, which is equipped with a 100-millimeter-focal-length telephoto lens. Images used in outer portions, including the most distant portions of the mountain in the scene, were taken with Mastcam's left-eye camera, using a wider-angle, 34-millimeter lens. http://photojournal.jpl.nasa.gov/catalog/PIA19803

  18. A Spectralon BRF Data Base for MISR Calibration Application

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Chrien, N.; Haner, D.

    1999-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.

  19. MISR Images Forest Fires and Hurricane

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These images show forest fires raging in Montana and Hurricane Hector swirling in the Pacific. These two unrelated, large-scale examples of nature's fury were captured by the Multi-angle Imaging SpectroRadiometer (MISR) during a single orbit of NASA's Terra satellite on August 14, 2000.

    In the left image, huge smoke plumes rise from devastating wildfires in the Bitterroot Mountain Range near the Montana-Idaho border. Flathead Lake is near the upper left, and the Great Salt Lake is at the bottom right. Smoke accumulating in the canyons and plains is also visible. This image was generated from the MISR camera that looks forward at a steep angle (60 degrees); the instrument has nine different cameras viewing Earth at different angles. The smoke is far more visible when seen at this highly oblique angle than it would be in a conventional, straight-downward (nadir) view. The wide extent of the smoke is evident from comparison with the image on the right, a view of Hurricane Hector acquired from MISR's nadir-viewing camera. Both images show an area of approximately 400 kilometers (250 miles) in width and about 850 kilometers (530 miles) in length.

    When this image of Hector was taken, the eastern Pacific tropical cyclone was located approximately 1,100 kilometers (680 miles) west of the southern tip of Baja California, Mexico. The eye is faintly visible and measures 25 kilometers (16 miles) in diameter. The storm was beginning to weaken, and 24 hours later the National Weather Service downgraded Hector from a hurricane to a tropical storm.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

    For more information: http://www-misr.jpl.nasa.gov

  20. Object tracking with robotic total stations: Current technologies and improvements based on image data

    NASA Astrophysics Data System (ADS)

    Ehrhart, Matthias; Lienhart, Werner

    2017-09-01

    The importance of automated prism tracking is increasingly triggered by the rising automation of total station measurements in machine control, monitoring and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search for a prism, to precisely aim at a prism, and to identify whether the correct prism is tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera which is available for many total stations today. In cases in which the total station's fine aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach for the coarse target search is 4 to 10 times faster than conventional approaches.
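
    For a simple pinhole model, deriving angle offsets toward the target from its image coordinates reduces to two arctangents. A minimal sketch (the focal length in pixels and the principal point are placeholders for the instrument's actual camera calibration):

        import math

        def pixel_to_angles(u, v, cx, cy, focal_px):
            """Convert image coordinates of the tracked target into horizontal and
            vertical angle offsets from the camera axis (pinhole camera model)."""
            d_hz = math.atan2(u - cx, focal_px)   # horizontal angle offset (rad)
            d_v  = math.atan2(v - cy, focal_px)   # vertical angle offset (rad)
            return d_hz, d_v

        # Adding these offsets to the current theodolite angles gives a pointing
        # command for fast reacquisition of the prism.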

  1. [Reliability of retinal imaging screening in retinopathy of prematurity].

    PubMed

    Navarro-Blanco, C; Peralta-Calvo, J; Pastora-Salvador, N; Alvarez-Rementería, L; Chamorro, E; Sánchez-Ramos, C

    2014-09-01

    The retinopathy of prematurity (ROP) is a potentially avoidable cause of blindness in children. The advances in neonatal care make the survival of extremely premature infants, who show a greater incidence of the disease, possible. The aim of the study is to evaluate the reliability of ROP screening using retinography imaging with the RetCam 3 wide-angle camera and also study the variability of ROP diagnosis depending on the evaluator. The indirect ophthalmoscopy exam was performed by a Pediatric ROP-Expert Ophthalmologist. The same ophthalmologist and a technician specialized in digital image capture took retinal images using the RetCam 3 wide-angle camera. A total of 30 image sets were analyzed by 3 masked groups: group A (8 ophthalmologists), group B (5 experts in vision), and group C (2 ROP-expert ophthalmologists). According to the diagnosis using indirect ophthalmoscopy, the sensitivity (26-93), Kappa (0.24-0.80), and the percent agreement were statistically significant in group C for the diagnosis of ROP Type 1. In the diagnosis of ROP Type 1+Type 2, Kappa (0.17-0.33) and the percent agreement (58-90) were statistically significant, with higher values in group C. The diagnosis, carried out by ROP-expert ophthalmologists, using the wide-angle camera RetCam 3 has proved to be a reliable method. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  2. Reflecting on Icy Rhea

    NASA Image and Video Library

    2009-11-03

    Bright sunlight on Rhea shows off the cratered surface of Saturn's second largest moon in this image captured by NASA's Cassini orbiter. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 21, 2009.

  3. Clementine Observes the Moon, Solar Corona, and Venus

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In 1994, during its flight, the Clementine spacecraft returned images of the Moon. In addition to the geologic mapping cameras, the Clementine spacecraft also carried two Star Tracker cameras for navigation. These lightweight (0.3 kg) cameras kept the spacecraft on track by constantly observing the positions of stars, reminiscent of the age-old seafaring tradition of sextant/star navigation. These navigation cameras were also used to take some spectacular wide-angle images of the Moon.

    In this picture the Moon is seen illuminated solely by light reflected from the Earth--Earthshine! The bright glow on the lunar horizon is caused by light from the solar corona; the sun is just behind the lunar limb. Caught in this image is the planet Venus at the top of the frame.

  4. Wide Angle Movie

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This brief movie illustrates the passage of the Moon through the Saturn-bound Cassini spacecraft's wide-angle camera field of view as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. From beginning to end of the sequence, 25 wide-angle images (with a spatial image scale of about 14 miles (about 23 kilometers) per pixel) were taken over the course of 7 and 1/2 minutes through a series of narrow and broadband spectral filters and polarizers, ranging from the violet to the near-infrared regions of the spectrum, to calibrate the spectral response of the wide-angle camera. The exposure times range from 5 milliseconds to 1.5 seconds. Two of the exposures were smeared and have been discarded and replaced with nearby images to make a smooth movie sequence. All images were scaled so that the brightness of Crisium basin, the dark circular region in the upper right, is approximately the same in every image. The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  5. Non-uniform refractive index field measurement based on light field imaging technique

    NASA Astrophysics Data System (ADS)

    Du, Xiaokun; Zhang, Yumin; Zhou, Mengjie; Xu, Dong

    2018-02-01

    In this paper, a method for measuring a non-uniform refractive index field based on the light field imaging technique is proposed. First, a light field camera is used to collect the four-dimensional light field data, and the light field data are then decoded according to the light field imaging principle to obtain image sequences of the refractive index field at different acquisition angles. Subsequently, the PIV (Particle Image Velocimetry) technique is used to extract the ray offset of each image. Finally, the distribution of the non-uniform refractive index field is calculated by inverting the deflection of the light rays. Compared with traditional optical methods, which require multiple optical detectors collecting data synchronously from multiple angles, the method proposed in this paper needs only a light field camera and a single shot. The effectiveness of the method has been verified by an experiment that quantitatively measures the distribution of the refractive index field above the flame of an alcohol lamp.
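
    The PIV step amounts to estimating local displacements between a reference view and a distorted view. A minimal sketch of one common displacement estimator, FFT phase correlation on a single interrogation window (integer-pixel only; numpy assumed, not the authors' PIV implementation):

        import numpy as np

        def window_shift(ref, img):
            """Integer-pixel displacement between two interrogation windows via
            FFT phase correlation (the core operation of a basic PIV step)."""
            R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
            corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > ref.shape[0] // 2:
                dy -= ref.shape[0]                 # map wrap-around peaks to signed shifts
            if dx > ref.shape[1] // 2:
                dx -= ref.shape[1]
            return dx, dy

    Repeating this over a grid of windows in each decoded view gives the ray-offset field that is then inverted for the refractive index distribution.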

  6. A Summer View of Russia's Lena Delta and Olenek

    NASA Technical Reports Server (NTRS)

    2004-01-01

    These views of the Russian Arctic were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument on July 11, 2004, when the brief arctic summer had transformed the frozen tundra and the thousands of lakes, channels, and rivers of the Lena Delta into a fertile wetland, and when the usual blanket of thick snow had melted from the vast plains and taiga forests. This set of three images covers an area in the northern part of the Eastern Siberian Sakha Republic. The Olenek River wends northeast from the bottom of the images to the upper left, and the top portions of the images are dominated by the delta into which the mighty Lena River empties when it reaches the Laptev Sea. At left is a natural color image from MISR's nadir (vertical-viewing) camera, in which the rivers appear murky due to the presence of sediment, and photosynthetically-active vegetation appears green. The center image is also from MISR's nadir camera, but is a false color view in which the predominant red color is due to the brightness of vegetation at near-infrared wavelengths. The most photosynthetically active parts of this area are the Lena Delta, in the lower half of the image, and throughout the great stretch of land that curves across the Olenek River and extends northeast beyond the relatively barren ranges of the Volyoi mountains (the pale tan-colored area to the right of image center).

    The right-hand image is a multi-angle false-color view made from the red band data of the 60° backward, nadir, and 60° forward cameras, displayed as red, green and blue, respectively. Water appears blue in this image because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. Much of the landscape and many low clouds appear purple since these surfaces are both forward and backward scattering, and clouds that are further from the surface appear in a different spot for each view angle, creating a rainbow-like appearance. However, the vegetated region that is darker green in the natural color nadir image also appears to exhibit a faint greenish hue in the multi-angle composite. A possible explanation for this subtle green effect is that the taiga forest trees (or dwarf-shrubs) are not too dense here. Since the nadir camera is more likely to observe any gaps between the trees or shrubs, and since the vegetation is not as bright (in the red band) as the underlying soil or surface, the brighter underlying surface results in an area that is relatively brighter at the nadir view angle. Accurate maps of vegetation structural units are an essential part of understanding the seasonal exchanges of energy and water at the Earth's surface, and of preserving the biodiversity in these regions.

    The Multiangle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 24273. The panels cover an area of about 230 kilometers x 420 kilometers, and utilize data from blocks 30 to 34 within World Reference System-2 path 134.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  7. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our result shows that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" by ordinary cameras is not achievable.
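
    The "lucky image" selection mentioned above is commonly implemented by ranking frames with a sharpness score and keeping the best. A minimal sketch using variance of the Laplacian as the score (OpenCV assumed; this is the generic selection step, not the authors' reconstruction pipeline):

        import cv2

        def luckiest_frame(frames):
            """Return the frame with the highest variance-of-Laplacian sharpness score."""
            def sharpness(img):
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                return cv2.Laplacian(gray, cv2.CV_64F).var()
            return max(frames, key=sharpness)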

  8. Study on the measurement system of the target polarization characteristics and test

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Zhu, Yong; Zhang, Su; Duan, Jin; Yang, Di; Zhan, Juntong; Wang, Xiaoman; Jiang, Hui-Lin

    2015-10-01

    Polarization imaging detection adds polarization information to intensity imaging and is widely applied in military, civil, and other fields, so research on the polarization characteristics of targets is particularly important. This paper introduces research on a polarization reflection model, which describes the distribution of scattered light energy and its polarization characteristics over the reflecting hemisphere, and proposes a test system for target polarization characteristics built from an illumination source, a measuring turntable, and a camera. The illumination is a direct light source, with both laser and xenon-lamp sources that can be exchanged according to the test needs. A hemispherical structure is used for the measurements, with the material sample placed near its base; the structure is equipped with azimuth and pitch rotation mechanisms so that the azimuth and elevation observation angles can be adjusted manually. The measuring camera works with a motor-controlled rotating polarizer to carry out the polarization tests, ensuring measurement accuracy and imaging resolution. A test platform was set up from existing laboratory equipment, with a 532 nm laser and a camera fitted with a linear polarizer, together with transmitting and receiving optical systems. For different materials such as wood, metal, and plastic, and under different azimuth angles, zenith angles, observation conditions, and exposure conditions, the polarization scattering properties of targets were measured, implementing a hemispherical-space pBRDF measurement.
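
    A rotating linear polarizer of the kind described typically yields intensity images at four orientations (0°, 45°, 90°, 135°), from which the linear Stokes parameters and the degree and angle of linear polarization follow. A minimal per-pixel sketch (the four-angle scheme is a common convention, assumed here rather than taken from the authors' motor sequence):

        import numpy as np

        def linear_stokes(I0, I45, I90, I135):
            """Per-pixel linear Stokes parameters from four polarizer orientations."""
            S0 = 0.5 * (I0 + I45 + I90 + I135)                      # total intensity
            S1 = I0 - I90
            S2 = I45 - I135
            dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)    # degree of linear polarization
            aop = 0.5 * np.arctan2(S2, S1)                          # angle of polarization (rad)
            return S0, S1, S2, dolp, aop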

  9. Measurement of cosmic-ray muons with the Distributed Electronic Cosmic-ray Observatory, a network of smartphones

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, J.; BenZvi, S.; Bravo, S.; Jensen, K.; Karn, P.; Meehan, M.; Peacock, J.; Plewa, M.; Ruggles, T.; Santander, M.; Schultz, D.; Simons, A. L.; Tosi, D.

    2016-04-01

    Solid-state camera image sensors can be used to detect ionizing radiation in addition to optical photons. We describe the Distributed Electronic Cosmic-ray Observatory (DECO), an app and associated public database that enables a network of consumer devices to detect cosmic rays and other ionizing radiation. In addition to terrestrial background radiation, cosmic-ray muon candidate events are detected as long, straight tracks passing through multiple pixels. The distribution of track lengths can be related to the thickness of the active (depleted) region of the camera image sensor through the known angular distribution of muons at sea level. We use a sample of candidate muon events detected by DECO to measure the thickness of the depletion region of the camera image sensor in a particular consumer smartphone model, the HTC Wildfire S. The track length distribution is fit better by a cosmic-ray muon angular distribution than an isotropic distribution, demonstrating that DECO can detect and identify cosmic-ray muons despite a background of other particle detections. Using the cosmic-ray distribution, we measure the depletion thickness to be 26.3 ± 1.4 μm. With additional data, the same method can be applied to additional models of image sensor. Once measured, the thickness can be used to convert track length to incident polar angle on a per-event basis. Combined with a determination of the incident azimuthal angle directly from the track orientation in the sensor plane, this enables direction reconstruction of individual cosmic-ray events using a single consumer device. The results simultaneously validate the use of cell phone camera image sensors as cosmic-ray muon detectors and provide a measurement of a parameter of camera image sensor performance which is not otherwise publicly available.
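
    A minimal sketch of the per-event direction reconstruction the abstract describes, given the measured depletion thickness (track endpoints and units here are hypothetical):

        import math

        DEPLETION_THICKNESS_UM = 26.3   # measured for the HTC Wildfire S sensor (from the abstract)

        def incident_angles(track_length_um, x0, y0, x1, y1):
            """Reconstruct a muon's incident direction from one sensor track:
            polar angle from track length vs. depletion depth, azimuth from the
            track's orientation in the sensor plane."""
            polar = math.atan2(track_length_um, DEPLETION_THICKNESS_UM)  # 0 = vertical incidence
            azimuth = math.atan2(y1 - y0, x1 - x0)                       # in-plane track direction
            return polar, azimuth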

  10. System of technical vision for autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in the frames from a camera mounted on a UAV. The trained classifier is invariant to rotation as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the system of technical vision to more accurately determine the location of the objects of interest and their movement relative to the camera.

  11. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    PubMed

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

    The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness values calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is needed to acquire the five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast acquisition of images [4]. It is an analog camera, but the camera image is captured by a PC with an effective image resolution of 1280 × 1023 pixels. This resolution is lower than the 4096 × 4096 pixels of the SS-CCD camera, but the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that each sharpness value has a sufficiently low fitting error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate the differently defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. It took one second to correct an image position, and the total correction time was seven seconds, shorter by an order of magnitude than with the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image, and we can obtain a tilt series of 61 images within 30 minutes. Accuracy and repeatability were good enough for practical use (Fig. 1). We successfully reduced the total acquisition time of a tomography tilt series to half of what it was before. [Fig. 1: Objective lens current change with tilt angle during acquisition of a tomography series (sample: a rat hepatocyte; thickness: 2 μm; magnification: 4k; acc. voltage: 2 MV). Tilt angle range is ±60 degrees with a 2 degree step. Two series acquired over the same area were almost identical, and the deviation was smaller than the minimum manual step, so the auto-focus worked well.] We also developed a computer-aided three-dimensional (3D) visualization and analysis software for electron tomography, "HawkC", which can sectionalize the 3D data semi-automatically [5,6]. If this auto-acquisition system is used with the IMOD reconstruction software [7] and the HawkC software, we will be able to do on-line UHVEM tomography. The system would help pathology examination in the future. This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under a Grant-in-Aid for Scientific Research (Grant No. 23560024, 23560786), and SENTAN, Japan Science and Technology Agency, Japan. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
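
    The AFS procedure described above can be sketched as follows (the sharpness metric and the exact form of the quasi-Gaussian are illustrative assumptions; scipy assumed):

        import numpy as np
        from scipy.optimize import curve_fit

        def sharpness(img):
            """Simple gradient-energy sharpness score for one image."""
            gy, gx = np.gradient(img.astype(float))
            return float(np.mean(gx**2 + gy**2))

        def best_focus(defocus_values, images):
            """Fit sharpness vs. defocus to a Gaussian and return the peak position."""
            s = np.array([sharpness(im) for im in images])
            gauss = lambda x, a, mu, sig, c: a * np.exp(-((x - mu) ** 2) / (2 * sig ** 2)) + c
            p0 = [s.max() - s.min(), defocus_values[int(np.argmax(s))],
                  np.std(defocus_values), s.min()]
            popt, _ = curve_fit(gauss, defocus_values, s, p0=p0)
            return popt[1]   # defocus value giving maximum sharpness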

  12. Neptune

    NASA Image and Video Library

    1999-07-25

This image of Neptune was taken through the clear filter of the narrow-angle camera on July 16, 1989 by NASA's Voyager 2 spacecraft. The image was processed by computer to show the newly resolved dark oval feature embedded in the middle of the dusky south

  13. Colorado

    Atmospheric Science Data Center

    2014-05-15

    ... the Multi-angle Imaging SpectroRadiometer (MISR). On the left, a natural-color view acquired by MISR's vertical-viewing (nadir) camera ... Gunnison River at the city of Grand Junction. The striking "L" shaped feature in the lower image center is a sandstone monocline known as ...

  14. The Uses of a Polarimetric Camera

    DTIC Science & Technology

    2008-09-01

... are displayed in this thesis the author used two different lenses. One of the lenses is an ARSAT H 20mm with an F number of 2.8. This lens was used ... for all the wide angle images collected. For the telephoto images collected, the author used a NIKKOR 200mm lens which has an F number of 4.0 ...

  15. Expansion of the visual angle of a car rear-view image via an image mosaic algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng

    2015-05-01

The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, previous studies by both domestic and foreign researchers were based on a single image-capture device used while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three light conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were then rapidly registered through the scale-invariant feature transform algorithm. Combined with the random sample consensus algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well even in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. Compared to the mean value method (MVM) and the segmental fusion method (SFM), the presented rear-view image mosaic algorithm had the best information preservation, the shortest computation time, and the most complete preservation of image detail features from the source images; it also performed better in real time. The method introduced in this paper provides a basis for research on expanding the visual angle of a car rear-view image in all-weather conditions.
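    A compact sketch of this pipeline (SIFT registration, RANSAC homography, then a linear weighted blend) in OpenCV terms; canvas sizing and the blend ramp are simplified assumptions, and single-channel (grayscale) inputs are assumed.

    ```python
    # Sketch of the mosaicking pipeline named above: SIFT matching, RANSAC
    # homography, then a linear weighted ("in-and-out") blend. The right
    # image is warped into the left image's frame; grayscale inputs assumed.
    import cv2
    import numpy as np

    def mosaic(left, right):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(left, None)
        k2, d2 = sift.detectAndCompute(right, None)
        # Ratio-test matching, then robust homography with RANSAC.
        matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = left.shape[:2]
        canvas = cv2.warpPerspective(right, H, (2 * w, h))
        # Linear in-and-out weights across the overlapping columns.
        overlap = canvas[:, :w] > 0
        alpha = np.tile(np.linspace(1.0, 0.0, w), (h, 1))
        blend = alpha * left + (1 - alpha) * canvas[:, :w]
        canvas[:, :w] = np.where(overlap, blend.astype(canvas.dtype), left)
        return canvas
    ```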

  16. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

The three pairs of stereo camera setups aim to provide synchronized, stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired to obtain a 3D reconstruction by triangulation, and the 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from the surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
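    As a sketch of the triangulation step, assuming the handbook's calibration yields 3x4 projection matrices for the two cameras of a pair (matched pixel coordinates are taken as given):

    ```python
    # Triangulate cloud features seen by both cameras of one stereo pair.
    # P1, P2: 3x4 projection matrices from the stereo calibration (assumed).
    import cv2
    import numpy as np

    def triangulate(P1, P2, pts1, pts2):
        # pts1, pts2: matched pixel coordinates, shape (N, 2).
        X = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float32),
                                  pts2.T.astype(np.float32))
        return (X[:3] / X[3]).T  # homogeneous -> Euclidean, shape (N, 3)
    ```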

  17. Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera

    NASA Astrophysics Data System (ADS)

    Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.

    2017-12-01

From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, MASC results have so far usually been presented monthly or seasonally, with particle sizes given as histograms; no previous studies have used the MASC for a single-storm study, and no researchers have used the MASC to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of a MASC-based PSD. We present PSD experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the single-camera PSDs measured by the MASC is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over the three cameras) with a collocated 2D Video Disdrometer and observe good agreement between the two sets of results.
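    A minimal sketch of turning per-particle sizes into a PSD; the effective sampling volume is an assumed instrument constant here, and the authors' full algorithm is not reproduced.

    ```python
    # Form a number concentration PSD N(D) from per-particle maximum
    # dimensions: counts per size bin, normalized by sampling volume
    # and bin width.
    import numpy as np

    def psd(diameters_mm, sample_volume_m3, bins_mm):
        counts, edges = np.histogram(diameters_mm, bins=bins_mm)
        widths = np.diff(edges)                     # bin widths in mm
        n_d = counts / (sample_volume_m3 * widths)  # per mm per m^3
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, n_d
    ```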

  18. MISR Scans the Texas-Oklahoma Border

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These MISR images of Oklahoma and north Texas were acquired on March 12, 2000 during Terra orbit 1243. The three images on the left, from top to bottom, are from the 70-degree forward viewing camera, the vertical-viewing (nadir) camera, and the 70-degree aftward viewing camera. The higher brightness, bluer tinge, and reduced contrast of the oblique views result primarily from scattering of sunlight in the Earth's atmosphere, though some color and brightness variations are also due to differences in surface reflection at the different angles. The longer slant path through the atmosphere at the oblique angles also accentuates the appearance of thin, high-altitude cirrus clouds.

    On the right, two areas from the nadir camera image are shown in more detail, along with notations highlighting major geographic features. The south bank of the Red River marks the boundary between Texas and Oklahoma. Traversing brush-covered and grassy plains, rolling hills, and prairies, the Red River and the Canadian River are important resources for farming, ranching, public drinking water, hydroelectric power, and recreation. Both originate in New Mexico and flow eastward, their waters eventually discharging into the Mississippi River.

    A smoke plume to the north of the Ouachita Mountains and east of Lake Eufaula is visible in the detailed nadir imagery. The plume is also very obvious at the 70-degree forward view angle, to the right of center and about one-fourth of the way down from the top of the image.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  19. Use of MEMs and optical sensors for closed loop heliostat control

    NASA Astrophysics Data System (ADS)

    Harper, Paul Julian; Dreijer, Janto; Malan, Karel; Larmuth, James; Gauche, Paul

    2016-05-01

The Helio 100 project at STERG (Stellenbosch Solar Thermal Research Group) aims to help reduce the cost of concentrated solar thermal plants by deploying large numbers of small (1 x 2 m) low-cost heliostats. One of the methods employed to reduce the cost of the heliostat field is to have a field that requires no site preparation (grading, leveling, vegetation clearance) and no expensive foundations or concrete pouring for each individual heliostat base. This implies that the heliostat pod frames and vertical mounts might be slightly out of vertical, so the normal method of dead reckoning using accurately surveyed and aligned heliostat bases cannot be used. This paper describes a combination of MEMS and optical sensors on the back of the heliostat that, together with a simple machine learning approach, gives accurate and reproducible azimuth and elevation information for the heliostat plane. Initial experiments were done with an Android phone mounted on the back of a heliostat, as it was a readily available platform combining accelerometers and a camera in one programmable package. It proved quite easy to determine the pointing angle of the heliostat to within 1 milliradian using the rear-facing camera and correlating known heliostat angles with target image features on the ground. We also tested the accuracy at various image resolutions by successively halving the image size until the feature detection failed; even a VGA (640x480) resolution image could give mean errors of 1.5 milliradians. The optical technique is exceedingly simple and does not use any camera calibration, angular reconstruction, or knowledge of the heliostat drive geometry. We also tested the ability of the 3D accelerometers to determine the angle, but this was coarser than the camera, accurate only to around 10 milliradians.
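    A sketch of the simple learning step described, fitting a linear (affine) map from tracked ground-feature pixel positions to heliostat azimuth and elevation; feature detection itself is assumed done elsewhere.

    ```python
    # Fit an affine map [x, y, 1] -> [azimuth, elevation] from training
    # pairs of feature pixel positions and known heliostat angles.
    import numpy as np

    def fit_pointing_model(feature_xy, angles_azel):
        # feature_xy: (N, 2) pixel positions; angles_azel: (N, 2) angles.
        A = np.hstack([feature_xy, np.ones((len(feature_xy), 1))])
        coeffs, *_ = np.linalg.lstsq(A, angles_azel, rcond=None)
        return coeffs  # shape (3, 2)

    def predict(coeffs, xy):
        return np.array([xy[0], xy[1], 1.0]) @ coeffs
    ```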

  20. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera captures images of the objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can understand traffic conditions earlier. To capture images of distant objects clearly, the focal length of the camera must be long; but with a long focal length, an on-vehicle camera does not have a sufficient field of view to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must have a long focal length and a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image that was displayed to the driver. However, the driver used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, to avoid disturbing driving. The proposed system is composed of a gaze detector and an active telephoto camera with a controlled shooting direction. We adopt a non-wearable detection method to avoid hindering driving; the gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. We confirmed by experiments that the proposed system captures images in the direction the subject is gazing.

  1. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

The use of action cameras for photogrammetry purposes is not widespread, because until recently the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, we must apply a careful and reliable self-calibration prior to any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens that is used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12-Mp and video at up to 8-Mp resolution. PMID:25237898

  2. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

The use of action cameras for photogrammetry purposes is not widespread, because until recently the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, we must apply a careful and reliable self-calibration prior to any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens that is used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12-Mp and video at up to 8-Mp resolution.
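    As a rough sketch of this kind of OpenCV-based self-calibration (not the authors' software; the board size and file paths are placeholders), a pinhole-model chessboard calibration looks like the following. For very wide-angle lenses such as the GoPro's, OpenCV's fisheye model (cv2.fisheye.calibrate) may fit better.

    ```python
    # Chessboard self-calibration with OpenCV, then undistortion.
    import cv2
    import glob
    import numpy as np

    pattern = (9, 6)  # inner corners of the board (placeholder)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib/*.jpg"):  # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]

    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    undistorted = cv2.undistort(cv2.imread("calib/scene.jpg"), K, dist)
    ```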

  3. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  4. Clementine Observes the Moon, Solar Corona, and Venus

    NASA Image and Video Library

    1999-06-12

In 1994, during its flight, NASA's Clementine spacecraft returned images of the Moon. In addition to the geologic mapping cameras, the Clementine spacecraft also carried two Star Tracker cameras for navigation. These lightweight (0.3 kg) cameras kept the spacecraft on track by constantly observing the positions of stars, reminiscent of the age-old seafaring tradition of sextant/star navigation. These navigation cameras were also used to take some spectacular wide-angle images of the Moon. In this picture the Moon is seen illuminated solely by light reflected from the Earth: Earthshine! The bright glow on the lunar horizon is caused by light from the solar corona; the Sun is just behind the lunar limb. Caught in this image is the planet Venus at the top of the frame. http://photojournal.jpl.nasa.gov/catalog/PIA00434

  5. Preclinical imaging of iridocorneal angle and fundus using a modified integrated flexible handheld probe

    PubMed Central

    Hong, Xun Jie Jeesmond; Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke Matham; Baskaran, Mani; Aung, Tin

    2017-01-01

A flexible handheld imaging probe consisting of a 3 mm × 3 mm charge-coupled device camera, light-emitting diode light sources, and a near-infrared laser source is designed and developed. The imaging probe is designed with specifications to capture iridocorneal angle images and posterior segment images. Light propagation from the anterior chamber of the eye to the exterior is considered analytically using Snell's law. Imaging of the iridocorneal angle region and fundus is performed on ex vivo porcine samples and subsequently on small laboratory animals, such as the New Zealand white rabbit and a nonhuman primate, in vivo. The integrated flexible handheld probe demonstrates high repeatability in iridocorneal angle and fundus documentation. The proposed concept and methodology are expected to find potential application in the diagnosis, prognosis, and management of glaucoma. PMID:28413809
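    For context, an editor's illustration (not from the paper): assuming a corneal refractive index of about 1.376 and air outside, Snell's law gives the critical angle at the cornea-air interface as

    ```latex
    % Critical angle at the cornea-air interface (Snell's law),
    % assuming n_cornea ~ 1.376 and n_air = 1:
    \theta_c = \arcsin\!\left(\frac{n_{\mathrm{air}}}{n_{\mathrm{cornea}}}\right)
             = \arcsin\!\left(\frac{1}{1.376}\right) \approx 46.6^{\circ}
    ```

    so rays from the angle region that strike the cornea beyond this angle are totally internally reflected, which is why direct external viewing of the iridocorneal angle is difficult.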

  6. Multi-viewer tracking integral imaging system and its viewing zone analysis.

    PubMed

    Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho

    2009-09-28

We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles of each elemental lens in the lens array are decided by the positions of the viewers, which means the elemental image can be made for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the viewers' positions and the elemental images. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.

  7. Up Close to Mimas

    NASA Technical Reports Server (NTRS)

    2005-01-01

During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft's narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

This is a narrow-angle, clear-filter image, processed to enhance the contrast in brightness and the sharpness of visible features.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of this image.

This image was obtained when the Cassini spacecraft was above 25 degrees south latitude, 134 degrees west longitude. The Sun-Mimas-spacecraft angle was 45 degrees, and north is at the top.

    The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.

    For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .

  8. Dual-illumination mode, wide-field probe imaging scheme for imaging irido-corneal angle region inside eye

    NASA Astrophysics Data System (ADS)

    Shinoj, V. K.; Murukeshan, V. M.; Hong, Jesmond; Baskaran, M.; Aung, Tin

    2015-07-01

Noninvasive medical imaging techniques have generated great interest and shown high potential in the research and development of ocular imaging and follow-up procedures. It is well known that angle-closure glaucoma is one of the major ocular diseases/conditions that cause blindness, and its identification and treatment rely primarily on angle assessment techniques. In this paper, we illustrate a probe-based imaging approach to obtain images of the angle region of the eye. The proposed probe consists of a micro CCD camera and LED/NIR laser light sources, configured at the distal end to enable imaging of the iridocorneal region inside the eye. With this proposed dual-modal probe, imaging is performed under light (white visible LED on) and dark (NIR laser source alone) conditions, and the angle region is discernible in both cases. Imaging with NIR sources is of major significance for anterior chamber imaging, since it avoids the pupil constriction caused by bright light and thereby the artificial alteration of the anterior chamber angle. The proposed methodology and developed scheme are expected to find potential application in glaucoma detection and diagnosis.

  9. In-Flight performance of MESSENGER's Mercury dual imaging system

    USGS Publications Warehouse

    Hawkins, S.E.; Murchie, S.L.; Becker, K.J.; Selby, C.M.; Turner, F.S.; Noble, M.W.; Chabot, N.L.; Choo, T.H.; Darlington, E.H.; Denevi, B.W.; Domingue, D.L.; Ernst, C.M.; Holsclaw, G.M.; Laslo, N.R.; Mcclintock, W.E.; Prockter, L.M.; Robinson, M.S.; Solomon, S.C.; Sterner, R.E.

    2009-01-01

The Mercury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft, launched in August 2004 and planned for insertion into orbit around Mercury in 2011, has already completed two flybys of the innermost planet. The Mercury Dual Imaging System (MDIS) acquired nearly 2500 images from the first two flybys and viewed portions of Mercury's surface not viewed by Mariner 10 in 1974-1975. Mercury's proximity to the Sun and its slow rotation present challenges to the thermal design for a camera on an orbital mission around Mercury. In addition, strict limitations on spacecraft pointing and the highly elliptical orbit create challenges in attaining coverage at desired geometries and relatively uniform spatial resolution. The instrument designed to meet these challenges consists of dual imagers, a monochrome narrow-angle camera (NAC) with a 1.5° field of view (FOV) and a multispectral wide-angle camera (WAC) with a 10.5° FOV, co-aligned on a pivoting platform. The focal-plane electronics of each camera are identical and use a 1024 × 1024 charge-coupled device detector. The cameras are passively cooled but use diode heat pipes and phase-change-material thermal reservoirs to maintain the thermal configuration during the hot portions of the orbit. Here we present an overview of the instrument design and how the design meets its technical challenges. We also review results from the first two flybys, discuss the quality of MDIS data from the initial periods of data acquisition and how that compares with requirements, and summarize how in-flight tests are being used to improve the quality of the instrument calibration. © 2009 SPIE.

  10. Docking alignment system

    NASA Technical Reports Server (NTRS)

    Monford, Leo G. (Inventor)

    1990-01-01

    Improved techniques are provided for alignment of two objects. The present invention is particularly suited for three-dimensional translation and three-dimensional rotational alignment of objects in outer space. A camera 18 is fixedly mounted to one object, such as a remote manipulator arm 10 of the spacecraft, while the planar reflective surface 30 is fixed to the other object, such as a grapple fixture 20. A monitor 50 displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm 10 manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.

  11. Improved docking alignment system

    NASA Technical Reports Server (NTRS)

    Monford, Leo G. (Inventor)

    1988-01-01

Improved techniques are provided for the alignment of two objects. The present invention is particularly suited for 3-D translation and 3-D rotational alignment of objects in outer space. A camera is affixed to one object, such as a remote manipulator arm of the spacecraft, while the planar reflective surface is affixed to the other object, such as a grapple fixture. A monitor displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.

  12. Evaluation of modified portable digital camera for screening of diabetic retinopathy.

    PubMed

    Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi

    2009-01-01

To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to 2.4x, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60 degrees) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique for acquiring fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.

  13. Angle of sky light polarization derived from digital images of the sky under various conditions.

    PubMed

    Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Yang, Yi; Ning, Yu

    2017-01-20

Skylight polarization is used for navigation by some birds and insects, and it also has potential for human navigation applications. Its advantages include relative immunity from interference and the absence of error accumulation over time. However, there are presently few practical applications of polarization navigation technology, mainly because of its weak robustness under cloudy weather conditions. In this paper, real-time measurement of the skylight polarization pattern across the sky is achieved with a wide-field-of-view camera. The images were processed under a new reference coordinate system to clearly display the symmetrical distribution of the angle of polarization with respect to the solar meridian. A new algorithm for extracting the image's axis of symmetry is proposed, in which the real-time azimuth angle between the camera and the solar meridian is accurately calculated. Our experimental results under different weather conditions show that polarization navigation has high accuracy, is strongly robust, and performs well in fog and haze, clouds, and strong sunlight.
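    The per-pixel angle of polarization itself is conventionally obtained from Stokes parameters measured with four polarizer orientations; a sketch of that standard step follows (the paper's reference-frame transform and symmetry-axis algorithm are not reproduced here).

    ```python
    # Standard per-pixel angle of polarization from four images taken
    # through polarizers at 0, 45, 90, and 135 degrees.
    import numpy as np

    def angle_of_polarization(i0, i45, i90, i135):
        q = i0.astype(float) - i90    # Stokes Q
        u = i45.astype(float) - i135  # Stokes U
        return 0.5 * np.arctan2(u, q)  # radians
    ```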

  14. The Effect of Selected Cinemagraphic Elements on Audience Perception of Mediated Concepts.

    ERIC Educational Resources Information Center

    Orr, Quinn

This study explores cinemagraphic and visual elements and their interrelations through a reinterpretation of previous research and literature. The cinemagraphic elements of visual images (camera angle, camera motion, subject motion, color, and lighting) work as a language requiring a proper grammar for the messages to be conveyed in their…

  15. Thermal Texture Selection and Correction for Building Facade Inspection Based on Thermal Radiant Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.-G.

    2018-05-01

An automatic building façade thermal texture mapping approach, using uncooled thermal camera data, is proposed in this paper. First, a shutter-less radiometric thermal camera calibration method is implemented to remove the large offset deviations caused by the changing ambient environment. Then, a 3D façade model is generated from an RGB image sequence using structure-from-motion (SfM) techniques. Subsequently, for each triangle in the 3D model, the optimal texture is selected by taking into consideration the local image scale, object incident angle, and image viewing angle, as well as occlusions. Afterwards, the selected textures can be further corrected using thermal radiant characteristics. Finally, the Gauss filter outperforms the voted-texture strategy at smoothing the seams, thus helping, for instance, to reduce the false alarm rate in façade thermal leakage detection. Our approach is evaluated on a row of building façades located in Dresden, Germany.

  16. SU-F-J-200: An Improved Method for Event Selection in Compton Camera Imaging for Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackin, D; Beddar, S; Polf, J

    2016-06-15

Purpose: The uncertainty in the beam range in particle therapy limits the conformality of the dose distributions. Compton scatter cameras (CC), which measure the prompt gamma rays produced by nuclear interactions in the patient tissue, can reduce this uncertainty by producing 3D images confirming the particle beam range and dose delivery. However, the high intensity and short time windows of the particle beams limit the number of gammas detected. We attempt to address this problem by developing a method for filtering gamma-ray scattering events from the background by applying the known gamma-ray spectrum. Methods: We used a 4-stage Compton camera to record in list mode the energy deposition and scatter positions of gammas from a Co-60 source. Each CC stage contained a 4×4 array of CdZnTe crystals. To produce images, we used a back-projection algorithm and four filtering methods: basic, energy windowing, delta energy (ΔE), and delta scattering angle (Δθ). Basic filtering requires events to be physically consistent. Energy windowing requires the event energy to fall within a defined range. ΔE filtering selects events with the minimum difference between the measured and a known gamma energy (1.17 and 1.33 MeV for Co-60). Δθ filtering selects events with the minimum difference between the measured scattering angle and the angle corresponding to a known gamma energy. Results: Energy window filtering reduced the FWHM from 197.8 mm for basic filtering to 78.3 mm. ΔE and Δθ filtering achieved the best results, with FWHMs of 64.3 and 55.6 mm, respectively. In general, Δθ filtering selected events with scattering angles < 40°, while ΔE filtering selected events with angles > 60°. Conclusion: Filtering CC events improved the quality and resolution of the corresponding images. ΔE and Δθ filtering produced similar results, but each favored different events.
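    A schematic of the two selection criteria, based on the abstract and standard Compton kinematics (an illustration assuming two-interaction events, not the authors' code):

    ```python
    # DeltaE and DeltaTheta event scores for a Co-60 source.
    import numpy as np

    ME_C2 = 0.511            # electron rest energy, MeV
    LINES = (1.17, 1.33)     # known Co-60 gamma energies, MeV

    def kinematic_cos_theta(e0, e1):
        # Compton kinematics: e1 deposited in the first scatter of a
        # photon with total energy e0 (two-interaction event assumed).
        return 1.0 - ME_C2 * (1.0 / (e0 - e1) - 1.0 / e0)

    def delta_e(e_total):
        # Distance of the measured total energy from the nearest line.
        return min(abs(e_total - line) for line in LINES)

    def delta_theta(e_total, e1):
        # Angle from measured energies vs. angle assuming a known line.
        c_meas = np.clip(kinematic_cos_theta(e_total, e1), -1, 1)
        diffs = []
        for line in LINES:
            c_line = np.clip(kinematic_cos_theta(line, e1), -1, 1)
            diffs.append(abs(np.arccos(c_meas) - np.arccos(c_line)))
        return min(diffs)
    ```

    Events minimizing these scores would be kept; the real list-mode data from a four-stage camera needs additional consistency checks.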

  17. Reflective Filters Design for Self-Filtering Narrowband Ultraviolet Imaging Experiment Wide-Field Surveys (NUVIEWS) Project

    NASA Technical Reports Server (NTRS)

    Park, Jung- Ho; Kim, Jongmin; Zukic, Muamer; Torr, Douglas G.

    1994-01-01

We report the design of multilayer reflective filters for the self-filtering cameras of the NUVIEWS project. Wide-angle self-filtering cameras were designed to image the C IV (154.9 nm) line emission and H2 Lyman band fluorescence (centered at 161 nm) over a 20 deg x 30 deg field of view. A key element of the filter design is the development of pi-multilayers optimized to provide maximum reflectance at 154.9 nm and 161 nm for the respective cameras, without significant spectral sensitivity to the large cone angle of the incident radiation. We applied self-filtering concepts to design NUVIEWS telescope filters that are composed of three reflective mirrors and one folding mirror. The filters, with narrow bandwidths of 6 and 8 nm at 154.9 and 161 nm, respectively, have net throughputs of more than 50% with average blocking of out-of-band wavelengths better than 3 × 10⁻⁴%.

  18. Dust mass distribution around comet 67P/Churyumov-Gerasimenko determined via parallax measurements using Rosetta's OSIRIS cameras

    NASA Astrophysics Data System (ADS)

    Ott, T.; Drolshagen, E.; Koschny, D.; Güttler, C.; Tubiana, C.; Frattin, E.; Agarwal, J.; Sierks, H.; Bertini, I.; Barbieri, C.; Lamy, P. I.; Rodrigo, R.; Rickman, H.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Boudreault, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; De Cecco, M.; Deller, J.; Feller, C.; Fornasier, S.; Fulle, M.; Geiger, B.; Gicquel, A.; Groussin, O.; Gutiérrez, P. J.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Keller, H. U.; Knollenberg, J.; Kovacs, G.; Kramm, J. R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Lin, Z.-Y.; López-Moreno, J. J.; Marzari, F.; Mottola, S.; Naletto, G.; Oklay, N.; Pajola, M.; Shi, X.; Thomas, N.; Vincent, J.-B.; Poppe, B.

    2017-07-01

The OSIRIS (optical, spectroscopic and infrared remote imaging system) instrument on board the ESA Rosetta spacecraft collected data of 67P/Churyumov-Gerasimenko for over 2 yr. OSIRIS consists of two cameras, a Narrow Angle Camera and a Wide Angle Camera. For specific imaging sequences related to the observation of dust aggregates in 67P's coma, the two cameras operated simultaneously. The two cameras are mounted 0.7 m apart from each other; this baseline yields a parallax shift of the apparent particle trails on the analysed images that is inversely proportional to the particles' distance. Thanks to such shifts, the distance between the observed dust aggregates and the spacecraft was determined. This method works for particles closer than 6000 m to the spacecraft and requires very few assumptions. We found over 250 particles in a suitable distance range, with sizes of some centimetres, masses in the range of 10⁻⁶ to 10² kg, and a mean velocity of about 2.4 m s⁻¹ relative to the nucleus. Furthermore, the spectral slope was analysed, showing a decrease in the median spectral slope of the particles with time. The further a particle is from the spacecraft, the fainter is its signal; this was counterbalanced by debiasing. Moreover, the dust mass-loss rate of the nucleus could be computed, as well as the Afρ of the comet around perihelion. The summed-up dust mass-loss rate for the mass bins 10⁻⁴ to 10² kg is almost 8300 kg s⁻¹.
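    The geometry reduces to a small-angle estimate: a particle at distance d seen over a baseline b shifts by a parallax angle of about b/d. A back-of-envelope sketch (the per-pixel instantaneous field of view is an assumed camera constant, not a published OSIRIS value):

    ```python
    # Distance from parallax: two co-aligned cameras a baseline apart see a
    # nearby particle shifted by a small angle, so distance ~ b / angle.
    def parallax_distance(baseline_m, shift_px, ifov_rad_per_px):
        alpha = shift_px * ifov_rad_per_px  # parallax angle, radians
        return baseline_m / alpha           # small-angle approximation

    # E.g., with the 0.7 m baseline, a 20-px shift at an assumed
    # 1e-4 rad/px would place the particle about 350 m away.
    d = parallax_distance(0.7, 20, 1e-4)
    ```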

  19. The Days Dwindle Down to a Precious Few

    NASA Image and Video Library

    2015-04-27

This image, located just inside the southern rim of Chong Chol crater, was obtained on April 25, 2015, the day following NASA MESSENGER's final orbital correction maneuver. The spacecraft's fuel tanks are now completely empty, and there is no means to prevent the Sun's gravity from pulling MESSENGER's orbit closer and closer to the surface of Mercury. Impact is expected to occur on April 30, 2015. Chong Chol crater is named for a Korean poet of the 1500s. It is challenging to obtain good images when the spacecraft is very low above the planet, because of the high speed at which the camera's field of view moves across the surface. Very short exposure times are used to limit smear, and this image was binned from its original size of 1024 x 1024 pixels to 512 x 512 to improve the image quality. The title of today's image is a line from "September Song" (composed by Kurt Weill, with lyrics by Maxwell Anderson; the song was subsequently covered by artists including Ian McCulloch of Echo & the Bunnymen, Lou Reed, and Bryan Ferry). Date acquired: April 25, 2015 Image Mission Elapsed Time (MET): 72264694 Image ID: 8392292 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 45.43° N Center Longitude: 298.62° E Resolution: 2.1 meters/pixel Scale: The scene is about 2.1 km (1.3 miles) across. This image has not been map projected. Incidence Angle: 69.9° Emission Angle: 20.1° Phase Angle: 90.0° http://photojournal.jpl.nasa.gov/catalog/PIA19436

  20. MESSENGER Final Image

    NASA Image and Video Library

    2015-04-30

Today, the MESSENGER spacecraft sent its final image. Originally planned to orbit Mercury for one year, the mission exceeded all expectations, lasting for over four years and acquiring extensive datasets with its seven scientific instruments and radio science investigation. This afternoon, the spacecraft succumbed to the pull of solar gravity and impacted Mercury's surface. The image shown here is the last one acquired and transmitted back to Earth by the mission. The image is located within the floor of the 93-kilometer-diameter crater Jokai. The spacecraft struck the planet just north of Shakespeare basin. Date acquired: April 30, 2015 Image Mission Elapsed Time (MET): 72716050 Image ID: 8422953 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 72.0° Center Longitude: 223.8° E Resolution: 2.1 meters/pixel Scale: This image is about 1 kilometer (0.6 miles) across Incidence Angle: 57.9° Emission Angle: 56.5° Phase Angle: 40.7° http://photojournal.jpl.nasa.gov/catalog/PIA19448

  1. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect of the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the scene luminance more objectively; this overcomes the limitation of stitching methods that produce realistic-looking images only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
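    A sketch of applying such calibration products before blending; the gain, dark level, and vignetting map are assumed per-camera calibration outputs, and a linear response range is assumed.

    ```python
    # Recover relative scene luminance from a raw frame: subtract the dark
    # level, invert the (linear-range) radiometric gain, and divide out a
    # normalized vignetting falloff map (1.0 at the optical center).
    import numpy as np

    def to_luminance(raw, dark, gain, vignette):
        signal = raw.astype(float) - dark   # remove dark current/offset
        return (signal / gain) / vignette   # radiometric + vignetting fix
    ```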

  2. An Imaging System for Satellite Hypervelocity Impact Debris Characterization

    NASA Astrophysics Data System (ADS)

    Moraguez, M.; Liou, J.; Fitz-Coy, N.; Patankar, K.; Cowardin, H.

    This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.

  3. An Imaging System for Satellite Hypervelocity Impact Debris Characterization

    NASA Technical Reports Server (NTRS)

    Moraguez, Matthew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Cowardin, Heather

    2015-01-01

    This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
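    A minimal silhouette-carving sketch in the spirit of the space carving step described in both records above; the projection matrices and binary silhouette masks are assumed to come from the calibrated rig.

    ```python
    # Keep a voxel only if it projects inside the object silhouette in
    # every view; the surviving voxels approximate the fragment's shape.
    import numpy as np

    def carve(voxels_xyz, projections, silhouettes):
        # voxels_xyz: (N, 3); projections: list of 3x4 matrices;
        # silhouettes: list of 2D boolean masks, one per view.
        keep = np.ones(len(voxels_xyz), dtype=bool)
        homog = np.hstack([voxels_xyz, np.ones((len(voxels_xyz), 1))])
        for P, mask in zip(projections, silhouettes):
            uvw = homog @ P.T
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            h, w = mask.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            keep &= inside                       # off-image -> carved
            keep[inside] &= mask[v[inside], u[inside]]
        return voxels_xyz[keep]
    ```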

  4. High-speed and high-resolution quantitative phase imaging with digital-micromirror device-based illumination (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie; Jin, Di; Yaqoob, Zahid; So, Peter T. C.

    2017-02-01

Due to the large number of available mirrors, their patterning speed, low cost, and compactness, digital micromirror devices (DMDs) have been extensively used in biomedical imaging systems. Recently, DMDs have been brought to the quantitative phase microscopy (QPM) field to achieve synthetic-aperture imaging and tomographic imaging. Last year, our group demonstrated using a DMD for QPM, where the phase retrieval is based on a recently developed Fourier ptychography algorithm. In our previous system, the illumination angle was varied by coding the aperture plane of the illumination system, which makes inefficient use of the laser power. In our new DMD-based QPM system, we use Lee holograms, conjugate to the sample plane, to change the illumination angles with much higher power efficiency. Multiple-angle illumination can also be achieved with this method. With this versatile system, we can achieve FPM-based high-resolution phase imaging with 250-nm lateral resolution by the Rayleigh criterion. Due to the use of a powerful laser, the imaging speed is limited only by the camera acquisition speed. With a fast camera, we expect to achieve close to 100-fps phase imaging, a speed that has not been achieved in current FPM imaging systems. By adding a reference beam, we also expect to achieve synthetic-aperture imaging while directly measuring the phase of the sample fields. This would reduce the phase-retrieval processing time and allow for real-time imaging applications in the future.
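    A sketch of generating a binary Lee hologram that steers the illumination: a plane-wave tilt phase is encoded on a carrier grating and binarized, so the first diffraction order leaves at an angle set by the encoded spatial frequencies. The carrier and tilt values are placeholders, not the authors' parameters.

    ```python
    # Binary Lee hologram for a DMD: encode a tilted plane-wave phase on a
    # carrier grating and threshold to mirror on/off states.
    import numpy as np

    def lee_hologram(shape, carrier=0.25, fx=0.02, fy=0.01):
        # carrier, fx, fy in cycles per mirror (pixel); placeholders.
        y, x = np.mgrid[0:shape[0], 0:shape[1]]
        phase = 2 * np.pi * (fx * x + fy * y)       # desired tilt phase
        pattern = np.cos(2 * np.pi * carrier * x - phase)
        return (pattern > 0).astype(np.uint8)       # binary DMD frame
    ```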

  5. Airborne Sea of Dust over China

    NASA Technical Reports Server (NTRS)

    2002-01-01

Dust covered northern China in the last week of March during some of the worst dust storms to hit the region in a decade. The dust obscuring China's Inner Mongolian and Shanxi Provinces on March 24, 2002, is compared with a relatively clear day (October 31, 2001) in these images from the Multi-angle Imaging SpectroRadiometer's vertical-viewing (nadir) camera aboard NASA's Terra satellite. Each image represents an area of about 380 by 630 kilometers (236 by 391 miles). In the image from late March, shown on the right, wave patterns in the yellowish cloud liken the storm to an airborne ocean of dust. The veil of particulates obscures features on the surface north of the Yellow River (visible in the lower left). The area shown lies near the edge of the Gobi desert, a few hundred kilometers, or miles, west of Beijing. Dust originates from the desert and travels east across northern China toward the Pacific Ocean. For especially severe storms, fine particles can travel as far as North America. The Multi-angle Imaging SpectroRadiometer, built and managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., is one of five Earth-observing instruments aboard the Terra satellite, launched in December 1999. The instrument acquires images of Earth at nine angles simultaneously, using nine separate cameras pointed forward, downward and backward along its flight path. The change in reflection at different view angles affords the means to distinguish different types of atmospheric particles, cloud forms and land surface covers. Image courtesy NASA/GSFC/LaRC/JPL, MISR Team

  6. Clouds over Tharsis

    NASA Technical Reports Server (NTRS)

    1998-01-01

Color composite of condensate clouds over Tharsis, made from red and blue images with a synthesized green channel. Mars Orbiter Camera wide-angle frames from Orbit 48.

    Figure caption from Science Magazine

  7. Rhea and Her Craters

    NASA Image and Video Library

    2005-01-17

    This Cassini image shows predominantly the impact-scarred leading hemisphere of Saturn's icy moon Rhea (1,528 kilometers, or 949 miles across). The image was taken in visible light with the Cassini spacecraft narrow angle camera on Dec. 12, 2004, at a distance of 2 million kilometers (1.2 million miles) from Rhea and at a Sun-Rhea-spacecraft, or phase, angle of 30 degrees. The image scale is about 12 kilometers (7.5 miles) per pixel. The image has been magnified by a factor of two and contrast enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06564

  8. Low-altitude photographic transects of the Arctic Network of National Park Units and Selawik National Wildlife Refuge, Alaska, July 2013

    USGS Publications Warehouse

    Marcot, Bruce G.; Jorgenson, M. Torre; DeGange, Anthony R.

    2014-01-01

5. A Canon® Rebel 3Ti with a Sigma zoom lens (18–200 mm focal length). The Drift® HD-170 and GoPro® Hero3 cameras were secured to the struts and underwing for nadir (direct downward) imaging. The Panasonic® and Canon® cameras were each hand-held for oblique-angle landscape images, shooting through the airplanes' windows, targeting both general landscape conditions and landscape features of special interest, such as tundra fire scars and landslips. The Drift® and GoPro® cameras were each set for time-lapse photography at 5-second intervals for overlapping coverage. Photographs from all cameras (100 percent .jpg format) were date- and time-synchronized to Global Positioning System waypoints taken during the flights, also at 5-second intervals, providing precise geotagging (latitude-longitude) of all files. All photographs were adjusted for color saturation and gamma, and nadir photographs were corrected for the 170° wide-angle lens distortion of the Drift® and GoPro® cameras. EXIF (exchangeable image file format) data on camera settings and geotagging were extracted into spreadsheet databases. An additional 1 hour, 20 minutes, and 43 seconds of high-resolution video was recorded at 60 frames per second with the GoPro® camera along selected transect segments, and was likewise image-adjusted and corrected for lens distortion. Geotagged locations of 12,395 nadir photographs from the Drift® and GoPro® cameras were overlaid in a geographic information system (ArcMap 10.0) onto a map of 44 ecotypes (land- and water-cover types) of the Arctic Network study area. The presence and area of each ecotype occurring within a geographic information system window centered on the location of each photograph were recorded and included in the spreadsheet databases. All original and adjusted photographs, videos, GPS flight tracks, and photograph databases are available by contacting ascweb@usgs.gov.

  9. Evaluation of Timepix3 based CdTe photon counting detector for fully spectroscopic small animal SPECT imaging

    NASA Astrophysics Data System (ADS)

    Trojanova, E.; Jakubek, J.; Turecek, D.; Sykora, V.; Francova, P.; Kolarova, V.; Sefc, L.

    2018-01-01

The imaging method of SPECT (Single Photon Emission Computed Tomography) is used in nuclear medicine for the diagnosis of various diseases and organ malfunctions. The distribution of medically injected, inhaled, or ingested radionuclides (radiotracers) in the patient's body is imaged using a gamma-ray-sensitive camera with a suitable imaging collimator. The 3D image is then calculated by combining many images taken from different observation angles. Most SPECT systems use scintillator-based cameras. These cameras do not provide good energy resolution and do not allow efficient suppression of unwanted signals such as those caused by Compton scattering. The main goal of this work is the evaluation of Timepix3 detector properties for the SPECT method for functional imaging of small animals during preclinical studies. Advantageous Timepix3 properties such as energy and spatial resolution are exploited for significant image quality improvement. Preliminary measurements were performed on a specially prepared plastic phantom with cavities filled with radioisotopes and then repeated with an in vivo mouse sample.

  10. Imaging Dot Patterns for Measuring Gossamer Space Structures

    NASA Technical Reports Server (NTRS)

    Dorrington, A. A.; Danehy, P. M.; Jones, T. W.; Pappa, R. S.; Connell, J. W.

    2005-01-01

    A paper describes a photogrammetric method for measuring the changing shape of a gossamer (membrane) structure deployed in outer space. Such a structure is typified by a solar sail comprising a transparent polymeric membrane aluminized on its Sun-facing side and coated black on the opposite side. Unlike some prior photogrammetric methods, this method does not require an artificial light source or the attachment of retroreflectors to the gossamer structure. In a basic version of the method, the membrane contains a fluorescent dye, and the front and back coats are removed in matching patterns of dots. The dye in the dots absorbs some sunlight and fluoresces at a longer wavelength in all directions, thereby enabling acquisition of high-contrast images from almost any viewing angle. The fluorescent dots are observed by one or more electronic camera(s) on the Sun side, the shade side, or both sides. Filters that pass the fluorescent light and suppress most of the solar spectrum are placed in front of the camera(s) to increase the contrast of the dots against the background. The dot image(s) in the camera(s) are digitized, then processed by use of commercially available photogrammetric software.

  11. Wild 2 Close Look

    NASA Technical Reports Server (NTRS)

    2004-01-01

[Figure 1 removed for brevity; see original site]

This image shows the comet Wild 2, which NASA's Stardust spacecraft flew by on Jan. 2, 2004. This image is the closest short exposure of the comet, taken at an 11.4-degree phase angle, the angle between the camera, comet, and the Sun. The listed names on the diagram (see Figure 1) are those used by the Stardust team to identify features. 'Basin' does not imply an impact origin.

  12. Spherical visual system for real-time virtual reality and surveillance

    NASA Astrophysics Data System (ADS)

    Chen, Su-Shing

    1998-12-01

A spherical visual system has been developed for full-field, web-based surveillance, virtual reality, and roundtable video conferencing. The hardware is a CycloVision parabolic lens mounted on a video camera. The software was developed at the University of Missouri-Columbia. The mathematical model was developed by Su-Shing Chen and Michael Penna in the 1980s. The parabolic image, capturing the full (360-degree) hemispherical field of view (except the north pole), is transformed into the spherical model of Chen and Penna. In the spherical model, images are invariant under the rotation group and are easily mapped to the image plane tangent to any point on the sphere. The projected image is exactly what a conventional camera would produce at that angle. Thus a real-time, full-spherical-field video camera is realized by using two parabolic lenses.
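    The mapping "to the image plane tangent to any point on the sphere" is, in standard terms, a gnomonic (perspective) projection; a sketch under that assumption:

    ```python
    # Gnomonic projection of spherical directions (lat, lon) onto the
    # plane tangent at a chosen view direction (lat0, lon0); all angles
    # in radians, outputs in tangent-plane units.
    import numpy as np

    def gnomonic(lat, lon, lat0, lon0):
        cos_c = (np.sin(lat0) * np.sin(lat)
                 + np.cos(lat0) * np.cos(lat) * np.cos(lon - lon0))
        x = np.cos(lat) * np.sin(lon - lon0) / cos_c
        y = (np.cos(lat0) * np.sin(lat)
             - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0)) / cos_c
        return x, y
    ```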

  13. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

We present a miniaturized version of a fundus camera, designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard board case, and the camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP; telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability and digital imaging.

  14. Digital video system for on-line portal verification

    NASA Astrophysics Data System (ADS)

    Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott

    1990-07-01

    A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.

  15. Payload topography camera of Chang'e-3

    NASA Astrophysics Data System (ADS)

    Yu, Guo-Bin; Liu, En-Hai; Zhao, Ru-Jin; Zhong, Jie; Zhou, Xiang-Dong; Zhou, Wu-Lin; Wang, Jin; Chen, Yuan-Pei; Hao, Yong-Jie

    2015-11-01

    Chang'e-3 was China's first soft-landing lunar probe that achieved a successful roving exploration on the Moon. A topography camera functioning as the lander's “eye” was one of the main scientific payloads installed on the lander. It was composed of a camera probe, an electronic component that performed image compression, and a cable assembly. Its exploration mission was to obtain optical images of the lunar topography in the landing zone for investigation and research. It also observed rover movement on the lunar surface and finished taking pictures of the lander and rover. After starting up successfully, the topography camera obtained static images and video of rover movement from different directions, 360° panoramic pictures of the lunar surface around the lander from multiple angles, and numerous pictures of the Earth. All images of the rover, lunar surface, and the Earth were clear, and those of the Chinese national flag were recorded in true color. This paper describes the exploration mission, system design, working principle, quality assessment of image compression, and color correction of the topography camera. Finally, test results from the lunar surface are provided to serve as a reference for scientific data processing and application.

  16. Neptune in False Color

    NASA Image and Video Library

    1996-01-29

    In this false color image of Neptune, objects that are deep in the atmosphere are blue, while those at higher altitudes are white. The image was taken by Voyager 2's wide-angle camera through an orange filter and two different methane filters. http://photojournal.jpl.nasa.gov/catalog/PIA00051

  17. An Unusual View: MISR sees the Moon

    NASA Image and Video Library

    2017-08-17

    The job of the Multiangle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite is to view Earth. For more than 17 years, its nine cameras have stared downward 24 hours a day, faithfully collecting images used to study Earth's surface and atmosphere. On August 5, however, MISR captured some very unusual data as the Terra satellite performed a backflip in space. This maneuver was performed to allow MISR and the other instruments on Terra to catch a glimpse of the Moon, something that has been done only once before, in 2003. Why task an elderly satellite with such a radical maneuver? Since we can be confident that the Moon's brightness has remained very constant over the mission, MISR's images of the Moon can be used as a check of the instrument's calibration, allowing an independent verification of the procedures used to correct the images for any changes the cameras have experienced over their many years in space. If changes in the cameras' responses to light aren't properly accounted for, the images captured by MISR would make it appear as if Earth were growing darker or lighter, which would throw off scientists' efforts to characterize air pollution, cloud cover and Earth's climate. Because of this, the MISR team uses several methods to calibrate the data, all of which involve imaging something with a known (or independently measured) brightness and correcting the images to match that brightness. Every month, MISR views two onboard panels of a special material called Spectralon, which reflects sunlight in a very particular way. Periodically, this calibration is checked by a field team who measures the brightness of a flat, uniformly colored surface on Earth, usually a dry desert lakebed, as MISR flies overhead. The lunar maneuver offers a third opportunity to check the brightness calibration of MISR's images. While viewing Earth, MISR's cameras are fixed at nine different angles, with one (called An) pointed straight down, four canted forward (Af, Bf, Cf, and Df) and four angled backward (Aa, Ba, Ca, and Da). The A, B, C, and D cameras have different focal lengths, with the most oblique (D) cameras having the longest focal lengths in order to preserve spatial resolution on the ground. During the lunar maneuver, however, the spacecraft rotated so that each camera saw the almost-full Moon straight on. This means that the different focal lengths produce images with different resolutions; the D cameras produce the sharpest images. These grayscale images were made with raw data from the red spectral band of each camera. Because the spacecraft was constantly rotating while these images were taken, the images are "smeared" in the vertical direction, producing an oval-shaped Moon. These have been corrected to restore the Moon to its true circular shape. https://photojournal.jpl.nasa.gov/catalog/PIA21876

  18. GF-7 Imaging Simulation and Dsm Accuracy Estimate

    NASA Astrophysics Data System (ADS)

    Yue, Q.; Tang, X.; Gao, X.

    2017-05-01

    GF-7 is a two-line-array stereo imaging satellite for surveying and mapping which will be launched in 2018. Its resolution is about 0.8 m at the subastral point, corresponding to a swath width of about 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for the simulation; that is, instead of using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the differences in geometry and radiation at different looking angles. The drawback is that geometric error is introduced by two factors: the difference in looking angles between the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used not only as the reference DSM for estimating the accuracy of the DSM generated from the simulated GF-7 stereo images, but also as "ground truth" for establishing the correspondence between WorldView-2 image points and simulated image points. Static MTF was simulated on the instantaneous focal-plane "image" by filtering. SNR was simulated in the electronic sense: the digital value of each WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance was converted to an electron count n according to the physical parameters of the GF-7 camera, and a noise electron count was drawn as a random number between -√n and √n. The overall electron counts accumulated by the TDI CCD were summed and converted to the digital values of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies, and initial phases were used as attitude curves, and geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. Finally, an accuracy estimate was made for the DSM generated from the simulated images.
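
    The SNR recipe described above can be illustrated with a short sketch; the gain, TDI-stage, and full-well constants below are placeholders, not GF-7 parameters:

      # Radiance -> electron count n, noise uniform in [-sqrt(n), +sqrt(n)],
      # TDI summation, then quantization to digital numbers, as in the abstract.
      import numpy as np

      def simulate_tdi_dn(radiance, stages=48, electrons_per_radiance=500.0,
                          full_well=100000.0, bits=10, seed=0):
          rng = np.random.default_rng(seed)
          n = np.asarray(radiance, dtype=float) * electrons_per_radiance
          noise = rng.uniform(-np.sqrt(n), np.sqrt(n))
          total = np.clip((n + noise) * stages, 0, full_well)
          return np.round(total / full_well * (2**bits - 1))   # simulated DN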

  19. Clouds over Tharsis

    NASA Image and Video Library

    1998-03-13

    Color composite of condensate clouds over Tharsis made from red and blue images with a synthesized green channel. Mars Orbiter Camera wide angle frames from Orbit 48. http://photojournal.jpl.nasa.gov/catalog/PIA00812

  20. Addressing challenges of modulation transfer function measurement with fisheye lens cameras

    NASA Astrophysics Data System (ADS)

    Deegan, Brian M.; Denny, Patrick E.; Zlokolica, Vladimir; Dever, Barry; Russell, Laura

    2015-03-01

    Modulation transfer function (MTF) is a well-defined and accepted method of measuring image sharpness. The slanted-edge test defined in ISO 12233 is a standard method of calculating MTF and is widely used for lens alignment and auto-focus algorithm verification. However, a number of challenges should be considered when measuring MTF in cameras with fisheye lenses. Due to trade-offs related to Petzval curvature, flatness of the image plane is difficult to achieve in fisheye lenses. It is therefore critical to be able to measure sharpness accurately throughout the entire image, particularly for lens alignment. One challenge for fisheye lenses is that, because of the radial distortion, the slanted edges will have different angles depending on their location within the image and on the distortion profile of the lens. Previous work in the literature indicates that MTF measurements are robust for angles between 2 and 10 degrees; outside of this range, MTF measurements become unreliable. Also, the slanted edge itself will be curved by the lens distortion, causing further measurement problems. This study summarises the difficulties in the use of MTF for sharpness measurement in fisheye lens cameras, and proposes mitigations and alternative methods.
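
    A stripped-down version of the slanted-edge computation (the full ISO 12233 procedure additionally bins an oversampled edge profile and applies windowing) can be sketched as:

      # Sketch: edge-spread function -> line-spread function -> MTF via FFT.
      import numpy as np

      def mtf_from_esf(esf):
          lsf = np.gradient(esf)            # derivative of the edge profile
          lsf = lsf / lsf.sum()             # normalize area to unity
          mtf = np.abs(np.fft.rfft(lsf))
          return mtf / mtf[0]               # modulation, normalized at DC

      x = np.linspace(-5, 5, 201)
      esf = 0.5 * (1 + np.tanh(x / 0.8))    # synthetic blurred edge
      print(mtf_from_esf(esf)[:5])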

  1. STS-31 crew activity on the middeck of the Earth-orbiting Discovery, OV-103

    NASA Image and Video Library

    1990-04-29

    STS031-05-002 (24-29 April 1990) --- A 35mm camera with a "fish eye" lens captured this high angle image on Discovery's middeck. Astronaut Kathryn D. Sullivan works with the IMAX camera in the foreground, while Astronaut Steven A. Hawley consults a checklist in the corner. An Arriflex motion picture camera records a student ion arc experiment in apparatus mounted on a stowage locker. The experiment was the project of Gregory S. Peterson, currently a student at Utah State University.

  2. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
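
    A minimal triangulation sketch consistent with this description (the numbers are illustrative, not from the patent): with the laser offset a known baseline from the camera and projected parallel to the optical axis, the spot's pixel displacement gives the range.

      def range_from_spot(pixel_offset, focal_px, baseline_m):
          """Z = baseline * f / x for a laser beam parallel to the optical axis."""
          return baseline_m * focal_px / pixel_offset

      # A spot 40 px from the principal point, f = 800 px, 10 cm baseline:
      print(range_from_spot(40.0, 800.0, 0.10))   # -> 2.0 m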

  3. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusion, in which an object is in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting the changes caused by the objects' movement across frames in time, both in each individual video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously; this is applicable, for example, in wireless sensor networks for surveillance or navigation.
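
    A toy version of the change-detection step (real systems add the per-camera registration, perspective adjustment, and path heuristics described above):

      import numpy as np

      def motion_mask(frame_prev, frame_curr, thresh=25):
          """Boolean mask of pixels that changed between time-aligned frames."""
          diff = np.abs(frame_curr.astype(int) - frame_prev.astype(int))
          return diff > thresh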

  4. Up Close and Personal

    NASA Image and Video Library

    2014-05-08

    This image is one of the highest-resolution MDIS observations to date! Many craters of varying degradation states are visible, as well as gentle terrain undulations. Very short exposure times are needed to make these low-altitude observations while the spacecraft is moving quickly over the surface; thus the images are slightly noisier than typical MDIS images. This image was acquired as a high-resolution targeted observation. Targeted observations are images of a small area on Mercury's surface at resolutions much higher than the 200-meter/pixel morphology base map. It is not possible to cover all of Mercury's surface at this high resolution, but typically several areas of high scientific interest are imaged in this mode each week. Date acquired: March 15, 2014 Image Mission Elapsed Time (MET): 37173522 Image ID: 5936740 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 71.91° Center Longitude: 232.7° E Resolution: 5 meters/pixel Scale: The image is approximately 8.3 km (5.2 mi.) across. Incidence Angle: 79.4° Emission Angle: 4.0° Phase Angle: 83.4° http://photojournal.jpl.nasa.gov/catalog/PIA18370

  5. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a matched band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and is strongly robust against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  6. Measurement of an Evaporating Drop on a Reflective Substrate

    NASA Technical Reports Server (NTRS)

    Chao, David F.; Zhang, Nengli

    2004-01-01

    A figure depicts an apparatus that simultaneously records magnified ordinary top-view video images and laser shadowgraph video images of a sessile drop on a flat, horizontal substrate that can be opaque or translucent and is at least partially specularly reflective. The diameter, contact angle, and rate of evaporation of the drop as functions of time can be calculated from the apparent diameters of the drop in sequences of the images acquired at known time intervals, and the shadowgrams that contain flow patterns indicative of thermocapillary convection (if any) within the drop. These time-dependent parameters and flow patterns are important for understanding the physical processes involved in the spreading and evaporation of drops. The apparatus includes a source of white light and a laser (both omitted from the figure), which are used to form the ordinary image and the shadowgram, respectively. Charge-coupled-device (CCD) camera 1 (with zoom) acquires the ordinary video images, while CCD camera 2 acquires the shadowgrams. With respect to the portion of laser light specularly reflected from the substrate, the drop acts as a plano-convex lens, focusing the laser beam to a shadowgram on the projection screen in front of CCD camera 2. The equations for calculating the diameter, contact angle, and rate of evaporation of the drop are readily derived on the basis of Snell's law of refraction and the geometry of the optics.
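
    The report does not reproduce the equations; the relations below are the standard spherical-cap approximation for small sessile drops, not the authors' shadowgram-based derivation:

      import numpy as np

      def spherical_cap(contact_radius, apex_height):
          """Contact angle (deg) and volume of a spherical-cap sessile drop."""
          theta = 2 * np.arctan2(apex_height, contact_radius)
          volume = np.pi * apex_height * (3 * contact_radius**2 + apex_height**2) / 6
          return np.degrees(theta), volume

      print(spherical_cap(1.0e-3, 0.3e-3))   # 1 mm contact radius, 0.3 mm high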

  7. Height and Motion of the Chikurachki Eruption Plume

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The height and motion of the ash and gas plume from the April 22, 2003, eruption of the Chikurachki volcano is portrayed in these views from the Multi-angle Imaging SpectroRadiometer (MISR). Situated within the northern portion of the volcanically active Kuril Island group, the Chikurachki volcano is an active stratovolcano on Russia's Paramushir Island (just south of the Kamchatka Peninsula).

    In the upper panel of the still image pair, this scene is displayed as a natural-color view from MISR's vertical-viewing (nadir) camera. The white and brownish-grey plume streaks several hundred kilometers from the eastern edge of Paramushir Island toward the southeast. The darker areas of the plume typically indicate volcanic ash, while the white portions of the plume indicate entrained water droplets and ice. According to the Kamchatkan Volcanic Eruptions Response Team (KVERT), the temperature of the plume near the volcano on April 22 was -12 °C.

    The lower panel shows heights derived from automated stereoscopic processing of MISR's multi-angle imagery, in which the plume is determined to reach heights of about 2.5 kilometers above sea level. Heights for clouds above and below the eruption plume were also retrieved, including the high-altitude cirrus clouds in the lower left (orange pixels). The distinctive patterns of these features provide sufficient spatial contrast for MISR's stereo height retrieval to perform automated feature matching between the images acquired at different view angles. Places where clouds or other factors precluded a height retrieval are shown in dark gray.

    The multi-angle 'fly-over' animation (below) allows the motion of the plume and of the surrounding clouds to be directly observed. The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the view from the 70-degree backward camera.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17776. The panels cover an area of approximately 296 kilometers x 216 kilometers (still images) and 185 kilometers x 154 kilometers (animation), and utilize data from blocks 50 to 51 within World Reference System-2 path 100.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.


  8. Laser-Induced-Fluorescence Photogrammetry and Videogrammetry

    NASA Technical Reports Server (NTRS)

    Danehy, Paul; Jones, Tom; Connell, John; Belvin, Keith; Watson, Kent

    2004-01-01

    An improved method of dot-projection photogrammetry and an extension of the method to encompass dot-projection videogrammetry overcome some deficiencies of dot-projection photogrammetry as previously practiced. The improved method makes it possible to perform dot-projection photogrammetry or videogrammetry on targets that have previously not been amenable to dot-projection photogrammetry because they do not scatter enough light. Such targets include ones that are transparent, specularly reflective, or dark. In standard dot-projection photogrammetry, multiple beams of white light are projected onto the surface of an object of interest (denoted the target) to form a known pattern of bright dots. The illuminated surface is imaged in one or more cameras oriented at a nonzero angle or angles with respect to a central axis of the illuminating beams. The locations of the dots in the image(s) contain stereoscopic information on the locations of the dots, and, hence, on the location, shape, and orientation of the illuminated surface of the target. The images are digitized and processed to extract this information. Hardware and software to implement standard dot-projection photogrammetry are commercially available. Success in dot-projection photogrammetry depends on achieving sufficient signal-to-noise ratios: that is, it depends on scattering of enough light by the target so that the dots as imaged in the camera(s) stand out clearly against the ambient-illumination component of the image of the target. In one technique used previously to increase the signal-to-noise ratio, the target is illuminated by intense, pulsed laser light and the light entering the camera(s) is band-pass filtered at the laser wavelength. Unfortunately, speckle caused by the coherence of the laser light engenders apparent movement in the projected dots, thereby giving rise to errors in the measurement of the centroids of the dots and corresponding errors in the computed shape and location of the surface of the target. The improved method is denoted laser-induced-fluorescence photogrammetry.

  9. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  10. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  11. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised: mast camera frames are in general not parallel to the masthead base frame, and the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
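
    For orientation, a toy closed-form pan/tilt solution under the very idealizations the paper removes (camera frame parallel to the masthead base frame, optical axis through the image center); the paper's exact solutions account for both effects:

      import numpy as np

      def point_camera(target_xyz):
          """Pan/tilt (rad) that put the target on an idealized optical axis."""
          x, y, z = target_xyz
          pan = np.arctan2(y, x)
          tilt = np.arctan2(z, np.hypot(x, y))
          return pan, tilt

      print(np.degrees(point_camera((2.0, 1.0, 0.5))))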

  12. Verification of image orthorectification techniques for low-cost geometric inspection of masonry arch bridges

    NASA Astrophysics Data System (ADS)

    González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro

    2012-07-01

    A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (for 95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.

  13. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Kondo, Yasushi

    2008-11-01

    Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from casings, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used for tests in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken by the camera with the lens directly coupled to the camera head. This confirmed that the system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualization at angles that would be unachievable under normal circumstances.

  14. Tomorrow

    NASA Image and Video Library

    2015-04-29

    This image from the MESSENGER spacecraft covers a small area located about 115 km south of the center of Mansart crater. The smallest craters visible in the image are about the size of the 16-meter (52-foot) crater that will be made by the impact of the MESSENGER spacecraft. The impact will take place tomorrow, April 30, 2015. Just left of center is a crater that is about 80 meters in diameter. The bright area on its right wall may be an outcrop of hollows material. Date acquired: April 28, 2015 Image Mission Elapsed Time (MET): 72505530 Image ID: 8408666 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 69.8° N Center Longitude: 303.7° E Resolution: 2.0 meters/pixel Scale: The scene is about 1 km (0.6 miles) wide. This image has not been map projected. Incidence Angle: 79.0° Emission Angle: 11.0° Phase Angle: 90.0° http://photojournal.jpl.nasa.gov/catalog/PIA19442

  15. Sunlit Terraces

    NASA Image and Video Library

    2015-02-09

    The exterior of this unnamed crater is in shadow, while the inner wall and terraces bask in the sunshine. Terraces form just after the crater has been excavated, when oversteepened slopes slump back down. This image was acquired as part of the MDIS low-altitude imaging campaign. During MESSENGER's second extended mission, the spacecraft makes a progressively closer approach to Mercury's surface than at any previous point in the mission, enabling the acquisition of high-spatial-resolution data. For spacecraft altitudes below 350 kilometers, NAC images are acquired with pixel scales ranging from 20 meters to as little as 2 meters. Date acquired: January 23, 2015 Image Mission Elapsed Time (MET): 64352478 Image ID: 7849599 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 31.48° Center Longitude: 81.89° E Resolution: 6 meters/pixel Scale: This scene is approximately 6.3 km (3.9 miles) from top to bottom Incidence Angle: 82.6° Emission Angle: 0.1° Phase Angle: 82.7° http://photojournal.jpl.nasa.gov/catalog/PIA19196

  16. Integrated flexible handheld probe for imaging and evaluation of iridocorneal angle

    NASA Astrophysics Data System (ADS)

    Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke Matham; Baskaran, Mani; Aung, Tin

    2015-01-01

    An imaging probe is designed and developed by integrating a miniaturized charge-coupled device (CCD) camera and a light-emitting diode (LED) light source, which enables evaluation of the iridocorneal region inside the eye. The efficiency of the prototype probe instrument is illustrated using not only eye models but also samples such as pig eyes. The proposed methodology and developed scheme are expected to find potential application in iridocorneal angle documentation, glaucoma diagnosis, and follow-up management procedures.

  17. Clinical trials of the prototype Rutherford Appleton Laboratory MWPC positron camera at the Royal Marsden Hospital

    NASA Astrophysics Data System (ADS)

    Flower, M. A.; Ott, R. J.; Webb, S.; Leach, M. O.; Marsden, P. K.; Clack, R.; Khan, O.; Batty, V.; McCready, V. R.; Bateman, J. E.

    1988-06-01

    Two clinical trials of the prototype RAL multiwire proportional chamber (MWPC) positron camera were carried out prior to the development of a clinical system with large-area detectors. During the first clinical trial, the patient studies included skeletal imaging using 18F, imaging of brain glucose metabolism using 18F FDG, bone marrow imaging using 52Fe citrate and thyroid imaging with Na 124I. Longitudinal tomograms were produced from the limited-angle data acquisition from the static detectors. During the second clinical trial, transaxial, coronal and sagittal images were produced from the multiview data acquisition. A more detailed thyroid study was performed in which the volume of the functioning thyroid tissue was obtained from the 3D PET image and this volume was used in estimating the radiation dose achieved during radioiodine therapy of patients with thyrotoxicosis. Despite the small field of view of the prototype camera, and the use of smaller than usual amounts of activity administered, the PET images were in most cases comparable with, and in a few cases visually better than, the equivalent planar view using a state-of-the-art gamma camera with a large field of view and routine radiopharmaceuticals.

  18. Person re-identification over camera networks using multi-task distance metric learning.

    PubMed

    Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng

    2014-08-01

    Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric for person reidentification tasks on different camera pairs overlooks the differences in camera settings. On the other hand, it is very time-consuming to label people manually in images from surveillance videos; for example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras. Directly learning a unique Mahalanobis distance metric for each camera pair is therefore susceptible to over-fitting on such insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and are learned with a joint regularization that alleviates over-fitting. Furthermore, by extending this formulation, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
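
    For reference, the Mahalanobis distance at the core of such methods; in the multitask setting each camera pair gets its own jointly regularized matrix M (a sketch, not the paper's learning code):

      import numpy as np

      def mahalanobis(x, y, M):
          """Distance under metric M, which must be positive semidefinite."""
          d = x - y
          return float(np.sqrt(d @ M @ d))

      rng = np.random.default_rng(0)
      A = rng.normal(size=(4, 4))
      M = A @ A.T                      # PSD by construction
      x, y = rng.normal(size=4), rng.normal(size=4)
      print(mahalanobis(x, y, M))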

  19. Preliminary results on photometric properties of materials at the Sagan Memorial Station, Mars

    USGS Publications Warehouse

    Johnson, J. R.; Kirk, R.; Soderblom, L.A.; Gaddis, L.; Reid, R.J.; Britt, D.T.; Smith, P.; Lemmon, M.; Thomas, N.; Bell, J.F.; Bridges, N.T.; Anderson, R.; Herkenhoff, K. E.; Maki, J.; Murchie, S.; Dummel, A.; Jaumann, R.; Trauthan, F.; Arnold, G.

    1999-01-01

    Reflectance measurements of selected rocks and soils over a wide range of illumination geometries obtained by the Imager for Mars Pathfinder (IMP) camera provide constraints on interpretations of the physical and mineralogical nature of geologic materials at the landing site. The data sets consist of (1) three small "photometric spot" subframed scenes, covering phase angles from 20° to 150°; (2) two image strips composed of three subframed images each, located along the antisunrise and antisunset lines (photometric equator), covering phase angles from ~0° to 155°; and (3) full-image scenes of the rock "Yogi," covering phase angles from 48° to 100°. Phase functions extracted from calibrated data exhibit a dominantly backscattering photometric function, consistent with the results from the Viking lander cameras. However, forward scattering behavior does appear at phase angles >140°, particularly for the darker gray rock surfaces. Preliminary efforts using a Hapke scattering model are useful in comparing surface properties of different rock and soil types but are not well constrained, possibly due to the incomplete phase angle availability, uncertainties related to the photometric function of the calibration targets, and/or the competing effects of diffuse and direct lighting. Preliminary interpretations of the derived Hapke parameters suggest that (1) red rocks can be modeled as a mixture of gray rocks with a coating of bright and dark soil or dust, and (2) gray rocks have macroscopically smoother surfaces composed of microscopically homogeneous, clear materials with little internal scattering, which may imply a glass-like or varnished surface. Copyright 1999 by the American Geophysical Union.

  20. Mars Daily Global Image from April 1999

    NASA Image and Video Library

    2000-09-08

    Twelve orbits a day provide NASA's Mars Global Surveyor MOC wide-angle cameras a global snapshot of weather patterns across the planet. Here, bluish-white water ice clouds hang above the Tharsis volcanoes.

  1. Neptune Great Dark Spot in High Resolution

    NASA Image and Video Library

    1999-08-30

    This photograph shows the last face-on view of the Great Dark Spot that Voyager will make with the narrow-angle camera. The image was shuttered 45 hours before closest approach, at a distance of 2.8 million kilometers (1.7 million miles). The smallest structures that can be seen are on the order of 50 kilometers (31 miles). The image shows feathery white clouds that overlie the boundary of the dark and light blue regions. The pinwheel (spiral) structure of both the dark boundary and the white cirrus suggests a storm system rotating counterclockwise. Periodic small-scale patterns in the white cloud, possibly waves, are short-lived and do not persist from one Neptunian rotation to the next. This color composite was made from the clear and green filters of the narrow-angle camera. http://photojournal.jpl.nasa.gov/catalog/PIA00052

  2. Camera calibration for multidirectional flame chemiluminescence tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

    Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record projections with multiple cameras from different view angles. High-accuracy reconstruction requires that the extrinsic parameters (positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method is presented for FCT, and a 3-D calibration pattern was designed to solve for the parameters. The precision of the method was evaluated by reprojecting feature points to the cameras using the calibration results; the maximum root-mean-square error is 1.42 pixels for the feature points' positions and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the cameras' calibration results.
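
    The reprojection check described above amounts to projecting known 3-D feature points through the calibrated model and measuring the pixel residuals; a distortion-free pinhole sketch (not the paper's modified calibration method):

      import numpy as np

      def reprojection_rms(K, R, t, pts3d, pts2d):
          """RMS pixel error of 3-D points projected by a pinhole model."""
          cam = pts3d @ R.T + t              # world -> camera coordinates
          proj = cam @ K.T                   # apply intrinsic matrix
          proj = proj[:, :2] / proj[:, 2:3]  # perspective divide
          return np.sqrt(np.mean(np.sum((proj - pts2d) ** 2, axis=1)))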

  3. Multi-layer Clouds Over the South Indian Ocean

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The complex structure and beauty of polar clouds are highlighted by these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 23, 2003. These clouds occur at multiple altitudes and exhibit a noticeable cyclonic circulation over the Southern Indian Ocean, to the north of Enderbyland, East Antarctica.

    The image at left was created by overlaying a natural-color view from MISR's downward-pointing (nadir) camera with a color-coded stereo height field. MISR retrieves heights by a pattern recognition algorithm that utilizes multiple view angles to derive cloud height and motion. The opacity of the height field was then reduced until the field appears as a translucent wash over the natural-color image. The resulting purple, cyan and green hues of this aesthetic display indicate low, medium or high altitudes, respectively, with heights ranging from less than 2 kilometers (purple) to about 8 kilometers (green). In the lower right corner, the edge of the Antarctic coastline and some sea ice can be seen through some thin, high cirrus clouds.

    The right-hand panel is a natural-color image from MISR's 70-degree backward viewing camera. This camera looks backwards along the path of Terra's flight, and in the southern hemisphere the Sun is in front of this camera. This perspective causes the cloud-tops to be brightly outlined by the sun behind them, and enhances the shadows cast by clouds with significant vertical structure. An oblique observation angle also enhances the reflection of light by atmospheric particles, and accentuates the appearance of polar clouds. The dark ocean and sea ice that were apparent through the cirrus clouds at the bottom right corner of the nadir image are overwhelmed by the brightness of these clouds at the oblique view.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17794. The panels cover an area of 335 kilometers x 605 kilometers, and utilize data from blocks 142 to 145 within World Reference System-2 path 155.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  4. Apollo 8 Mission image

    NASA Image and Video Library

    1968-12-21

    Apollo 8,Moon, Latitude 15 degrees South,Longitude 170 degrees West. Camera Tilt Mode: High Oblique. Direction: Southeast. Sun Angle 17 degrees. Original Film Magazine was labeled E. Camera Data: 70mm Hasselblad; F-Stop: F-5.6; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White,ASA 40. Other Photographic Coverage: Lunar Orbiter 1 (LO I) S-3. Flight Date: December 21-27,1968.

  5. Mars Image Collection Mosaic Builder

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Hare, Trent

    2008-01-01

    A computer program assembles images from the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) collection to generate a uniform, high-resolution, georeferenced, uncontrolled mosaic image of the Martian surface. At the time of reporting the information for this article, the mosaic covered 7 percent of the Martian surface and contained data from more than 50,000 source images acquired under various lighting conditions at various resolutions.

  6. Imminent Approach to Dione

    NASA Image and Video Library

    2015-08-20

    This view from NASA's Cassini spacecraft looks toward Saturn's icy moon Dione, with giant Saturn and its rings in the background, just prior to the mission's final close approach to the moon on August 17, 2015. At lower right is the large, multi-ringed impact basin named Evander, which is about 220 miles (350 kilometers) wide. The canyons of Padua Chasma, features that form part of Dione's bright, wispy terrain, reach into the darkness at left. Imaging scientists combined nine visible light (clear spectral filter) images to create this mosaic view: eight from the narrow-angle camera and one from the wide-angle camera, which fills in an area at lower left. The scene is an orthographic projection centered on terrain at 0.2 degrees north latitude, 179 degrees west longitude on Dione. An orthographic view is most like the view seen by a distant observer looking through a telescope. North on Dione is up. The view was acquired at distances ranging from approximately 106,000 miles (170,000 kilometers) to 39,000 miles (63,000 kilometers) from Dione and at a sun-Dione-spacecraft, or phase, angle of 35 degrees. Image scale is about 1,500 feet (450 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19650

  7. A LEGO Mindstorms Brewster angle microscope

    NASA Astrophysics Data System (ADS)

    Fernsler, Jonathan; Nguyen, Vincent; Wallum, Alison; Benz, Nicholas; Hamlin, Matthew; Pilgram, Jessica; Vanderpoel, Hunter; Lau, Ryan

    2017-09-01

    A Brewster angle microscope (BAM) built from a LEGO Mindstorms kit, additional LEGO bricks, and several standard optics components is described. The BAM was built as part of an undergraduate senior project and was designed, calibrated, and used to image phospholipid, cholesterol, soap, and oil films on the surface of water. A BAM uses p-polarized laser light reflected off a surface at the Brewster angle, which ideally yields zero reflectivity. When a film with a different refractive index is added to the surface, a small amount of light is reflected, which can be imaged with a microscope camera. Films only one molecule thick (approximately 1 nm), monolayers, can be observed easily in the BAM. The BAM was used in a junior-level Physical Chemistry class to observe phase transitions of a monolayer and the collapse of a monolayer deposited on the water surface in a Langmuir trough. Using a photometric calculation, students observed a 7 Å change in the thickness of a monolayer during a phase transition, accurate to within 1 Å of the value determined by more advanced methods. As supplementary material, we provide a detailed manual on how to build the BAM, software to control the BAM and camera, and image processing software.
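
    The instrument's operating point follows from the Brewster condition theta_B = arctan(n2/n1), at which p-polarized reflectivity ideally vanishes; for an air-water surface:

      import numpy as np
      print(np.degrees(np.arctan(1.333 / 1.0)))   # Brewster angle, ~53.1 deg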

  8. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A keyhole imaging method based on a camera array is realized to obtain video imagery from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to image the scene behind the keyhole from four directions, and the multi-angle video images are saved as frame sequences. This paper presents a method of video frame alignment. To remove the non-target area outside the aperture, we use the Canny operator and morphological operations to detect the image edges and fill the images. The stitching of the four images is built on a two-image stitching algorithm: the SIFT method accomplishes the initial matching of the images, and the RANSAC algorithm then eliminates wrong matching points and yields a homography matrix. A method of optimizing the transformation matrix is also proposed. Finally, a video image with a larger field of view behind the keyhole is synthesized from the frame sequences in which every single frame has been stitched. The results show that the video is clear and natural and the brightness transitions are smooth; there are no obvious stitching artifacts in the video, and the method can be applied in different engineering environments.
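
    A condensed sketch of the two-image stitching pipeline outlined above (SIFT matching, ratio test, RANSAC homography, warp); the paper's transformation-matrix optimization and four-camera frame alignment are omitted:

      import cv2
      import numpy as np

      def stitch_pair(img_a, img_b):
          sift = cv2.SIFT_create()
          ka, da = sift.detectAndCompute(img_a, None)
          kb, db = sift.detectAndCompute(img_b, None)
          matches = cv2.BFMatcher().knnMatch(da, db, k=2)
          good = [m for m, n in matches if m.distance < 0.75 * n.distance]
          src = np.float32([ka[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kb[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # drop outliers
          h, w = img_b.shape[:2]
          mosaic = cv2.warpPerspective(img_a, H, (2 * w, h))    # warp a into b
          mosaic[:, :w] = img_b                                 # naive composite
          return mosaic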

  9. Full-parallax 3D display from stereo-hybrid 3D camera system

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach for the production of the microimages ready to display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system, which picks up a 3D data pair and composes a denser point cloud. An intrinsic difficulty, however, is that the hybrid sensors are dissimilar and therefore must be equalized. The processed data facilitate the generation of an integral image after computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages of enhanced quality. After projection of these microimages onto the integral-imaging monitor, 3D images are produced with large parallax and a wide viewing angle.

  10. Solar System Portrait - Views of 6 Planets

    NASA Image and Video Library

    1996-09-13

    These six narrow-angle color images were made from the first-ever portrait of the solar system taken by NASA's Voyager 1, which was more than 4 billion miles from Earth and about 32 degrees above the ecliptic. The spacecraft acquired a total of 60 frames for a mosaic of the solar system which shows six of the planets. Mercury is too close to the sun to be seen. Mars was not detectable by the Voyager cameras due to scattered sunlight in the optics, and Pluto was not included in the mosaic because of its small size and distance from the sun. These blown-up images, left to right and top to bottom, are Venus, Earth, Jupiter, Saturn, Uranus, and Neptune. The background features in the images are artifacts resulting from the magnification. The images were taken through three color filters -- violet, blue and green -- and recombined to produce the color images. Jupiter and Saturn were resolved by the camera, but Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15-second) exposure times. Earth appears to be in a band of light because it coincidentally lies right in the center of the scattered light rays resulting from taking the image so close to the sun. Earth was a crescent only 0.12 pixels in size; Venus was 0.11 pixel in diameter. The planetary images were taken with the narrow-angle camera (1500 mm focal length). http://photojournal.jpl.nasa.gov/catalog/PIA00453

  11. Compensation method for the influence of angle of view on animal temperature measurement using thermal imaging camera combined with depth image.

    PubMed

    Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng

    2016-12-01

    In this study, we propose an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Because of the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant; the proposed method compensates for the temperature measurement error caused by the angle of view. First, we analyzed the relationship between measured temperature and angle of view and established a mathematical model for compensating the influence of the angle of view, with a correlation coefficient above 0.99. Second, a fusion method for depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and the infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at angles of view of 74° to 76° differed by more than 2°C from that measured at an angle of view of 0°; after compensation, the difference was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager. Copyright © 2016 Elsevier Ltd. All rights reserved.
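
    The per-pixel angle of view can be obtained from the registered depth image by estimating surface normals from depth gradients and taking the angle to the camera ray; a sketch (the paper's compensation model itself is not reproduced here):

      import numpy as np

      def view_angles(depth, fx, fy, cx, cy):
          """Per-pixel angle (deg) between surface normal and viewing ray."""
          h, w = depth.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          P = np.stack([(u - cx) * depth / fx,
                        (v - cy) * depth / fy, depth], axis=-1)
          n = np.cross(np.gradient(P, axis=1), np.gradient(P, axis=0))
          n = n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12)
          ray = P / (np.linalg.norm(P, axis=-1, keepdims=True) + 1e-12)
          cosang = np.abs(np.sum(n * ray, axis=-1))
          return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))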

  12. The Day the Earth Smiled: Sneak Preview

    NASA Image and Video Library

    2013-07-22

    In this rare image taken on July 19, 2013, the wide-angle camera on NASA's Cassini spacecraft has captured Saturn's rings and our planet Earth and its moon in the same frame. It is only one footprint in a mosaic of 33 footprints covering the entire Saturn ring system (including Saturn itself). At each footprint, images were taken in different spectral filters for a total of 323 images: some were taken for scientific purposes and some to produce a natural color mosaic. This is the only wide-angle footprint that has the Earth-moon system in it. The dark side of Saturn, its bright limb, the main rings, the F ring, and the G and E rings are clearly seen; the limb of Saturn and the F ring are overexposed. The "breaks" in the brightness of Saturn's limb are due to the shadows of the rings on the globe of Saturn, preventing sunlight from shining through the atmosphere in those regions. The E and G rings have been brightened for better visibility. Earth, which is 898 million miles (1.44 billion kilometers) away in this image, appears as a blue dot at center right; the moon can be seen as a fainter protrusion off its right side. An arrow indicates their location in the annotated version. (The two are clearly seen as separate objects in the accompanying composite image PIA14949.) The other bright dots nearby are stars. This is only the third time ever that Earth has been imaged from the outer solar system. The acquisition of this image, along with the accompanying composite narrow- and wide-angle image of Earth and the moon and the full mosaic from which both are taken, marked the first time that inhabitants of Earth knew in advance that their planet was being imaged. That opportunity allowed people around the world to join together in social events to celebrate the occasion. This view looks toward the unilluminated side of the rings from about 20 degrees below the ring plane. Images taken using red, green and blue spectral filters were combined to create this natural color view. The images were obtained with the Cassini spacecraft wide-angle camera on July 19, 2013 at a distance of approximately 753,000 miles (1.212 million kilometers) from Saturn, and approximately 898.414 million miles (1.445858 billion kilometers) from Earth. Image scale on Saturn is 43 miles (69 kilometers) per pixel; image scale on the Earth is 53,820 miles (86,620 kilometers) per pixel. The illuminated areas of neither Earth nor the Moon are resolved here. Consequently, the size of each "dot" is the same size that a point of light of comparable brightness would have in the wide-angle camera. http://photojournal.jpl.nasa.gov/catalog/PIA17171

  13. Characterization of a compact 6-band multifunctional camera based on patterned spectral filters in the focal plane

    NASA Astrophysics Data System (ADS)

    Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.

    2014-06-01

    In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However if the radiance received from the object varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still the total extent of the filter in the scan direction is small. Therefore the remainder of the image sensor can be used for conventional imaging with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.

  14. ARC-1986-A86-7041

    NASA Image and Video Library

    1986-01-24

    Range: 236,000 km (147,000 mi). Resolution: 33 km (20 mi). P-29525B/W This Voyager 2 image reveals a continuous distribution of small particles throughout the Uranus ring system. This unique geometry, the highest phase angle at which Voyager imaged the rings, allows us to see lanes of fine dust particles not visible from other viewing angles. All the previously known rings are visible; however, some of the brightest features in the image are bright dust lanes not previously seen. The combination of this unique geometry and a long, 96-second exposure allowed this spectacular observation, acquired through the clear filter of Voyager 2's wide-angle camera. The long exposure produced a noticeable, non-uniform smear, as well as streaks due to trailed stars.

  15. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located at a sediment trap close to the alluvial fan apex, one looking upstream and the other looking downstream and more perpendicular to the flow. The third camera is in the next reach upstream from the sediment trap, at closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is triggered by a rain gauge after one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because the images are already orthorectified. During the monitoring program (since 2011) we recorded three debris flow events at the sediment trap area, each with very different surge dynamics. The camera in the gully was in operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
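
    The core LSPIV step is a windowed cross-correlation between consecutive orthorectified frames. Below is a minimal sketch of that kernel with plain integer-pixel correlation; it is not the Fudaa-LSPIV implementation, which adds interrogation-window handling and sub-pixel peak refinement.

        import numpy as np

        def piv_displacement(win_a, win_b):
            """Integer-pixel displacement of window win_b relative to win_a
            via FFT-based cross-correlation (the basic LSPIV building block)."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                                 s=a.shape)
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # displacements beyond half the window wrap around to negative values
            if dy > a.shape[0] // 2:
                dy -= a.shape[0]
            if dx > a.shape[1] // 2:
                dx -= a.shape[1]
            return dy, dx  # multiply by ground pixel size x frame rate for velocity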

  16. Substituting the polarizer mechanism with a polarization camera - an experiment to confirm its capability

    NASA Astrophysics Data System (ADS)

    Reginald, Nelson Leslie; Gopalswamy, Natchimuthuk; Guhathakurta, Madhulika; Yashiro, Seiji

    2016-05-01

    Experiments that require polarized-brightness measurements have traditionally done so by taking three successive images through a polarizer rotated through three well-defined angles. With the advent of the polarization camera, the polarized brightness can be measured from a single image. This also eliminates the need for a polarizer and the associated rotator mechanisms and can contribute to lower weight, size, and power requirements and, importantly, higher temporal resolution. We intend to demonstrate the capabilities of the polarization camera by conducting a field experiment in conjunction with the total solar eclipse of 21 August 2017 using the Imaging Spectrograph of Coronal Electrons (ISCORE) instrument (Reginald et al., Solar Physics, 2009, 260, 347-361). In this instrumental concept, four K-coronal images of the corona through four filters centered at 385.0, 398.7, 410.0, and 423.3 nm with a bandpass of 4 nm are expected to allow us to determine the coronal electron temperature and electron speed all around the corona. In order to determine the K-coronal brightness through each filter, we would have to take three images by rotating a polarizer through three angles for each of the filters, which is not feasible owing to the short durations of total solar eclipses. Therefore, in the past we have assumed the total brightness (F + K) measured by each of the four filters to represent the K-coronal brightness, which is approximately true in the low solar corona. However, with the advent of the polarization camera we can now measure the Stokes polarization parameters on a pixel-by-pixel basis for every image taken by the polarization camera. This allows us to independently quantify the total brightness (K + F) and the polarized brightness (K). In addition to the four filter images that allow us to measure the electron temperature and electron speed, taking an additional image without a filter will give us enough information to determine the electron density. This instrumental concept was first tried in conjunction with the total solar eclipse of 9 March 2016 in Maba, Indonesia, but was unfortunately clouded out.
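
    A division-of-focal-plane polarization camera samples four polarizer orientations (0°, 45°, 90°, 135°) in each 2×2 super-pixel, from which the linear Stokes parameters follow directly. A minimal sketch, assuming the micro-polarizer mosaic has already been demosaicked into four co-registered images:

        import numpy as np

        def stokes_from_polarization_camera(i0, i45, i90, i135):
            """Linear Stokes parameters from the four polarizer-angle images."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total brightness (K + F)
            s1 = i0 - i90
            s2 = i45 - i135
            pb = np.hypot(s1, s2)                # polarized brightness (pB)
            dolp = pb / np.maximum(s0, 1e-12)    # degree of linear polarization
            return s0, s1, s2, pb, dolp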

  17. Camera Trajectory fromWide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible), and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius r of an image point to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames.
Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes the feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low frame rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in prior work, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, the relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling, as in RANSAC, but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in that work, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by the ordered sampling of RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the range of uses of camera trajectory estimates is quite wide. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with a stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
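
    For reference, the back-projection implied by the two-parameter model above is direct: given the calibrated constants a and b and the image center, an image point maps to a unit 3D ray. A minimal sketch (parameter names are illustrative):

        import numpy as np

        def pixel_to_ray(u, v, u0, v0, a, b):
            """Unit 3D ray for image point (u, v) under the two-parameter
            model theta = a*r / (1 + b*r^2); (u0, v0) is the image center."""
            x, y = u - u0, v - v0
            r = np.hypot(x, y)
            if r == 0.0:
                return np.array([0.0, 0.0, 1.0])   # the optical axis itself
            theta = a * r / (1.0 + b * r * r)      # angle from the optical axis
            s = np.sin(theta) / r
            return np.array([x * s, y * s, np.cos(theta)])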

  18. Test Rover at JPL During Preparation for Mars Rover Low-Angle Selfie

    NASA Image and Video Library

    2015-08-19

    This view of a test rover at NASA's Jet Propulsion Laboratory, Pasadena, California, results from advance testing of arm positions and camera pointings for taking a low-angle self-portrait of NASA's Curiosity Mars rover. This rehearsal in California led to a dramatic Aug. 5, 2015, selfie of Curiosity, online at PIA19807. Curiosity's arm-mounted Mars Hand Lens Imager (MAHLI) camera took the 92 component images that were assembled into that mosaic. The rover team positioned the camera lower in relation to the rover body than for any previous full self-portrait of Curiosity. This practice version was taken at JPL's Mars Yard in July 2013, using the Vehicle System Test Bed (VSTB) rover, which has a test copy of MAHLI on its robotic arm. MAHLI was built by Malin Space Science Systems, San Diego. JPL, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19810

  19. Quality Assessment of 3d Reconstruction Using Fisheye and Perspective Sensors

    NASA Astrophysics Data System (ADS)

    Strecha, C.; Zoller, R.; Rutishauser, S.; Brot, B.; Schneider-Zapp, K.; Chovancova, V.; Krull, M.; Glassey, L.

    2015-03-01

    Recent mathematical advances, growing alongside the use of unmanned aerial vehicles, have not only overcome the restriction of roll and pitch angles during flight but also enabled us to apply non-metric cameras in photogrammetric methods, providing more flexibility for sensor selection. Fisheye cameras, for example, advantageously provide images with wide coverage; however, these images are extremely distorted, and their non-uniform resolution makes them more difficult to use for mapping or terrestrial 3D modelling. In this paper, we compare the usability of different camera-lens combinations, using the complete workflow implemented in Pix4Dmapper to achieve the final terrestrial reconstruction result of a well-known historical site in Switzerland: the Chillon Castle. We assess the accuracy of the outcome acquired by consumer cameras with perspective and fisheye lenses, comparing the results to a laser scanner point cloud.

  20. Costless Platform for High Resolution Stereoscopic Images of a High Gothic Facade

    NASA Astrophysics Data System (ADS)

    Héno, R.; Chandelier, L.; Schelstraete, D.

    2012-07-01

    In October 2011, the PPMD specialized master's degree students (Photogrammetry, Positioning and Deformation Measurement) of the French ENSG (IGN's School of Geomatics, the Ecole Nationale des Sciences Géographiques) were asked to survey the main facade of the cathedral of Amiens, which is very complex as far as size and decoration are concerned. Although it was first planned to use a lift truck for the image survey, budget considerations and a taste for experimentation led the project in another direction: images shot from ground level with a long-focal-length camera would be combined with complementary images shot from the galleries accessible higher on the main facade, using a wide-angle camera fixed on a horizontal 2.5-meter-long pole. This heterogeneous image survey is being processed by the PPMD master's degree students during this academic year. Among other types of products, 3D point clouds will be calculated on specific parts of the facade with both sources of images. If the proposed device and methodology for obtaining full image coverage of the main facade prove fruitful, the image acquisition phase will be completed later by another team. This article focuses on the production of 3D point clouds from wide-angle images on the rose window of the main facade.

  1. MESSENGER Reveals Mercury in New Detail

    NASA Image and Video Library

    2008-01-16

    As NASA's MESSENGER spacecraft approached Mercury on January 14, 2008, the Narrow-Angle Camera of the Mercury Dual Imaging System (MDIS) instrument captured this view of the planet's rugged, cratered landscape illuminated obliquely by the Sun.

  2. Still from Red Spot Movie

    NASA Image and Video Library

    2000-11-21

    This image is one of seven from the narrow-angle camera on NASA's Cassini spacecraft assembled as a brief movie of cloud movements on Jupiter. The smallest features visible are about 500 kilometers (300 miles) across.

  3. Image-based path planning for automated virtual colonoscopy navigation

    NASA Astrophysics Data System (ADS)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, some time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.
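
    The steering idea, reduced to its simplest form, is that the farthest smoothed region of the rendered depth image marks the direction in which the lumen continues. A rough sketch of that target-region selection follows; this is not the paper's algorithm, which also extracts safe regions to keep the camera off the colon wall, and the names are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def target_pixel(depth, box=15):
            """Return the pixel at the center of the deepest box-averaged
            region of a rendered depth image; smoothing avoids chasing
            single-pixel depth spikes. The pixel is then mapped through the
            fisheye camera model to a 3D view direction for the next pose."""
            smooth = uniform_filter(depth.astype(float), size=box)
            return np.unravel_index(np.argmax(smooth), smooth.shape)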

  4. The MESSENGER Earth Flyby: Results from the Mercury Dual Imaging System

    NASA Astrophysics Data System (ADS)

    Prockter, L. M.; Murchie, S. L.; Hawkins, S. E.; Robinson, M. S.; Shelton, R. G.; Vaughan, R. M.; Solomon, S. C.

    2005-12-01

    The MESSENGER (MErcury Surface, Space ENvironment, Geochemistry, and Ranging) spacecraft was launched from Cape Canaveral Air Force Station, Fla., on 3 August 2004. It returned to Earth for a gravity assist on 2 August 2005, providing an exceptional opportunity for the Science Team to perform instrument calibrations and to test some of the data acquisition sequences that will be used to meet Mercury science goals. The Mercury Dual Imaging System (MDIS), one of seven science instruments on MESSENGER, consists of a wide-angle and a narrow-angle imager that together can map landforms, track variations in surface color, and carry out stereogrammetry. The two imagers are mounted on a pivot platform that enables the instrument to point in a different direction from the spacecraft boresight, allowing great flexibility and increased imaging coverage. During the week prior to the closest approach to Earth, MDIS acquired a number of images of the Moon for radiometric calibration and to test optical navigation sequences that will be used to target planetary flybys. Twenty-four hours before closest approach, images of the Earth were acquired with 11 filters of the wide-angle camera. After MDIS flew over the nightside of the Earth, additional color images centered on South America were obtained at sufficiently high resolution to discriminate small-scale features such as the Amazon River and Lake Titicaca. During its departure from Earth, MDIS acquired a sequence of images taken in three filters every 4 minutes over a period of 24 hours. These images have been assembled into a movie of a crescent Earth that begins as South America slides across the terminator into darkness and continues for one full Earth rotation. This movie and the other images have provided a successful test of the sequences that will be used during the MESSENGER Mercury flybys in 2008 and 2009 and have demonstrated the high quality of the MDIS wide-angle camera.

  5. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS, in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
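
    The ray-tracing ingredient of such a FOV simulator is vector refraction at each air-housing and housing-water interface. A minimal sketch of Snell's law in vector form; the indices of refraction in the usage comment are illustrative, and a full simulator additionally needs the housing geometry to find the interface normals.

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at an interface with unit normal n
            (pointing back toward the incident medium), going from index n1
            into index n2; returns None on total internal reflection."""
            cos_i = -np.dot(d, n)
            eta = n1 / n2
            k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
            if k < 0.0:
                return None
            return eta * d + (eta * cos_i - np.sqrt(k)) * n

        # e.g. trace a camera ray through a flat acrylic port into water:
        # air (1.00) -> acrylic (~1.49) -> water (~1.33), one refract() per interface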

  6. MISR Instrument Data Visualization

    NASA Technical Reports Server (NTRS)

    Nelson, David; Garay, Michael; Diner, David; Thompson, Charles; Hall, Jeffrey; Rheingans, Brian; Mazzoni, Dominic

    2008-01-01

    The MISR Interactive eXplorer (MINX) software functions both as a general-purpose tool to visualize Multi-angle Imaging SpectroRadiometer (MISR) instrument data, and as a specialized tool to analyze properties of smoke, dust, and volcanic plumes. It includes high-level options to create map views of MISR orbit locations; scrollable, single-camera RGB (red-green-blue) images of MISR level 1B2 (L1B2) radiance data; and animations of the nine MISR camera images that provide a 3D perspective of the scenes that MISR has acquired. The plume height capability provides an accurate estimate of the injection height of plumes that is needed by air quality and climate modelers. MISR provides global high-quality stereo height information, and this program uses that information to perform detailed height retrievals of aerosol plumes. Users can interactively digitize smoke, dust, or volcanic plumes and automatically retrieve heights and winds, and can also archive MISR albedos and aerosol properties, as well as fire power and brightness temperatures associated with smoke plumes derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data. Some of the specialized options in MINX enable the user to do other tasks. Users can display plots of top-of-atmosphere bidirectional reflectance factors (BRFs) versus camera angle for selected pixels. Images and animations can be saved to disk in various formats. Also, users can apply a geometric registration correction to warp camera images when the standard processing correction is inadequate. It is possible to difference the images of two MISR orbits that share a path (identical ground track), as well as to construct pseudo-color images by assigning different combinations of MISR channels (angle or spectral band) to the RGB display channels. This software is an interactive application written in IDL and compiled into an IDL Virtual Machine (VM) ".sav" file.

  7. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which improves the search for stereo correspondences. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities than in the monocular case can be expected. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
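
    For a scalar depth with Gaussian uncertainty, a "Kalman-like" update reduces to an inverse-variance weighted mean. A minimal sketch of that per-pixel fusion step; the paper's full pipeline also handles outliers and propagates estimates between frames.

        def fuse_depth(z1, var1, z2, var2):
            """Fuse two virtual-depth estimates with variances var1, var2.
            The fused variance is always smaller than either input, so each
            additional micro-image observation sharpens the depth pixel."""
            w = var2 / (var1 + var2)            # weight of the first estimate
            z = w * z1 + (1.0 - w) * z2         # inverse-variance weighted mean
            var = var1 * var2 / (var1 + var2)   # combined (harmonic) variance
            return z, var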

  8. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    NASA Astrophysics Data System (ADS)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images of two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
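
    Compton imaging back-projects each event as a cone: the scatter angle follows from the two deposited energies via the Compton formula, and the cone axis from the two interaction positions. A heavily simplified sketch of accumulating one list-mode event onto a voxel grid; it ignores detector response, energy resolution, and sensitivity, and all names are illustrative rather than the authors' method.

        import numpy as np

        ME_C2 = 511.0  # electron rest energy in keV

        def backproject_event(vals, voxels, r1, r2, e1, e2, sigma=0.05):
            """Add one Compton event to the image. voxels: (N, 3) positions;
            r1, r2: scatterer and absorber interaction points; e1, e2:
            deposited energies in keV. Voxels near the event cone get weight."""
            # Compton kinematics: scatter angle from the energy split
            cos_t = 1.0 - ME_C2 * (1.0 / e2 - 1.0 / (e1 + e2))
            axis = r1 - r2
            axis /= np.linalg.norm(axis)           # cone axis, absorber -> scatterer
            d = voxels - r1
            d /= np.linalg.norm(d, axis=1, keepdims=True)
            # Gaussian-blurred cone surface instead of an infinitely thin shell
            vals += np.exp(-(d @ axis - cos_t) ** 2 / (2.0 * sigma ** 2))
            return vals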

  9. Faint F Ring and Prometheus

    NASA Image and Video Library

    2016-11-21

    Surface features are visible on Saturn's moon Prometheus in this view from NASA's Cassini spacecraft. Most of Cassini's images of Prometheus are too distant to resolve individual craters, making views like this a rare treat. Saturn's narrow F ring, which makes a diagonal line beginning at top center, appears bright and bold in some Cassini views, but not here. Since the sun is nearly behind Cassini in this image, most of the light hitting the F ring is being scattered away from the camera, making it appear dim. Light-scattering behavior like this is typical of rings composed of small particles, such as the F ring. This view looks toward the unilluminated side of the rings from about 14 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 24, 2016. The view was acquired at a distance of approximately 226,000 miles (364,000 kilometers) from Prometheus and at a sun-Prometheus-spacecraft, or phase, angle of 51 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20508

  10. Aspects of Voyager photogrammetry

    NASA Technical Reports Server (NTRS)

    Wu, Sherman S. C.; Schafer, Francis J.; Jordan, Raymond; Howington, Annie-Elpis

    1987-01-01

    In January 1986, Voyager 2 took a series of pictures of Uranus and its satellites with the Imaging Science System (ISS) on board the spacecraft. Based on six stereo images from the ISS narrow-angle camera, a topographic map was compiled of the Southern Hemisphere of Miranda, one of Uranus' moons. Assuming a spherical figure, a 20-km surface relief is shown on the map. With three additional images from the ISS wide-angle camera, a control network of Miranda's Southern Hemisphere was established by analytical photogrammetry, producing 88 ground points for the control of multiple-model compilation on the AS-11AM analytical stereoplotter. Digital terrain data from the topographic map of Miranda have also been produced. By combining these data and the image data from the Voyager 2 mission, perspective views or even a movie of the mapped area can be made. The application of these newly developed techniques to Voyager 1 imagery, which includes a few overlapping pictures of Io and Ganymede, permits the compilation of contour maps or topographic profiles of these bodies on the analytical stereoplotters.

  11. The high resolution stereo camera (HRSC): acquisition of multi-spectral 3D-data and photogrammetric processing

    NASA Astrophysics Data System (ADS)

    Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus

    2017-11-01

    At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to the planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated, depending on the camera control settings. A high-end GPS/INS system in combination with the multi-angle image information yields precise, high-frequency orientation data for the acquired image lines. In order to handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.

  12. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George

    1986-01-07

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  13. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J. C.; Rowe, R. Wanda; Zubal, I. George

    1986-01-01

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  14. MISR Views Northern Australia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    MISR images of tropical northern Australia acquired on June 1, 2000 (Terra orbit 2413) during the long dry season. Left: color composite of vertical (nadir) camera blue, green, and red band data. Right: multi-angle composite of red band data only from the cameras viewing 60 degrees aft, 60 degrees forward, and nadir. Color and contrast have been enhanced to accentuate subtle details. In the left image, color variations indicate how different parts of the scene reflect light differently at blue, green, and red wavelengths; in the right image color variations show how these same scene elements reflect light differently at different angles of view. Water appears in blue shades in the right image, for example, because glitter makes the water look brighter at the aft camera's view angle. The prominent inland water body is Lake Argyle, the largest human-made lake in Australia, which supplies water for the Ord River Irrigation Area and the town of Kununurra (pop. 6500) just to the north. At the top is the southern edge of Joseph Bonaparte Gulf; the major inlet at the left is Cambridge Gulf, the location of the town of Wyndham (pop. 850), the port for this region. This area is sparsely populated, and is known for its remote, spectacular mountains and gorges. Visible along much of the coastline are intertidal mudflats of mangroves and low shrubs; to the south the terrain is covered by open woodland merging into open grassland in the lower half of the pictures.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  15. Target Acquisition for Projectile Vision-Based Navigation

    DTIC Science & Technology

    2014-03-01

    [Abstract not available: the indexed excerpt preserves only front-matter fragments of the report, including section headings (Future Work; References; Appendix A, Simulation Results; Appendix B, Derivation of Ground Resolution for a Diffraction-Limited Pinhole Camera) and figure captions (simulation results for visual acquisition and target recognition; Figure B-1, differential object and image areas for a pinhole camera). The surviving text notes that the geometry between projectile and target, measured in terms of a look angle about the x axis, depends on target heading.]

  16. Surveying the Newly Digitized Apollo Metric Images for Highland Fault Scarps on the Moon

    NASA Astrophysics Data System (ADS)

    Williams, N. R.; Pritchard, M. E.; Bell, J. F.; Watters, T. R.; Robinson, M. S.; Lawrence, S.

    2009-12-01

    The presence and distribution of thrust faults on the Moon have major implications for lunar formation and thermal evolution. For example, thermal history models for the Moon imply that most of the lunar interior was initially hot. As the Moon cooled over time, some models predict global-scale thrust faults should form as stress builds from global thermal contraction. Large-scale thrust fault scarps with lengths of hundreds of kilometers and maximum relief of up to a kilometer or more, like those on Mercury, are not found on the Moon; however, relatively small-scale linear and curvilinear lobate scarps with maximum lengths typically around 10 km have been observed in the highlands [Binder and Gunga, Icarus, v63, 1985]. These small-scale scarps are interpreted to be thrust faults formed by contractional stresses with relatively small maximum (tens of meters) displacements on the faults. These narrow, low relief landforms could only be identified in the highest resolution Lunar Orbiter and Apollo Panoramic Camera images and under the most favorable lighting conditions. To date, the global distribution and other properties of lunar lobate faults are not well understood. The recent micron-resolution scanning and digitization of the Apollo Mapping Camera (Metric) photographic negatives [Lawrence et al., NLSI Conf. #1415, 2008; http://wms.lroc.asu.edu/apollo] provides a new dataset to search for potential scarps. We examined more than 100 digitized Metric Camera image scans, and from these identified 81 images with favorable lighting (incidence angles between about 55 and 80 deg.) to manually search for features that could be potential tectonic scarps. Previous surveys based on Panoramic Camera and Lunar Orbiter images found fewer than 100 lobate scarps in the highlands; in our Apollo Metric Camera image survey, we have found additional regions with one or more previously unidentified linear and curvilinear features on the lunar surface that may represent lobate thrust fault scarps. In this presentation we review the geologic characteristics and context of these newly-identified, potentially tectonic landforms. The lengths and relief of some of these linear and curvilinear features are consistent with previously identified lobate scarps. Most of these features are in the highlands, though a few occur along the edges of mare and/or crater ejecta deposits. In many cases the resolution of the Metric Camera frames (~10 m/pix) is not adequate to unequivocally determine the origin of these features. Thus, to assess if the newly identified features have tectonic or other origins, we are examining them in higher-resolution Panoramic Camera (currently being scanned) and Lunar Reconnaissance Orbiter Camera Narrow Angle Camera images [Watters et al., this meeting, 2009].

  17. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

    A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image. The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and the pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
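
    The lookup-table idea can be illustrated with a simplified polar unwrap: precompute, for every pixel of the unwarped (angle × radius) view of the pipe wall, the source coordinates in the fisheye image, then remap each frame in one pass. This is a sketch under that simplification; the real system's mapping also accounts for the cylindrical geometry and lens calibration.

        import numpy as np
        import cv2

        def build_unwarp_lut(out_w, out_h, cx, cy, r_min, r_max):
            """Source fisheye coordinates for each pixel of the unwarped view:
            columns sweep the angle around the pipe axis, rows the image radius."""
            ang = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
            rad = np.linspace(r_min, r_max, out_h)
            aa, rr = np.meshgrid(ang, rad)
            map_x = (cx + rr * np.cos(aa)).astype(np.float32)
            map_y = (cy + rr * np.sin(aa)).astype(np.float32)
            return map_x, map_y

        # per frame, unwarping is then a single table lookup:
        # unwarped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)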

  18. Single-Camera Stereoscopy Setup to Visualize 3D Dusty Plasma Flows

    NASA Astrophysics Data System (ADS)

    Romero-Talamas, C. A.; Lemma, T.; Bates, E. M.; Birmingham, W. J.; Rivera, W. F.

    2016-10-01

    A setup to visualize and track individual particles in multi-layered dusty plasma flows is presented. The setup consists of a single camera with variable frame rate, and a pair of adjustable mirrors that project the same field of view from two different angles to the camera, allowing for three-dimensional tracking of particles. Flows are generated by inclining the plane in which the dust is levitated using a specially designed setup that allows for external motion control without compromising vacuum. Dust illumination is achieved with an optics arrangement that includes a Powell lens that creates a laser fan with adjustable thickness and with approximately constant intensity everywhere. Both the illumination and the stereoscopy setup allow for the camera to be placed at right angles with respect to the levitation plane, in preparation for magnetized dusty plasma experiments in which there will be no direct optical access to the levitation plane. Image data and analysis of unmagnetized dusty plasma flows acquired with this setup are presented.

  19. Ridges and Cliffs on Mercury Surface

    NASA Image and Video Library

    2008-01-20

    A complex history of geological evolution is recorded in this frame from the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS) instrument, taken during NASA's MESSENGER spacecraft's close flyby of Mercury on January 14, 2008.

  20. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography

    PubMed Central

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J.; French, Paul M. W.; McGinty, James

    2015-01-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound. PMID:25909009

  1. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography.

    PubMed

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J; French, Paul M W; McGinty, James

    2015-04-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound.
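
    With two simultaneous projections at a known angular separation, a sparse feature can be localized in 3-D from a single camera frame. A toy sketch for the orthogonal-axis case under a parallel-projection approximation; the axis conventions and scales are illustrative, not the paper's geometry.

        def locate_3d(view_a, view_b):
            """view_a: (x, y) pixel of a feature seen along the z axis;
            view_b: (y, z) pixel of the same feature seen along the x axis.
            The shared y coordinate is averaged as a consistency check."""
            x, y_a = view_a
            y_b, z = view_b
            return x, 0.5 * (y_a + y_b), z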

  2. The Activity of Comet 67P/Churyumov-Gerasimenko as Seen by Rosetta/OSIRIS

    NASA Astrophysics Data System (ADS)

    Sierks, H.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Rickman, H.; Koschny, D.

    2015-12-01

    The Rosetta mission of the European Space Agency arrived at its target comet, 67P/Churyumov-Gerasimenko, on August 6, 2014. OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) is the scientific imaging system onboard Rosetta. OSIRIS consists of a Narrow Angle Camera (NAC) for nucleus surface and dust studies and a Wide Angle Camera (WAC) for wide-field gas and dust coma investigations. OSIRIS observed the coma and the nucleus of comet 67P/C-G during the approach, arrival, and landing of PHILAE. OSIRIS continued comet monitoring and mapping of surface and activity in 2015, with high-resolution close flybys and remote wide-angle observations. The scientific results reveal a nucleus with two lobes and varied morphology. Active regions are located at steep cliffs and collapsed pits, which form collimated gas jets. Dust is accelerated by the gas, forming bright jet filaments and the large-scale, diffuse coma of the comet. We will present activity and surface changes observed in the Northern and Southern hemispheres and around perihelion passage.

  3. Junocam: Juno's Outreach Camera

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  4. Compact Kirkpatrick–Baez microscope mirrors for imaging laser-plasma x-ray emission

    DOE PAGES

    Marshall, F. J.

    2012-07-18

    Compact Kirkpatrick–Baez microscope mirror components for use in imaging laser-plasma x-ray emission have been manufactured, coated, and tested. A single mirror pair has dimensions of 14 × 7 × 9 mm and a best resolution of ~5 μm. The mirrors are coated with Ir, providing a useful energy range of 2-8 keV when operated at a grazing angle of 0.7°. The mirrors can be circularly arranged to provide 16 images of the target emission, a configuration best suited for use in combination with a custom framing camera. An alternative arrangement of the mirrors would allow alignment of the images with a four-strip framing camera.

  5. Two Titans

    NASA Image and Video Library

    2017-08-11

    These two views of Saturn's moon Titan exemplify how NASA's Cassini spacecraft has revealed the surface of this fascinating world. Cassini carried several instruments to pierce the veil of hydrocarbon haze that enshrouds Titan. The mission's imaging cameras also have several spectral filters sensitive to specific wavelengths of infrared light that are able to make it through the haze to the surface and back into space. These "spectral windows" have enabled the imaging cameras to map nearly the entire surface of Titan. In addition to Titan's surface, images from both the imaging cameras and VIMS have provided windows into the moon's ever-changing atmosphere, chronicling the appearance and movement of hazes and clouds over the years. A large, bright and feathery band of summer clouds can be seen arcing across high northern latitudes in the view at right. These views were obtained with the Cassini spacecraft narrow-angle camera on March 21, 2017. Images taken using red, green and blue spectral filters were combined to create the natural-color view at left. The false-color view at right was made by substituting an infrared image (centered at 938 nanometers) for the red color channel. The views were acquired at a distance of approximately 613,000 miles (986,000 kilometers) from Titan. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21624

  6. Boundary Layer Transition Detection on a Rotor Blade Using Rotating Mirror Thermography

    NASA Technical Reports Server (NTRS)

    Heineck, James T.; Schuelein, Erich; Raffel, Markus

    2014-01-01

    Laminar-to-turbulent transition on a rotor blade in hover has been imaged using an area-scan infrared camera. A new method for tracking a blade using a rotating mirror was employed. The mirror axis of rotation roughly corresponded to the rotor axis of rotation and the mirror rotational frequency is 1/2 that of the rotor. This permitted the use of cameras whose integration time was too long to prevent image blur due to the motion of the blade. This article will show the use of this method for a rotor blade at different collective pitch angles.

  7. Image dissector camera system study

    NASA Technical Reports Server (NTRS)

    Howell, L.

    1984-01-01

    Various aspects of a rendezvous and docking system using an image dissector detector as compared to a GaAs detector were discussed. Investigation into a gimbled scanning system is also covered and the measured video response curves from the image dissector camera are presented. Rendezvous will occur at ranges greater than 100 meters. The maximum range considered was 1000 meters. During docking, the range, range-rate, angle, and angle-rate to each reflector on the satellite must be measured. Docking range will be from 3 to 100 meters. The system consists of a CW laser diode transmitter and an image dissector receiver. The transmitter beam is amplitude modulated with three sine wave tones for ranging. The beam is coaxially combined with the receiver beam. Mechanical deflection of the transmitter beam, + or - 10 degrees in both X and Y, can be accomplished before or after it is combined with the receiver beam. The receiver will have a field-of-view (FOV) of 20 degrees and an instantaneous field-of-view (IFOV) of two milliradians (mrad) and will be electronically scanned in the image dissector. The increase in performance obtained from the GaAs photocathode is not needed to meet the present performance requirements.

  8. Formations in Context (or, what is it?)

    NASA Image and Video Library

    2018-04-02

    This image from NASA's Mars Reconnaissance Orbiter is a close-up of a trough, along with channels draining into the depression. Some HiRISE images show strange-looking formations. Sometimes it helps to look at Context Camera images to understand the circumstances of a scene -- like this cutout from CTX 033783_1509 -- which here shows an impact crater with a central peak, and a collapse depression with concentric troughs just north of that peak. On the floor of the trough is some grooved material that we typically see in middle latitude regions where there has been glacial flow. These depressions with concentric troughs exist elsewhere on Mars, and their origins remain a matter of debate. NB: The Context Camera is another instrument onboard MRO, and it has a larger viewing angle than HiRISE, but less resolution capability than our camera. https://photojournal.jpl.nasa.gov/catalog/PIA22348

  9. Visual Tour Based on Panaromic Images for Indoor Places in Campus

    NASA Astrophysics Data System (ADS)

    Bakirman, T.

    2012-07-01

    In this paper, we aim to create a visual tour based on panoramic images for the Civil Engineering Faculty of Yildiz Technical University. For this purpose, panoramic images had to be obtained: photos were taken with a tripod, so as to keep the same angle of view in every photo, and panoramic images were created by stitching the photos. Two cameras with different focal lengths were used. From the panoramic images, a visual tour with navigation tools was created.
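
    Stitching tripod-rotated views into a panorama is well supported by off-the-shelf tools; a minimal sketch with OpenCV's high-level stitcher follows (the file names are illustrative, and this is not necessarily the tool the authors used).

        import cv2

        # load the rotated views taken from the fixed tripod position
        images = [cv2.imread(f"view_{i:02d}.jpg") for i in range(12)]

        # the high-level stitcher handles feature matching, warping, and blending
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, pano = stitcher.stitch(images)
        if status == cv2.Stitcher_OK:
            cv2.imwrite("panorama.jpg", pano)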

  10. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

    We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images, and the scene is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
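
    The range measurement behind this is plain triangulation: with focal length f (in pixels) and projector-camera baseline B, a spot's disparity d gives Z = f·B/d, and the range sensitivity scales as Z²/(f·B). A small sketch; the numbers in the usage comment are assumed for illustration, not taken from the paper.

        def spot_range(f_px, baseline_m, disparity_px):
            """Triangulated range to one projected spot: Z = f*B/d."""
            return f_px * baseline_m / disparity_px

        def range_error(f_px, baseline_m, z_m, disparity_err_px=0.1):
            """First-order range error for a given spot-localization error:
            dZ = Z^2/(f*B) * dd, so ~0.1 px localization supports sub-mm
            ranging near 1 m for plausible f and B."""
            return z_m * z_m * disparity_err_px / (f_px * baseline_m)

        # e.g. f = 1500 px, B = 0.15 m: range_error(1500, 0.15, 1.0) ~ 0.44 mm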

  11. Systems and Methods for Imaging of Falling Objects

    NASA Technical Reports Server (NTRS)

    Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)

    2014-01-01

    Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
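
    After rectification and matching, the digital-reconstruction step is standard multi-view triangulation. A minimal two-camera sketch with OpenCV, assuming projection matrices from a prior calibration; variable names are illustrative.

        import numpy as np
        import cv2

        def reconstruct(P1, P2, pts1, pts2):
            """Triangulate matched correspondence points (N x 2 arrays) from
            two calibrated views with 3x4 projection matrices P1 and P2."""
            X = cv2.triangulatePoints(P1, P2,
                                      pts1.T.astype(float),
                                      pts2.T.astype(float))
            return (X[:3] / X[3]).T    # homogeneous -> Euclidean, (N, 3)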

  12. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.

  13. Have a Nice Spring! MOC Revisits "Happy Face" Crater

    NASA Image and Video Library

    2005-05-16

    Smile! Spring has sprung in the martian southern hemisphere. With it comes the annual retreat of the winter polar frost cap. This view of "Happy Face Crater"--officially named "Galle Crater"--shows patches of white water ice frost in and around the crater's south-facing slopes. Slopes that face south will retain frost longer than north-facing slopes because they do not receive as much sunlight in early spring. This picture is a composite of images taken by the Mars Global Surveyor Mars Orbiter Camera (MOC) red and blue wide angle cameras. The wide angle cameras were designed to monitor the changing weather, frost, and wind patterns on Mars. Galle Crater is located on the east rim of the Argyre Basin and is about 215 kilometers (134 miles) across. In this picture, illumination is from the upper left and north is up. http://photojournal.jpl.nasa.gov/catalog/PIA02325

  14. Narrow Angle movie

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This brief three-frame movie of the Moon was made from three Cassini narrow-angle images as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. The purpose of this particular set of images was to calibrate the spectral response of the narrow-angle camera and to test its 'on-chip summing mode' data compression technique in flight. From left to right, they show the Moon in the green, blue and ultraviolet regions of the spectrum in 40, 60 and 80 millisecond exposures, respectively. All three images have been scaled so that the brightness of Crisium basin, the dark circular region in the upper right, is the same in each image. The spatial scale in the blue and ultraviolet images is 1.4 miles per pixel (2.3 kilometers). The original scale in the green image (which was captured in the usual manner and then reduced in size by 2x2 pixel summing within the camera system) was 2.8 miles per pixel (4.6 kilometers). It has been enlarged for display to the same scale as the other two. The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  15. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter divides rays horizontally according to incidence angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images is as low as 6% for a fabricated binocular image sensor and 7% for a quad-ocular image sensor. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision as a way to reduce the 3D fatigue of viewers.

  16. WindCam and MSPI: two cloud and aerosol instrument concepts derived from Terra/MISR heritage

    NASA Astrophysics Data System (ADS)

    Diner, David J.; Mischna, Michael; Chipman, Russell A.; Davis, Ab; Cairns, Brian; Davies, Roger; Kahn, Ralph A.; Muller, Jan-Peter; Torres, Omar

    2008-08-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has been acquiring global cloud and aerosol data from polar orbit since February 2000. MISR acquires moderately high-resolution imagery at nine view angles from nadir to 70.5°, in four visible/near-infrared spectral bands. Stereoscopic parallax, time lapse among the nine views, and the variation of radiance with angle and wavelength enable retrieval of geometric cloud and aerosol plume heights, height-resolved cloud-tracked winds, and aerosol optical depth and particle property information. Two instrument concepts based upon MISR heritage are in development. The Cloud Motion Vector Camera, or WindCam, is a simplified version comprised of a lightweight, compact, wide-angle camera to acquire multiangle stereo imagery at a single visible wavelength. A constellation of three WindCam instruments in polar Earth orbit would obtain height-resolved cloud-motion winds with daily global coverage, making it a low-cost complement to a spaceborne lidar wind measurement system. The Multiangle SpectroPolarimetric Imager (MSPI) is aimed at aerosol and cloud microphysical properties, and is a candidate for the National Research Council Decadal Survey's Aerosol-Cloud-Ecosystem (ACE) mission. MSPI combines the capabilities of MISR with those of other aerosol sensors, extending the spectral coverage to the ultraviolet and shortwave infrared and incorporating high-accuracy polarimetric imaging. Based on requirements for the nonimaging Aerosol Polarimeter Sensor on NASA's Glory mission, a degree of linear polarization uncertainty of 0.5% is specified within a subset of the MSPI bands. We are developing a polarization imaging approach using photoelastic modulators (PEMs) to accomplish this objective.

  17. Southern Florida's River of Grass

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Florida's Everglades is a region of broad, slow-moving sheets of water flowing southward over low-lying areas from Lake Okeechobee to the Gulf of Mexico. In places this remarkable 'river of grass' is 80 kilometers wide. These images from the Multi-angle Imaging SpectroRadiometer show the Everglades region on January 16, 2002. Each image covers an area measuring 191 kilometers x 205 kilometers. The data were captured during Terra orbit 11072.

    On the left is a natural color view acquired by MISR's nadir camera. A portion of Lake Okeechobee is visible at the top, to the right of image center. South of the lake, whose name derives from the Seminole word for 'big water,' an extensive region of farmland known as the Everglades Agricultural Area is recognizable by its many clustered squares. Over half of the sugar produced in the United States is grown here. Urban areas along the east coast and in the northern part of the image extend to the boundaries of Big Cypress Swamp, situated north of Everglades National Park.

    The image on the right combines red-band data from the 46-degree backward, nadir and 46-degree forward-viewing camera angles to create a red, green, blue false-color composite. One of the interesting uses of the composite image is for detecting surface water. Wet surfaces appear blue in this rendition because sun glitter produces a greater signal at the forward camera's view angle. Wetlands visible in these images include a series of shallow impoundments called Water Conservation Areas which were built to speed water flow through the Everglades in times of drought. In parts of the Everglades, these levees and extensive systems such as the Miami and Tamiami Canals have altered the natural cycles of water flow. For example, the water volume of the Shark River Slough, a natural wetland which feeds Everglades National Park, is influenced by the Tamiami Canal. The unique and intrinsic value of the Everglades is now widely recognized, and efforts to restore the natural water cycles are underway.
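
    For readers who want to reproduce this kind of product, the composite is conceptually just a channel stack of co-registered angular views; a minimal sketch, with illustrative array names, follows.

    ```python
    import numpy as np

    def multiangle_composite(red_backward, red_nadir, red_forward):
        """Stack red-band images from the backward, nadir, and forward
        cameras into the R, G, and B channels of a false-color composite
        (the three views are assumed to be co-registered already).
        Angular differences then appear as color: sun glitter at the
        forward angle tints smooth water blue in this rendition.
        """
        return np.dstack([red_backward, red_nadir, red_forward])
    ```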

  18. Volga Delta and the Caspian Sea

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Russia's Volga River is the largest river system in Europe, draining over 1.3 million square kilometers of catchment area into the Caspian Sea. The brackish Caspian is Earth's largest landlocked water body, and its isolation from the world's oceans has enabled the preservation of several unique animal and plant species. The Volga provides most of the Caspian's fresh water and nutrients, and also discharges large amounts of sediment and industrial waste into the relatively shallow northern part of the sea. These images of the region were captured by the Multi-angle Imaging SpectroRadiometer on October 5, 2001, during Terra orbit 9567. Each image represents an area of approximately 275 kilometers x 376 kilometers.

    The left-hand image is from MISR's nadir (vertical-viewing) camera, and shows how light is reflected at red, green, and blue wavelengths. The right-hand image is a false color composite of red-band imagery from MISR's 60-degree backward, nadir, and 60-degree forward-viewing cameras, displayed as red, green, and blue, respectively. Here, color variations indicate how light is reflected at different angles of view. Water appears blue in the right-hand image, for example, because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. The rougher-textured vegetated wetlands near the coast exhibit preferential backscattering, and consequently appear reddish. A small cloud near the center of the delta separates into red, green, and blue components due to geometric parallax associated with its elevation above the surface.

    Other notable features within the images include several linear features located near the Volga Delta shoreline. These long, thin lines are artificially maintained shipping channels, dredged to depths of at least 2 meters. The crescent-shaped Kulaly Island, also known as Seal Island, is visible near the right-hand edge of the images.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  19. Airburst height computation method of Sea-Impact Test

    NASA Astrophysics Data System (ADS)

    Kim, Jinho; Kim, Hyungsup; Chae, Sungwoo; Park, Sungho

    2017-05-01

    This paper describes methods for measuring the airburst height of projectiles and rockets. In general, the airburst height can be determined by triangulation or from the images of a camera installed on a radar. These previous methods have limitations when missiles impact the sea surface: to apply triangulation, the cameras should be installed so that the lines of sight intersect at angles between 60 and 120 degrees, and there may be no suitable observation towers on which to install the optical systems; and when the range of the missile exceeds 50 km, the images from the radar-mounted camera can be useless. This paper proposes a method to measure the airburst height of a sea-impact projectile using a single camera. The camera is installed on an island near the impact area, and the height is computed from the position and attitude of the camera together with the sea level. To demonstrate the proposed method, its results are compared with those from the previous methods.
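
    The geometry is simple enough to sketch. Assuming a flat sea surface and a known horizontal distance to the burst (for example, from where the camera's sight line to the impact point meets the sea), the height follows from the camera altitude and the elevation angle of the burst in the image. The function and the numbers below are illustrative assumptions, not the paper's formulation.

    ```python
    import math

    def airburst_height(cam_alt_m, ground_range_m, elevation_rad):
        """Flat-sea sketch: burst height above sea level from one camera.

        cam_alt_m      : camera altitude above sea level
        ground_range_m : horizontal distance from camera to the burst
        elevation_rad  : elevation angle of the burst in the image,
                         derived from the camera attitude and the
                         pixel's angular offset
        """
        return cam_alt_m + ground_range_m * math.tan(elevation_rad)

    # Example: camera 120 m up, burst 8 km away, 0.9 deg above horizontal
    print(airburst_height(120.0, 8000.0, math.radians(0.9)))  # ~245.7 m
    ```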

  20. Agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula in the Reykjavik eye study.

    PubMed

    Csutak, A; Lengyel, I; Jonasson, F; Leung, I; Geirsdottir, A; Xing, W; Peto, T

    2010-10-01

    To establish the agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula. In 2008, the 12-year follow-up was conducted on 573 participants of the Reykjavik Eye Study. This study included the use of the Optos P200C AF ultra wide-angle laser scanning ophthalmoscope alongside a Zeiss FF 450 conventional digital fundus camera on 121 eyes with or without age-related macular degeneration, graded using the International Classification System. Of these eyes, detailed grading was carried out on five cases each of hard drusen, geographic atrophy, and chorioretinal neovascularisation, and six cases of soft drusen. Exact agreement and κ-statistics were calculated. Comparison of the conventional and ultra wide-angle images in the macula showed an overall 96.43% agreement (κ=0.93) with no disagreement at end-stage disease, although in one eye chorioretinal neovascularisation was graded as drusenoid pigment epithelial detachment. For patients with drusen only, the exact agreement was 96.1%. The detailed grading showed no clinically significant disagreement between the conventional 45° and 200° images. On the basis of our results, there is good agreement between grading of conventional and ultra wide-angle images in the macula.

  1. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.

    PubMed

    Liu, Xinyang; Rice, Christina E; Shekhar, Raj

    2017-10-01

    The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two movable parts, the telescope and the camera head, creates a rotation offset between the actual and the projected position of an object in the camera image. A calibration method tailored to compensate for this offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image. Formulas for updating the camera matrix in terms of clockwise and counterclockwise rotations were also developed. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of EM tracking. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with an image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.
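
    The core of the offset compensation can be pictured as an in-plane rotation of image coordinates about the estimated rotation center by the tracked telescope/camera-head angle. The sketch below is a generic illustration of that idea, not the paper's camera-matrix update formulas; the center and angle are assumed to come from the calibration and the EM tracking.

    ```python
    import numpy as np

    def compensate_rotation(points_px, center_px, angle_rad):
        """Undo the telescope/camera-head rotation offset by rotating
        image points about the estimated rotation center."""
        c, s = np.cos(-angle_rad), np.sin(-angle_rad)   # inverse rotation
        Rz = np.array([[c, -s], [s, c]])
        pts = np.asarray(points_px, dtype=float) - center_px
        return pts @ Rz.T + center_px

    center = np.array([320.0, 240.0])   # hypothetical rotation center
    print(compensate_rotation([[420.0, 240.0]], center, np.radians(30)))
    ```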

  2. Geometric approach to the design of an imaging probe to evaluate the iridocorneal angle structures

    NASA Astrophysics Data System (ADS)

    Hong, Xun Jie Jeesmond; V. K., Shinoj; Murukeshan, V. M.; Baskaran, M.; Aung, Tin

    2017-06-01

    Photographic imaging methods allow the tracking of anatomical changes in the iridocorneal angle structures and the monitoring of treatment responses over time. In this work, we aim to design an imaging probe to evaluate the iridocorneal angle structures using geometrical optics. We first perform an analytical analysis of light propagation from the anterior chamber of the eye to the exterior medium using Snell's law. This is followed by a strategy to achieve uniform near-field irradiance, by simplifying the complex non-rotationally-symmetric irradiance distribution of LEDs tilted at an angle. The optimization is based on the geometric design considerations of an angled circular ring array of 4 LEDs (or a 2 × 2 square LED array). The design equations give insight into variable parameters such as the illumination angle of the LEDs, the ring array radius, the viewing angle of the LEDs, and the working distance. A micro color CCD video camera that has sufficient resolution to resolve the iridocorneal angle structures at the required working distance is then chosen. The proposed design fulfils the safety requirements recommended by the International Commission on Non-Ionizing Radiation Protection.
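
    The Snell's-law analysis can be illustrated in a few lines. The refractive indices below are textbook values (aqueous humour ≈ 1.336, air = 1.0), assumed here rather than taken from the paper; the snippet also shows the well-known reason the iridocorneal angle cannot be seen by direct frontal viewing: rays steeper than the critical angle are totally internally reflected.

    ```python
    import math

    def refract_angle(theta_i_rad, n_in, n_out):
        """Snell's law, n_in*sin(theta_i) = n_out*sin(theta_t).

        Returns the transmitted angle in radians, or None on total
        internal reflection.
        """
        s = n_in * math.sin(theta_i_rad) / n_out
        if abs(s) > 1.0:
            return None  # totally internally reflected
        return math.asin(s)

    # Aqueous humour (~1.336) to air: the critical angle is ~48.5 deg,
    # so steeper rays from the angle structures never leave the eye.
    print(math.degrees(math.asin(1.0 / 1.336)))          # critical angle
    print(refract_angle(math.radians(30), 1.336, 1.0))   # transmitted ray
    ```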

  3. Camera characterization for all-sky polarization measurements during the 2017 solar eclipse

    NASA Astrophysics Data System (ADS)

    Hashimoto, Taiga; Dahl, Laura M.; Laurie, Seth A.; Shaw, Joseph A.

    2017-08-01

    A solar eclipse provides a rare opportunity to observe skylight polarization during conditions that are fundamentally different than what we see every day. On 21 August 2017 we will measure the skylight polarization during a total solar eclipse in Rexburg, Idaho, USA. Previous research has shown that during totality the sky polarization pattern is altered significantly to become nominally symmetric about the zenith. However, there are still questions remaining about the details of how surface reflectance near the eclipse observation site and optical properties of aerosols in the atmosphere influence the totality sky polarization pattern. We will study how skylight polarization in a solar eclipse changes through each phase and how surface and atmospheric features affect the measured polarization signatures. To accomplish this, fully characterizing the cameras and fisheye lenses is critical. This paper reports measurements that include finding the camera sensitivity and its relationship to the required short exposure times, measuring the camera's spectral response function, mapping the angles of each camera pixel with the fisheye lens, and taking test measurements during daytime and twilight conditions. The daytime polarimetric images were compared to images from an existing all-sky polarization imager and a polarimetric radiative transfer model.
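
    The pixel-to-angle mapping step can be sketched under an assumed equidistant fisheye model (r = f·θ); a real characterization, as described above, would fit the measured projection curve of the actual lens rather than assume one.

    ```python
    import numpy as np

    def pixel_to_sky_angles(u, v, cx, cy, f_px):
        """Map a pixel (u, v) to (zenith, azimuth) under an equidistant
        fisheye model, r = f * theta. The model and principal point are
        assumptions for illustration only.
        """
        dx, dy = u - cx, v - cy
        r = np.hypot(dx, dy)
        zenith = r / f_px                 # radians, equidistant model
        azimuth = np.arctan2(dy, dx)
        return zenith, azimuth

    # Example: a 1000-px image radius spanning 90 deg -> f = 1000/(pi/2)
    z, a = pixel_to_sky_angles(1500.0, 1000.0, 1000.0, 1000.0,
                               1000 / (np.pi / 2))
    print(np.degrees(z), np.degrees(a))   # 45 deg zenith, 0 deg azimuth
    ```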

  4. A remote camera operation system using a marker attached cap

    NASA Astrophysics Data System (ADS)

    Kawai, Hironori; Hama, Hiromitsu

    2005-12-01

    In this paper, we propose a convenient system to control a remote camera according to the eye-gazing direction of the operator, which is approximated by calculating the face direction by means of image processing. The operator puts a marker-attached cap on his head, and the system images the operator from above with a single video camera. Three markers are set on the cap; three is the minimum number needed to calculate the tilt angle of the head. Using more markers makes the system more robust to occlusion and tolerates a wider range of head motion. The markers must not lie on any common three-dimensional straight line. To compensate for changes in marker color due to illumination conditions, the threshold for marker extraction is decided adaptively using a k-means clustering method. The system was implemented with MATLAB on a personal computer, and real-time operation was realized. The experimental results confirmed the robustness of the system, and the tilt and pan angles of the head could be calculated with sufficient accuracy for use.
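
    The adaptive thresholding idea can be sketched as a two-cluster (k = 2) one-dimensional k-means on pixel values, placing the threshold between the two cluster centres. This is a generic illustration of the approach, not the authors' MATLAB implementation, and the pixel statistics are made up.

    ```python
    import numpy as np

    def adaptive_threshold(values, iters=20):
        """1-D k-means with k=2 on pixel values; the threshold is the
        midpoint of the two cluster centres, so it tracks illumination
        changes instead of being fixed in advance.
        """
        v = np.asarray(values, dtype=float).ravel()
        lo, hi = v.min(), v.max()                      # initial centres
        for _ in range(iters):
            is_hi = np.abs(v - lo) > np.abs(v - hi)    # nearest centre
            lo, hi = v[~is_hi].mean(), v[is_hi].mean()
        return 0.5 * (lo + hi)

    # Bright markers (~200) on a darker head/background (~60)
    rng = np.random.default_rng(0)
    pixels = np.concatenate([rng.normal(60, 10, 5000),
                             rng.normal(200, 15, 200)])
    print(adaptive_threshold(pixels))   # roughly midway, ~130
    ```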

  5. A multi-modal stereo microscope based on a spatial light modulator.

    PubMed

    Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J

    2013-07-15

    Spatial Light Modulators (SLMs) can emulate the classic microscopy techniques, including differential interference (DIC) contrast and (spiral) phase contrast. Their programmability entails the benefit of flexibility or the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which gives various imaging modes laterally displaced on the same camera chip. In addition, there is a wide angle camera for visualisation of a larger region of the sample.

  6. Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System

    NASA Astrophysics Data System (ADS)

    Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.

    2018-02-01

    We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.

  7. NASA MISR Studies Smoke Plumes from California Sand Fire

    NASA Image and Video Library

    2016-08-02

    The Sand Fire near Santa Clarita, California, burned some 39,000 acres (60 square miles, or 160 square kilometers). Thousands of residents were evacuated, and the fire claimed the life of one person. The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite passed over the region on July 23 around 11:50 a.m. PDT. At left is an image acquired by MISR's 60-degree forward-viewing camera; the oblique view angle makes the smoke more apparent than it would be in a more conventional vertical view. This cropped image is about 185 miles (300 kilometers) wide, with smoke from the Sand Fire visible on its right-hand side. Stereoscopic analysis of MISR's multiple camera angles is used to compute the height of the smoke plume from the Sand Fire. In the right-hand image, these heights are superimposed on the underlying image. The color scale shows that the plume extends up to about 4 miles (6 kilometers) above its source in Santa Clarita, but rapidly diminishes in height as winds push it to the southwest. The data compare well with a pilot report issued at Los Angeles International Airport on the evening of July 22, which reported smoke at 15,000-18,000 feet altitude (4.5 to 5.5 kilometers). Air quality warnings were issued for the San Fernando Valley and the western portion of Los Angeles due to this low-hanging smoke; however, data from air quality monitoring instruments seem to indicate that the smoke did not actually reach the ground. These data were captured during Terra orbit 88284. http://photojournal.jpl.nasa.gov/catalog/PIA20724

  8. Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT

    NASA Astrophysics Data System (ADS)

    Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.

    2009-01-01

    We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small-opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is imaging objects such as a mouse brain, which could find potential applications in molecular imaging.

  9. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    NASA Astrophysics Data System (ADS)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    Efficiently producing planetary mapping products from orbital remote sensing images remains a great challenge. Photogrammetric processing of planetary stereo images suffers from several disadvantages, such as the lack of ground control information and of informative features; among these, image matching is the most difficult task in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating digital terrain models (DTMs) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM-and-orthophoto scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and to improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results for planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
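
    Back-projection for a pushbroom sensor can be pictured as a search over scan lines: each image line has its own exterior orientation, and a ground point belongs to the line whose instantaneous image plane contains it. The sketch below assumes a hypothetical per-line orientation function and a sign change of the along-track coordinate across the image; it illustrates the general idea, not the paper's particular fast algorithm.

    ```python
    import numpy as np

    def backproject(ground_pt, orientation, f_px, n_lines):
        """Find the (line, sample) of a ground point in a pushbroom
        image by bisection over scan lines. `orientation(i)` is an
        assumed function returning the perspective centre C and
        rotation R of line i; on the correct line, the along-track
        image coordinate of the projected point crosses zero.
        """
        def along_track(i):
            C, R = orientation(i)
            p = R @ (np.asarray(ground_pt, float) - C)  # camera frame
            return f_px * p[1] / p[2]                   # along-track y

        lo, hi = 0, n_lines - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            # keep the half-interval that still brackets the zero
            if along_track(mid) * along_track(lo) > 0:
                lo = mid
            else:
                hi = mid
        C, R = orientation(lo)
        p = R @ (np.asarray(ground_pt, float) - C)
        return lo, f_px * p[0] / p[2]                   # (line, sample)
    ```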

  10. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color model; by correlating this segmented face area with the right input image, the location coordinates of the target face are acquired, and these values are used to control the pan/tilt system through a modified PID-based recursive controller. Also, using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system is calculated through a triangulation method. From this distance and the pan and tilt angles, the target's real position in world space is acquired, and from it the height and stride values are finally extracted. Experiments with video images of 16 moving persons show that a person can be identified with these extracted height and stride parameters.
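
    The distance and world-position computations can be sketched as follows. The pinhole-triangulation form and the zero-offset pan/tilt geometry (camera at the origin, no mounting offsets) are simplifying assumptions, and the numbers are illustrative.

    ```python
    import math

    def face_distance(disparity_px, focal_px, baseline_m):
        """Stereo triangulation: distance = f * B / d."""
        return focal_px * baseline_m / disparity_px

    def face_world_position(distance_m, pan_rad, tilt_rad):
        """Place the tracked face in world space from the camera's
        pan/tilt angles, with the camera at the origin (an assumed,
        offset-free geometry)."""
        x = distance_m * math.cos(tilt_rad) * math.sin(pan_rad)
        y = distance_m * math.cos(tilt_rad) * math.cos(pan_rad)
        z = distance_m * math.sin(tilt_rad)
        return x, y, z

    d = face_distance(disparity_px=40.0, focal_px=800.0, baseline_m=0.12)
    print(d)                                             # 2.4 m
    print(face_world_position(d, math.radians(10), math.radians(-5)))
    ```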

  11. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible-wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The north and south poles will be mapped at 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high-sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood, it can be greatly reduced during ground processing. All three cameras were calibrated in the laboratory under ambient conditions. Future thermal vacuum tests will characterize critical behaviors across the full range of lunar operating temperatures. In-flight tests will check for changes in response after launch and provide key data for meeting the requirements of 1% relative and 10% absolute radiometric calibration.

  12. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  13. Techniques for Surface-Temperature Measurements and Transition Detection on Projectiles at Hypersonic Velocities--Status Report No. 2

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.; Wilder, M. C.

    2006-01-01

    The latest developments in a research effort to advance techniques for measuring surface temperatures and heat fluxes and determining transition locations on projectiles in hypersonic free flight in a ballistic range are described. Spherical and hemispherical titanium projectiles were launched at muzzle velocities of 4.6-5.8 km/sec into air and nitrogen at pressures of 95-380 Torr. Hemisphere models with diameters of 2.22 cm had maximum pitch and yaw angles of 5.5-8 degrees and 4.7-7 degrees, depending on whether or not they were launched using an evacuated launch tube. Hemisphere models with diameters of 2.86 cm had maximum pitch and yaw angles of 2.0-2.5 degrees. Three intensified charge-coupled-device (ICCD) cameras with wavelength sensitivity ranges of 480-870 nm, as well as one infrared camera with a wavelength sensitivity range of 3 to 5 microns, were used to obtain images of the projectiles in flight. Helium plumes were used to remove the radiating gas cap around the projectiles at the locations where ICCD camera images were taken. ICCD and infrared (IR) camera images of titanium hemisphere projectiles at velocities of 4.0-4.4 km/sec are presented, as well as preliminary temperature data for these projectiles. Comparisons were made of normalized temperature data for shots at approx. 190 Torr in air and nitrogen, with and without the launch tube evacuated. Shots into nitrogen had temperatures 6% lower than those into air. Evacuation of the launch tube was also found to lower the projectile temperatures by approx. 6%.

  14. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    NASA Astrophysics Data System (ADS)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

    Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility for collecting remote sensing imagery for precision agriculture, vegetation monitoring, and environmental investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs), yielding band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance: the Tetracam Miniature Multiple Camera Array (MiniMCA), the Micasense RedEdge, and the Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. The results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate and robust results and rapid image processing.
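
    A generic stand-in for the band-to-band transform can be sketched with OpenCV: estimate a projective transform between a band and the master band from matched features, then warp. The authors' modified projective transformation and robust refinement go beyond this plain homography, and cross-band feature matching may need more care in practice than the sketch suggests.

    ```python
    import cv2
    import numpy as np

    def coregister_band(slave, master):
        """Warp one spectral band onto another with a plain projective
        transform -- a simplified stand-in for RABBIT's MPT + RAC."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(slave, None)
        k2, d2 = orb.detectAndCompute(master, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        h, w = master.shape[:2]
        return cv2.warpPerspective(slave, H, (w, h))
    ```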

  15. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing of the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during the mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology for testing IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  16. Rover imaging system for the Mars rover/sample return mission

    NASA Technical Reports Server (NTRS)

    1993-01-01

    In the past year, the conceptual design of a panoramic imager for the Mars Environmental Survey (MESUR) Pathfinder was finished. A prototype camera was built and its performance was tested in the laboratory; the performance of this camera was excellent. Based on this work, we have recently proposed a small, lightweight, rugged, and highly capable Mars Surface Imager (MSI) instrument for the MESUR Pathfinder mission. A key aspect of our approach to optimizing the MSI design is that we treat image gathering, coding, and restoration as a whole, rather than as separate and independent tasks. Our approach leads to higher image quality, especially in the representation of fine detail with good contrast and clarity, without increasing either the complexity of the camera or the amount of data transmission. We have made significant progress over the past year in both the overall MSI system design and the detailed design of the MSI optics. We have taken a simple panoramic camera and upgraded it substantially into a prototype of the MSI flight instrument. The most recent version of the camera utilizes miniature wide-angle optics that image directly onto a 3-color, 2096-element CCD line array. There are several data-taking modes, providing resolution as high as 0.3 mrad/pixel. Analysis tasks that were performed or are underway with the test data from the prototype camera include the following: construction of 3-D models of imaged scenes from stereo data, first for controlled scenes and later for field scenes; and checks on geometric fidelity, including alignment errors, mast vibration, and oscillation in the drive system. We have outlined a number of tasks planned for Fiscal Year '93 to prepare us for submission of a flight instrument proposal for MESUR Pathfinder.

  17. Photometric Lambert Correction for Global Mosaicking of HRSC Data

    NASA Astrophysics Data System (ADS)

    Walter, Sebastian; Michael, Greg; van Gasselt, Stephan; Kneissl, Thomas

    2015-04-01

    The High Resolution Stereo Camera (HRSC) is a push-broom image sensor onboard Mars Express recording the Martian surface in 3D and color. In orbit since 2004, the camera has obtained over 3,600 panchromatic image sequences covering about 70% of the planet's surface at 10-20 m/pixel. The composition of a homogeneous global mosaic is a major challenge due to the strongly elliptical and highly irregular orbit of the spacecraft, which often results in large variations of illumination and atmospheric conditions between individual images. For the purpose of a global mosaic at the full nadir resolution of 12.5 m per pixel, we present a first-order systematic photometric correction of the individual image sequences based on a Lambertian reflection model. During the radiometric calibration of the HRSC data, values for the reflectance scaling factor and the reflectance offset are added to the individual image labels. These parameters can be used for a linear transformation from the original DN values into spectral reflectance values. The spectral reflectance varies with the solar incidence angle, topography (changing the local incidence angle and therefore adding an extra geometry factor for each ground pixel), the bidirectional reflectance distribution function (BRDF) of the surface, and atmospheric effects. Mosaicking the spectral reflectance values together therefore sometimes shows large brightness differences between images. One major contributor to these brightness differences is the differing solar geometry due to the varying time of day at which the individual images were obtained; this causes two images of the same or adjacent areas to have different image brightnesses. As a first-order correction for the varying illumination conditions and resulting brightness variations, the images are corrected for the solar incidence angle by assuming ideal diffusely reflecting behaviour of the surface. This correction requires the calculation of the solar geometry for each image pixel by an image-to-ground function; for these calculations we use the VICAR framework and the SPICE library. Under the Lambertian assumption, the reflectance diminishment resulting from an inclined Sun angle can be corrected by dividing the measured reflectance by the cosine of the illumination angle. After rectification of the corrected images, the individual images are mosaicked together. The overall visual impression shows a much better integration of the individual image sequences. The correction removes the direct correlation between the reflectance and the incidence angles from the data. It does not account for topographic, atmospheric, or BRDF influences on the measurements. Since the main purpose of the global HRSC image mosaic is its application to geomorphologic studies with a good visual impression of the albedo variations and the topography, the remaining distortions at the image seams can be equalized by non-reversible image matching techniques.
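
    The correction itself is one line of arithmetic. Below is a sketch under the stated Lambertian assumption; the grazing-incidence guard and its cutoff angle are an added safety measure, not part of the described pipeline, and the parameter names merely stand in for the values read from the image labels.

    ```python
    import numpy as np

    def lambert_correct(dn, scale, offset, incidence_rad, max_inc_deg=85.0):
        """First-order HRSC-style correction sketch: DN -> reflectance
        via the label's linear terms, then division by cos(incidence).
        """
        refl = scale * np.asarray(dn, dtype=float) + offset
        cosi = np.cos(incidence_rad)
        # avoid division blow-up near grazing illumination (assumption)
        cosi = np.where(np.degrees(incidence_rad) < max_inc_deg, cosi, np.nan)
        return refl / cosi
    ```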

  18. Plenoptic particle image velocimetry with multiple plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2018-07-01

    Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single-camera configuration, which allows the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution: the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, including the volumetric calibration and the reconstruction algorithms (integral refocusing, filtered refocusing, multiplicative refocusing, and MART). It is shown that the addition of a second camera improves the reconstruction quality and removes the 'cigar'-like elongation associated with the single-camera system, while adding a third camera provides minimal further improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single- and two-camera configurations; the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally to a ring vortex, and comparisons are drawn among the four presented reconstruction algorithms: MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability; filtered refocusing produced the desired structure, albeit with more noise and variability; and integral refocusing struggled to produce a coherent vortex ring.

  19. Neptune Through a Clear Filter

    NASA Image and Video Library

    1999-07-25

    On July 23, 1989, NASA's Voyager 2 spacecraft took this picture of Neptune through a clear filter on its narrow-angle camera. The image on the right has a latitude and longitude grid added for reference. Neptune's Great Dark Spot is visible on the left.

  20. Atmospheric aerosol profiling with a bistatic imaging lidar system.

    PubMed

    Barnes, John E; Sharma, N C Parikh; Kaplan, Trevor B

    2007-05-20

    Atmospheric aerosols have been profiled using a simple, imaging, bistatic lidar system. A vertical laser beam is imaged onto a charge-coupled-device camera from the ground to the zenith with a wide-angle lens (CLidar). The altitudes are derived geometrically from the position of the camera and laser with submeter resolution near the ground. The system requires no overlap correction needed in monostatic lidar systems and needs a much smaller dynamic range. Nighttime measurements of both molecular and aerosol scattering were made at Mauna Loa Observatory. The CLidar aerosol total scatter compares very well with a nephelometer measuring at 10 m above the ground. The results build on earlier work that compared purely molecular scattered light to theory, and detail instrument improvements.
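
    The geometric altitude retrieval can be sketched for the simplest configuration: a vertical beam a fixed horizontal distance from the camera, under a flat-Earth approximation. The 150 m baseline is illustrative, not the Mauna Loa setup's actual value.

    ```python
    import math

    def scattering_altitude(baseline_m, elevation_rad):
        """CLidar geometry sketch: a vertical beam a horizontal distance
        D from the camera, viewed at elevation angle theta, is sampled
        at altitude z = D * tan(theta)."""
        return baseline_m * math.tan(elevation_rad)

    # With a 150 m camera-laser baseline, pixels near the horizon see
    # the beam close to the ground; pixels near zenith see it km up.
    for deg in (5, 45, 85, 89):
        print(deg, round(scattering_altitude(150.0, math.radians(deg)), 1))
    ```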

  1. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    PubMed

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities; it currently exists only at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for this class of endoscopes. The design and construction of a single-lens CMBF-aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, A. S., E-mail: alastair.moore@physics.org; Ahmed, M. F.; Soufli, R.

    A dual-channel streaked soft x-ray imager has been designed and used on high-energy-density physics experiments at the National Ignition Facility. This streaked imager creates two images of the same x-ray source using two slit apertures and a single shallow-angle reflection from a nickel mirror. Thin filters are used to create narrow-bandpass images at 510 eV and 360 eV. When measuring a Planckian spectrum, the brightness ratio of the two images can be translated into a color temperature, provided that the spectral sensitivity of the two images is well known. To reduce uncertainty and remove spectral features of the streak camera photocathode from this photon energy range, a thin photocathode of 100 nm CsI on 50 nm Al was implemented. Provided that the spectral shape is well known, uncertainties in the spectral sensitivity limit the accuracy of the temperature measurement to approximately 4.5% at 100 eV.
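
    The ratio-to-temperature inversion can be sketched for an ideal Planckian source, ignoring the per-channel spectral sensitivities that, as noted above, must be folded in for a real measurement. Energies are in keV, and the search bracket is an assumption.

    ```python
    import math

    def planck_ratio(T_keV, e1=0.510, e2=0.360):
        """Ratio of Planckian brightness at two photon energies,
        B(E) ~ E^3 / (exp(E/T) - 1), with no instrument response."""
        b = lambda e: e**3 / math.expm1(e / T_keV)
        return b(e1) / b(e2)

    def color_temperature(ratio, lo=0.02, hi=1.0, tol=1e-6):
        """Invert the ratio for T by bisection; for e1 > e2 the ratio
        increases monotonically with temperature."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if planck_ratio(mid) < ratio:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Round-trip check: recovers a 100 eV (0.100 keV) temperature
    print(color_temperature(planck_ratio(0.100)))
    ```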

  3. Challenges and solutions for high performance SWIR lens design

    NASA Astrophysics Data System (ADS)

    Gardner, M. C.; Rogers, P. J.; Wilde, M. F.; Cook, T.; Shipton, A.

    2016-10-01

    Shortwave infrared (SWIR) cameras are becoming increasingly attractive due to the improving size and resolution and the decreasing prices of InGaAs focal plane arrays (FPAs). The rapid development of competitively priced HD-performance SWIR cameras has not been matched in SWIR imaging lenses, with the result that the lens is now more likely to be the limiting factor in imaging quality than the FPA. Adapting existing lens designs from the visible region by re-coating for SWIR will improve total transmission, but diminished image quality metrics such as MTF, and in particular poor large-field-angle performance (vignetting, field curvature, and distortion), are serious consequences. To meet this challenge, original SWIR solutions are presented, including a wide-field-of-view fixed-focal-length lens for commercial machine vision (CMV) and a wide-angle, small, lightweight defence lens, and their relevant design considerations are discussed. Issues restricting suitable glass types are examined: the index and dispersion properties at SWIR wavelengths can differ significantly from their visible values, resulting in unusual glass combinations when matching doublet elements. The chosen materials simultaneously allow athermalization of the design and provide matched CTEs within the elements of doublets. Recently, thinned backside-illuminated InGaAs devices have made Vis-SWIR cameras viable. The SWIR band is sufficiently close to the visible that the same constituent materials can be used for AR coatings covering both bands. Keeping the lens short and the mass low can easily result in high incidence angles, which in turn complicate coating design, especially when extended beyond SWIR into the visible band. This paper also explores the potential performance of wideband Vis-SWIR AR coatings.

  4. Digital cartography of Io

    NASA Technical Reports Server (NTRS)

    Mcewen, Alfred S.; Duck, B.; Edwards, Kathleen

    1991-01-01

    A high-resolution controlled mosaic of the hemisphere of Io centered on longitude 310 degrees was produced using digital cartographic techniques. Approximately 80 Voyager 1 clear- and blue-filter frames were utilized, and the mosaic was merged with low-resolution color images. This dataset is compared to the geologic map of the region. Passage of the Voyager spacecraft through the Io plasma torus during acquisition of the highest-resolution images exposed the vidicon detectors to ionizing radiation, resulting in dark-current buildup on the vidicon. Because the vidicon is scanned from top to bottom, more charge accumulated toward the bottom of the frames, and the additive error increases from top to bottom as a ramp function; this ramp function was removed using a model. Photometric normalizations were applied using the Minnaert function. An attempt to use Hapke's photometric function revealed that it does not adequately describe Io's limb darkening at emission angles greater than 80 degrees, whereas the Minnaert function accurately describes the limb darkening up to emission angles of about 89 degrees. The improved set of discrete camera angles derived from this effort will be used in conjunction with the spacecraft telemetry pointing history file (the IPPS file), corrected at 4- or 12-second intervals, to derive a revised time history for the pointing of the Infrared Interferometric Spectrometer (IRIS). For IRIS observations acquired between camera shutterings, the IPPS file can be corrected by linear interpolation, provided that the spacecraft motions were continuous. Image areas corresponding to the fields of view of IRIS spectra acquired between camera shutterings will be extracted from the mosaic to place the IRIS observations and hotspot models into geologic context.
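
    For reference, the Minnaert law has the form I = I0 · cos^k(i) · cos^(k-1)(e), with incidence angle i, emission angle e, and limb-darkening parameter k. A sketch of the normalization follows; k = 0.7 is an assumed illustrative value, not the parameter actually fitted for the Io mosaic.

    ```python
    import numpy as np

    def minnaert_normalize(I, inc_rad, emi_rad, k=0.7):
        """Normalize brightness with the Minnaert law,
        I = I0 * cos(i)^k * cos(e)^(k-1),
        returning I0, the equivalent brightness at i = e = 0."""
        mu0 = np.cos(inc_rad)
        mu = np.cos(emi_rad)
        return np.asarray(I, float) / (mu0**k * mu**(k - 1.0))

    # A pixel near the limb (e = 80 deg) converted back to face-on terms
    print(minnaert_normalize(0.05, np.radians(40), np.radians(80)))
    ```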

  5. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

    Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.R.; Hawke, B. Ray Ray; Giguere, Thomas A.

    2012-01-01

    We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide-angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  6. Wide-field fundus imaging with trans-palpebral illumination.

    PubMed

    Toslak, Devrim; Thapa, Damber; Chen, Yanjun; Erol, Muhammet Kazim; Paul Chan, R V; Yao, Xincheng

    2017-01-28

    In conventional fundus imaging devices, transpupillary illumination is used to illuminate the inside of the eye: the illumination light is directed into the posterior segment of the eye through the cornea and passes through the pupillary area. Because the pupillary area is shared by the illumination beam and the observation path, pupil dilation is typically necessary for wide-angle fundus examination, and the field of view is inherently limited. An alternative approach is to deliver light through the sclera; it is possible to image a wider retinal area with trans-scleral illumination, but the required physical contact between the illumination probe and the sclera is a drawback of this method. We report here trans-palpebral illumination as a new method to deliver the light through the upper eyelid (palpebra). For this study, we used a 1.5-mm-diameter fiber with a warm white LED light source. To illuminate the inside of the eye, the fiber illuminator was placed at the location corresponding to the pars plana region. A custom-designed optical system was attached to a digital camera for retinal imaging. The optical system contained a 90-diopter ophthalmic lens and a 25-diopter relay lens: the ophthalmic lens collected light coming from the posterior of the eye and formed an aerial image between the ophthalmic and relay lenses, and the aerial image was captured by the camera through the relay lens. An adequate illumination level was obtained to capture wide-angle fundus images within the ocular safety limits defined by the ISO 15004-2:2007 standard. This novel trans-palpebral illumination approach enables wide-angle fundus photography without eyeball contact or pupil dilation.

  7. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
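
    The dependence on illumination geometry can be made tangible with a toy two-image photometric stereo computation. The sketch below uses a small-slope Lambertian linearization as a simplified stand-in for the authors' SAfS algorithm (the light vectors, constant albedo, and the linearization are all assumptions here); note that the 2x2 system becomes ill-conditioned when the two illumination azimuths are nearly equal, which is one way to see why the azimuthal difference enters the error model.

      import numpy as np

      def slopes_from_two_images(i1, i2, s1, s2, albedo=1.0):
          # Per-pixel surface gradient (p, q) from two co-registered images under
          # known unit illumination vectors s1, s2, using a small-slope Lambertian
          # linearization: I_k ~ albedo * (s_kz - p*s_kx - q*s_ky).
          a = np.array([[-s1[0], -s1[1]],
                        [-s2[0], -s2[1]]], dtype=float)
          b = np.stack([i1 / albedo - s1[2],
                        i2 / albedo - s2[2]])        # shape (2, H, W)
          pq = np.linalg.solve(a, b.reshape(2, -1))  # one 2x2 solve for all pixels
          p, q = pq.reshape(2, *i1.shape)
          return p, q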

  8. Positioning in Time and Space - Cost-Effective Exterior Orientation for Airborne Archaeological Photographs

    NASA Astrophysics Data System (ADS)

    Verhoeven, G.; Wieser, M.; Briese, C.; Doneus, M.

    2013-07-01

    Since manned, airborne aerial reconnaissance for archaeological purposes is often characterised by more-or-less random photographing of archaeological features on the Earth, the exact position and orientation of the camera during image acquisition becomes very important for an effective inventorying and interpretation workflow for these aerial photographs. Although positioning is generally achieved by simultaneously logging the flight path or directly recording the camera's position with a GNSS receiver, this approach does not allow recording of the necessary roll, pitch and yaw angles of the camera. The latter are essential elements of the complete exterior orientation of the camera, which - together with the inner orientation of the camera - accurately defines the portion of the Earth recorded in the photograph. This paper proposes a cost-effective, accurate and precise GNSS/IMU solution (image position: 2.5 m and orientation: 2°, both at 1σ) to record all essential exterior orientation parameters for the direct georeferencing of the images. After introducing the utilised hardware, this paper presents the developed software that allows recording and estimating these parameters; this direct georeferencing information can also be embedded into the image's metadata. Subsequently, the first results of the estimation of the mounting calibration (i.e. the misalignment between the camera and GNSS/IMU coordinate frames) are provided, and a comparison with a dedicated commercial photographic GNSS/IMU solution demonstrates the superiority of the introduced solution. Finally, an outlook on future tests and improvements finalises this article.
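
    The roll, pitch and yaw angles enter direct georeferencing through a rotation matrix applied alongside the recorded camera position. A minimal sketch follows; the Z-Y-X rotation order and axis conventions are assumptions for illustration, since photogrammetric conventions vary.

      import numpy as np

      def attitude_matrix(roll_deg, pitch_deg, yaw_deg):
          # Camera-to-world rotation built as R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
          r, p, y = np.radians([roll_deg, pitch_deg, yaw_deg])
          rx = np.array([[1, 0, 0],
                         [0, np.cos(r), -np.sin(r)],
                         [0, np.sin(r), np.cos(r)]])
          ry = np.array([[np.cos(p), 0, np.sin(p)],
                         [0, 1, 0],
                         [-np.sin(p), 0, np.cos(p)]])
          rz = np.array([[np.cos(y), -np.sin(y), 0],
                         [np.sin(y), np.cos(y), 0],
                         [0, 0, 1]])
          return rz @ ry @ rx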

  9. Still from Processed Movie of Zonal Jets

    NASA Image and Video Library

    2000-11-21

    This image is one frame from a movie clip of cloud motions on Jupiter, from the side of the planet opposite the Great Red Spot. It was taken in the first week of October 2000 by the narrow-angle camera on NASA's Cassini spacecraft.

  10. ARC-1989-A89-7004

    NASA Image and Video Library

    1989-08-19

    Range: 8.6 million kilometers (5.3 million miles). Voyager 2 took this 61-second exposure of Neptune through the clear filter of its narrow-angle camera. The Voyager cameras were programmed to make a systematic search for faint ring arcs and new satellites. The bright upper corner of the image is due to a residual image from a previous long exposure of the planet. The portion of the arc visible here is approximately 35 degrees in longitudinal extent, making it approximately 38,000 kilometers (24,000 miles) in length, and is broken up into three segments separated from each other by approximately 5 degrees. The trailing edge is at the upper right and has an abrupt end, while the leading edge seems to fade into the background more gradually. This arc orbits very close to one of the newly discovered Neptune satellites, 1989N4. Close-up studies of this ring arc will be carried out in the coming days, giving higher spatial resolution at different lighting angles. (JPL Ref: P-34617)

  11. Saskatchewan and Manitoba

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Surface brightness contrasts accentuated by a thin layer of snow enable a network of rivers, roads, and farmland boundaries to stand out clearly in these MISR images of southeastern Saskatchewan and southwestern Manitoba. The lefthand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The righthand image is a multi-angle false-color view made from the red band data of the 60-degree aftward camera, the nadir camera, and the 60-degree forward camera. In each image, the selected channels are displayed as red, green, and blue, respectively. The data were acquired April 17, 2001 during Terra orbit 7083, and cover an area measuring about 285 kilometers x 400 kilometers. North is at the top.
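
    A multi-angle false-color composite of the kind described here amounts to a channel re-assignment followed by a display stretch. A minimal sketch, assuming three co-registered red-band arrays from the aft, nadir, and forward cameras:

      import numpy as np

      def multiangle_composite(aft60, nadir, fwd60):
          # Stack red-band images from the 60-degree aft, nadir, and 60-degree
          # forward cameras into the R, G, B display channels, then rescale
          # to 8 bits for display.
          rgb = np.stack([aft60, nadir, fwd60], axis=-1).astype(float)
          rgb -= rgb.min()
          if rgb.max() > 0:
              rgb /= rgb.max()
          return (255 * rgb).astype(np.uint8)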

    The junction of the Assiniboine and Qu'Appelle Rivers in the bottom part of the images is just east of the Saskatchewan-Manitoba border. During the growing season, the rich, fertile soils in this area support numerous fields of wheat, canola, barley, flaxseed, and rye. Beef cattle are raised in fenced pastures. To the north, the terrain becomes more rocky and forested. Many frozen lakes are visible as white patches in the top right. The narrow linear, north-south trending patterns about a third of the way down from the upper right corner are snow-filled depressions alternating with vegetated ridges, most probably carved by glacial flow.

    In the lefthand image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the righthand image, several forested regions are clearly visible in green hues. Since this is a multi-angle composite, the green arises not from the color of the leaves but from the architecture of the surface cover. Progressing southeastward along the Manitoba Escarpment, the forested areas include the Pasquia Hills, the Porcupine Hills, Duck Mountain Provincial Park, and Riding Mountain National Park. The forests are brighter in the nadir than at the oblique angles, probably because more of the snow-covered surface is visible in the gaps between the trees. In contrast, the valley between the Pasquia and Porcupine Hills near the top of the images appears bright red in the lefthand image (indicating high vegetation abundance) but shows a mauve color in the multi-angle view. This means that it is darker in the nadir than at the oblique angles. Examination of imagery acquired after the snow has melted should establish whether this difference is related to the amount of snow on the surface or is indicative of a different type of vegetation structure.

    Saskatchewan and Manitoba are believed to derive their names from the Cree words for the winding and swift-flowing waters of the Saskatchewan River and for a narrows on Lake Manitoba where the roaring sound of wind and water evoked the voice of the Great Spirit. They are two of Canada's Prairie Provinces; Alberta is the third.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  12. Apollo 8 Mission image, Target of Opportunity (T/O) 10

    NASA Image and Video Library

    1968-12-21

    Apollo 8, Moon, Target of Opportunity (T/O) 10, various targets. Latitude 18 degrees South, Longitude 163.50 degrees West. Camera Tilt Mode: High Oblique. Direction: South. Sun Angle: 12 degrees. Original Film Magazine was labeled E. Camera Data: 70mm Hasselblad; F-Stop: F-5.6; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White, ASA 40. Other Photographic Coverage: Lunar Orbiter 1 (LO I) S-3. Flight Date: December 21-27, 1968.

  13. Hurricane Matthew over Haiti seen by NASA MISR

    NASA Image and Video Library

    2016-10-04

    On the morning of October 4, 2016, Hurricane Matthew passed over the island nation of Haiti. A Category 4 storm, it made landfall around 7 a.m. local time (5 a.m. PDT/8 a.m. EDT) with sustained winds over 145 mph. This is the strongest hurricane to hit Haiti in over 50 years. On October 4, at 10:30 a.m. local time (8:30 a.m. PDT/11:30 a.m. EDT), the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite passed over Hurricane Matthew. This animation was made from MISR images; the swath of MISR's downward-pointing (nadir) camera is 235 miles (378 kilometers) across, much narrower than the massive diameter of Matthew, so only the hurricane's eye and a portion of the storm's right side are visible. Haiti is completely obscured by Matthew's clouds, but part of the Bahamas is visible to the north. Several hot towers are visible within the central part of the storm, and another at the top right of the image. Hot towers are enormous thunderheads that punch through the tropopause (the boundary between the lowest layer of the atmosphere, the troposphere, and the next level, the stratosphere). The rugged topography of Haiti causes uplift within the storm, generating these hot towers and fueling even more rain than Matthew would otherwise dump on the country. MISR has nine cameras fixed at different angles, which capture images of the same point on the ground within about seven minutes. This animation was created by blending images from these nine cameras. The change in angle between the images causes a much larger motion from south to north than actually exists, but the rotation of the storm is real motion. From this animation, you can get an idea of the incredible height of the hot towers, especially the one to the upper right. The counter-clockwise rotation of Matthew around its closed (cloudy) eye is also visible. These data were acquired during Terra orbit 89345. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21070

  14. Design of motion adjusting system for space camera based on ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    Drift angle is the transverse intersection angle of the image-motion vector of a space camera; adjusting this angle reduces its influence on image quality. The ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, and it has many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of a drift adjusting mechanism are presented. The drift adjusting system was designed around the T-60 ultrasonic motor and is composed of the drift adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder, and the drift adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder serves as the sensor of the position closed loop, and a voltage driving circuit acts as the ultrasonic wave generator. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using Matlab modules. To verify the validity of the drift adjusting system, a disturbance source was introduced and a simulation analysis performed. The motor drive control system was designed with improved PID control. The drift angle adjusting system has such advantages as compact size, simple configuration, high position-control precision, fine repeatability, self-locking, and low power consumption. Simulations showed that the system accomplishes the drift-angle adjusting mission well.
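
    The position closed loop described above can be sketched as a conventional discrete PID update; the gains, time step, and drive-voltage interpretation below are illustrative assumptions rather than the paper's tuned controller.

      def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.05, dt=0.001):
          # One update of a discrete PID position loop: the encoder supplies
          # `measured`, and the return value is the drive-voltage command.
          error = setpoint - measured
          state['integral'] += error * dt
          derivative = (error - state['prev_error']) / dt
          state['prev_error'] = error
          return kp * error + ki * state['integral'] + kd * derivative

      # `state` carries the integrator and previous error between encoder samples.
      state = {'integral': 0.0, 'prev_error': 0.0}
      command = pid_step(setpoint=1.25, measured=1.10, state=state)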

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  16. Feasibility study for the application of the large format camera as a payload for the Orbiter program

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The large format camera (LFC), designed as a 30 cm focal length cartographic camera system that employs forward motion compensation to achieve the full image resolution provided by its 80 degree field angle lens, is described. The feasibility of applying the current LFC design to deployment in the Orbiter program as the Orbiter Camera Payload System was assessed, and the changes necessary to meet such a requirement are discussed. The current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free-flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.

  17. Calibration Plans for the Multi-angle Imaging SpectroRadiometer (MISR)

    NASA Astrophysics Data System (ADS)

    Bruegge, C. J.; Duval, V. G.; Chrien, N. L.; Diner, D. J.

    1993-01-01

    The EOS Multi-angle Imaging SpectroRadiometer (MISR) will study the ecology and climate of the Earth through acquisition of global multi-angle imagery. The MISR employs nine discrete cameras, each a push-broom imager. Of these, four point forward, four point aft and one views the nadir. Absolute radiometric calibration will be obtained pre-flight using high quantum efficiency (HQE) detectors and an integrating sphere source. After launch, instrument calibration will be provided using HQE detectors in conjunction with deployable diffuse calibration panels. The panels will be deployed at time intervals of one month and used to direct sunlight into the cameras, filling their fields-of-view and providing through-the-optics calibration. Additional techniques will be utilized to reduce systematic errors, and provide continuity as the methodology changes with time. For example, radiation-resistant photodiodes will also be used to monitor panel radiant exitance. These data will be acquired throughout the five-year mission, to maintain calibration in the latter years when it is expected that the HQE diodes will have degraded. During the mission, it is planned that the MISR will conduct semi-annual ground calibration campaigns, utilizing field measurements and higher resolution sensors (aboard aircraft or in-orbit platforms) to provide a check of the on-board hardware. These ground calibration campaigns are limited in number, but are believed to be the key to the long-term maintenance of MISR radiometric calibration.

  18. Spray Above Enceladus

    NASA Image and Video Library

    2005-11-28

    A fine spray of small, icy particles emanating from the warm, geologically unique province surrounding the south pole of Saturn's moon Enceladus was observed in a Cassini narrow-angle camera image of the crescent moon taken on Jan. 16, 2005. Taken from a high phase angle of 148 degrees -- a viewing geometry in which small particles become much easier to see -- the plume of material becomes more apparent in images processed to enhance faint signals. Imaging scientists have measured the light scattered by the plume's particles to determine their abundance and fall-off with height. Though the measurements of particle abundance are more certain within 100 kilometers (60 miles) of the surface, the values measured there are roughly consistent with the abundance of water ice particles measured by other Cassini instruments (reported in September 2005) at altitudes as high as 400 kilometers (250 miles) above the surface. Imaging scientists, as reported in the journal Science on March 10, 2006, believe that the jets are geysers erupting from pressurized subsurface reservoirs of liquid water above 273 Kelvin (0 degrees Celsius). The image at the left was taken in visible green light. A dark mask was applied to the moon's bright limb in order to make the plume feature easier to see. The image at the right has been color-coded to make faint signals in the plume more apparent. Images of other satellites (such as Tethys and Mimas) taken in the last 10 months from similar lighting and viewing geometries, and with identical camera parameters as this one, were closely examined to demonstrate that the plume towering above Enceladus' south pole is real and not a camera artifact. The images were acquired at a distance of about 209,400 kilometers (130,100 miles) from Enceladus. Image scale is about 1 kilometer (0.6 mile) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA07760

  19. Coordinates of anthropogenic features on the Moon

    NASA Astrophysics Data System (ADS)

    Wagner, R. V.; Nelson, D. M.; Plescia, J. B.; Robinson, M. S.; Speyerer, E. J.; Mazarico, E.

    2017-02-01

    High-resolution images from the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) reveal the landing locations of recent and historic spacecraft and associated impact sites across the lunar surface. Using multiple images of each site acquired between 2009 and 2015, an improved Lunar Reconnaissance Orbiter (LRO) ephemeris, and a temperature-dependent camera orientation model, we derived accurate coordinates (<12 m) for each soft-landed spacecraft, rover, deployed scientific payload, and spacecraft impact crater that we have identified. Accurate coordinates enhance the scientific interpretations of data returned by the surface instruments and of returned samples of the Apollo and Luna sites. In addition, knowledge of the sizes and positions of craters formed as the result of impacting spacecraft provides key benchmarks into the relationship between energy and crater size, as well as calibration points for reanalyzing seismic measurements acquired during the Apollo program. We identified the impact craters for the three spacecraft that impacted the surface during the LRO mission by comparing before and after NAC images.

  20. Coordinates of Anthropogenic Features on the Moon

    NASA Technical Reports Server (NTRS)

    Wagner, R. V.; Nelson, D. M.; Plescia, J. B.; Robinson, M. S.; Speyerer, E. J.; Mazarico, E.

    2016-01-01

    High-resolution images from the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) reveal the landing locations of recent and historic spacecraft and associated impact sites across the lunar surface. Using multiple images of each site acquired between 2009 and 2015, an improved Lunar Reconnaissance Orbiter (LRO) ephemeris, and a temperature-dependent camera orientation model, we derived accurate coordinates (less than 12 meters) for each soft-landed spacecraft, rover, deployed scientific payload, and spacecraft impact crater that we have identified. Accurate coordinates enhance the scientific interpretations of data returned by the surface instruments and of returned samples of the Apollo and Luna sites. In addition, knowledge of the sizes and positions of craters formed as the result of impacting spacecraft provides key benchmarks into the relationship between energy and crater size, as well as calibration points for reanalyzing seismic measurements acquired during the Apollo program. We identified the impact craters for the three spacecraft that impacted the surface during the LRO mission by comparing before and after NAC images.

  1. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel algorithm for motion-blur image restoration based on PSF half-blind estimation with the Hough transform is introduced, built on a full analysis of the TDICCD camera principle; it addresses the image-restoration distortion that arises when the IBD algorithm uses a vertical uniform linear motion estimate as the initial PSF. Firstly, a mathematical model of image degradation is established using a priori information from multi-frame images, and the two parameters that crucially influence PSF estimation (motion-blur length and angle) are set accordingly. Finally, the restored image is obtained through multiple iterations of the PSF estimate in the Fourier domain, starting from the initial value obtained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detail of the original image.
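
    The two estimated parameters, blur length and angle, fully determine a linear motion-blur PSF, after which a frequency-domain restoration can be applied. The sketch below uses a single-pass Wiener filter as a stand-in for the paper's iterative Fourier-domain refinement; the PSF support size and the noise-to-signal constant are assumptions.

      import numpy as np

      def motion_psf(length, angle_deg, size=32):
          # Linear motion-blur PSF from the two estimated parameters:
          # blur length (pixels) and blur angle (degrees).
          psf = np.zeros((size, size))
          c = (size - 1) / 2.0
          t = np.radians(angle_deg)
          for s in np.linspace(-length / 2.0, length / 2.0, 4 * size):
              x, y = int(round(c + s * np.cos(t))), int(round(c + s * np.sin(t)))
              if 0 <= x < size and 0 <= y < size:
                  psf[y, x] = 1.0
          return psf / psf.sum()

      def wiener_restore(blurred, psf, nsr=0.01):
          # Frequency-domain Wiener restoration; nsr is the assumed
          # noise-to-signal power ratio. The PSF is embedded in a full-size
          # array and circularly shifted so its center sits at the origin.
          h = np.zeros(blurred.shape)
          h[:psf.shape[0], :psf.shape[1]] = psf
          h = np.roll(h, (-psf.shape[0] // 2, -psf.shape[1] // 2), axis=(0, 1))
          hf = np.fft.fft2(h)
          g = np.fft.fft2(blurred)
          return np.real(np.fft.ifft2(np.conj(hf) * g / (np.abs(hf) ** 2 + nsr)))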

  2. Development of Dynamic Spatial Video Camera (DSVC) for 4D observation, analysis and modeling of human body locomotion.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hayashibe, Mitsuhiro; Suzuki, Shigeyuki; Otake, Yoshito

    2003-01-01

    We have developed an imaging system for free and quantitative observation of human locomotion in a time-spatial domain by way of real-time imaging. The system is equipped with 60 computer-controlled video cameras to film human locomotion from all angles simultaneously. Images are transferred to the main graphics workstation and arranged into a 2D image matrix. The subject can be observed from any chosen direction by selecting the appropriate view point from the image sequences in this matrix. The system also possesses a function to reconstruct 4D models of the subject's moving body by using the 60 images taken from all directions at one particular time, and it can visualize inner structures such as the skeletal or muscular systems of the subject by compositing computer graphics reconstructed from the MRI data set. We are planning to apply this imaging system to clinical observation in the areas of orthopedics, rehabilitation and sports science.

  3. A multi-cone x-ray imaging Bragg crystal spectrometer

    DOE PAGES

    Bitter, M.; Hill, K. W.; Gao, Lan; ...

    2016-08-26

    This article describes a new x-ray imaging Bragg crystal spectrometer, which, in combination with a streak camera or a gated strip detector, can be used for time-resolved measurements of x-ray line spectra at the National Ignition Facility and other high power laser facilities. The main advantage of this instrument is that it produces perfect images of a point source for each wavelength in a selectable spectral range and that the detector plane can be perpendicular to the crystal surface or inclined by an arbitrary angle with respect to the crystal surface. Furthermore, these unique imaging properties are obtained by bending the x-ray diffracting crystal into a certain shape, which is generated by arranging multiple cones with different aperture angles on a common nodal line.

  4. A Glimpse of Atlas

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Saturn's little moon Atlas orbits Saturn between the outer edge of the A ring and the fascinating, twisted F ring. This image just barely resolves the disk of Atlas, and also shows some of the knotted structure for which the F ring is known. Atlas is 32 kilometers (20 miles) across.

    The bright outer edge of the A ring is overexposed here, but farther down the image several bright ring features can be seen.

    The image was taken in visible light with the Cassini spacecraft narrow-angle camera on April 25, 2005, at a distance of approximately 2.4 million kilometers (1.5 million miles) from Atlas and at a Sun-Atlas-spacecraft, or phase, angle of 60 degrees. Resolution in the original image was 14 kilometers (9 miles) per pixel.

  5. A 3D gantry single photon emission tomograph with hemispherical coverage for dedicated breast imaging

    NASA Astrophysics Data System (ADS)

    Tornai, Martin P.; Bowsher, James E.; Archer, Caryl N.; Peter, Jörg; Jaszczak, Ronald J.; MacDonald, Lawrence R.; Patt, Bradley E.; Iwanczyk, Jan S.

    2003-01-01

    A novel tomographic gantry was designed, built and initially evaluated for single photon emission imaging of metabolically active lesions in the pendant breast and near chest wall. Initial emission imaging measurements with breast lesions of various uptake ratios are presented. Methods: A prototype tomograph was constructed utilizing a compact gamma camera having a field-of-view of <13×13 cm², with arrays of 2×2×6 mm³ quantized NaI(Tl) scintillators coupled to position-sensitive PMTs. The camera was mounted on a radially oriented support with a 6 cm variable radius-of-rotation. This unit is further mounted on a goniometric cradle providing polar motion, in turn mounted on an azimuthal rotation stage capable of indefinite rotation about the central vertical rotation axis (RA). Initial measurements with Tc-99m (140 keV) to evaluate the system included acquisitions at various polar tilt angles about the RA. Tomographic measurements were made of a frequency and resolution cold-rod phantom filled with aqueous Tc-99m. Tomographic and planar measurements of 0.6 and 1.0 cm diameter fillable spheres in an available ~950 ml hemi-ellipsoidal (uncompressed) breast phantom attached to a life-size anthropomorphic torso phantom with lesion:breast-and-body:cardiac-and-liver activity concentration ratios of 11:1:19 were compared. Various photopeak energy windows of 10-30% widths were obtained, along with a 35% scatter window below a 15% photopeak window, from the list-mode data. Projections with all photopeak window and camera tilt conditions were reconstructed with an ordered subsets expectation maximization (OSEM) algorithm capable of reconstructing arbitrary tomographic orbits. Results: As iteration number increased for the tomographically measured data at all polar angles, contrasts increased while signal-to-noise ratios (SNRs) decreased in the expected way with OSEM reconstruction. The rollover between contrast improvement and SNR degradation of the lesion occurred at two to three iterations. The reconstructed tomographic data yielded SNRs, with or without scatter correction, that were >9 times better than the planar scans. There was up to a factor of ~2.5 increase in total primary and scatter contamination in the photopeak window with increasing tilt angle from 15° to 45°, consistent with a more direct line-of-sight to myocardial and liver activity at increased camera polar angle. Conclusion: This new, ultra-compact, dedicated tomographic imaging system has the potential of providing valuable, fully 3D functional information about small, otherwise indeterminate breast lesions as an adjunct to diagnostic mammography.
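
    The OSEM reconstruction referred to above iterates a multiplicative update over ordered subsets of the projection bins. A generic textbook sketch for a Poisson model y ~ A x is given below (a dense system matrix and uniform initialization are assumptions; the authors' arbitrary-orbit implementation is more involved):

      import numpy as np

      def osem(y, a, n_iter=3, n_subsets=4):
          # Ordered-subsets EM for y ~ Poisson(a @ x); a is the system matrix
          # (n_bins x n_voxels) and y the measured projection data.
          x = np.ones(a.shape[1])
          for _ in range(n_iter):
              for s in np.array_split(np.arange(a.shape[0]), n_subsets):
                  ratio = y[s] / np.maximum(a[s] @ x, 1e-12)
                  x = x * (a[s].T @ ratio) / np.maximum(a[s].sum(axis=0), 1e-12)
          return x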

  6. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
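
    The near-field requirement quoted above follows from the Fraunhofer distance 2*D^2/lambda. A quick numerical check of the figures in the text, assuming a 550 nm visible wavelength and a 3.67 m AEOS-class aperture (both assumptions, not values stated in the abstract):

      def near_field_km(aperture_m, wavelength_nm=550.0):
          # Fraunhofer (near-field) distance 2*D**2/lambda, in kilometers.
          return 2.0 * aperture_m ** 2 / (wavelength_nm * 1e-9) / 1000.0

      print(near_field_km(1.0))    # ~3,600 km for a 1-m telescope
      print(near_field_km(3.67))   # ~49,000 km for a 3.67-m AEOS-class aperture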

  7. Plume propagation direction determination with SO2 cameras

    NASA Astrophysics Data System (ADS)

    Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich

    2017-03-01

    SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements are time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This is a source of error which is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known location of the SO2 source (i.e. volcanic vent) and camera position, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes. In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
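
    The distance dependence enters the flux calculation linearly, because each pixel's physical width is the camera-plume distance times its angular extent. A minimal sketch of a single-column flux estimate (the variable names and the simplified formula are illustrative, not the authors' processing chain):

      import numpy as np

      def so2_flux(column_densities, pixel_ifov_rad, distance_m, plume_speed_m_s):
          # Flux through one image column: column densities (molecules/m^2)
          # summed over the physical pixel width (distance * IFOV) and
          # multiplied by the plume speed. The result scales linearly with the
          # camera-plume distance, which is why a wrong propagation direction
          # (and hence a wrong distance) can bias fluxes by tens of percent.
          pixel_width_m = distance_m * pixel_ifov_rad
          return np.sum(column_densities) * pixel_width_m * plume_speed_m_s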

  8. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; 3D projection is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.

  9. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.
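
    The 4.77-pixel figure can be sanity-checked with small-angle arithmetic; the implied per-pixel angular scale printed below is an inference from the quoted numbers, not a published instrument specification.

      EARTH_DIAMETER_KM = 12742.0
      range_km = 1.17e6                                    # about 727,000 miles
      ang_diam_mrad = EARTH_DIAMETER_KM / range_km * 1e3   # small-angle approximation
      print(ang_diam_mrad)         # ~10.9 mrad apparent diameter of Earth
      print(ang_diam_mrad / 4.77)  # implied pixel scale, ~2.3 mrad per pixel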

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data from the camera were being transmitted in real time to the Deep Space Network antennas in Goldstone, California.

  10. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    NASA Technical Reports Server (NTRS)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  11. Afocal viewport optics for underwater imaging

    NASA Astrophysics Data System (ADS)

    Slater, Dan

    2014-09-01

    A conventional camera can be adapted for underwater use by enclosing it in a sealed waterproof pressure housing with a viewport. The viewport, as an optical interface between water and air needs to consider both the camera and water optical characteristics while also providing a high pressure water seal. Limited hydrospace visibility drives a need for wide angle viewports. Practical optical interfaces between seawater and air vary from simple flat plate windows to complex water contact lenses. This paper first provides a brief overview of the physical and optical properties of the ocean environment along with suitable optical materials. This is followed by a discussion of the characteristics of various afocal underwater viewport types including flat windows, domes and the Ivanoff corrector lens, a derivative of a Galilean wide angle camera adapter. Several new and interesting optical designs derived from the Ivanoff corrector lens are presented including a pair of very compact afocal viewport lenses that are compatible with both in water and in air environments and an afocal underwater hyper-hemispherical fisheye lens.

  12. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  13. PIA01492

    NASA Image and Video Library

    1998-10-30

    This picture of Neptune was produced from the last whole planet images taken through the green and orange filters on NASA's Voyager 2 narrow angle camera. The images were taken at a range of 4.4 million miles from the planet, 4 days and 20 hours before closest approach. The picture shows the Great Dark Spot and its companion bright smudge; on the west limb the fast moving bright feature called Scooter and the little dark spot are visible. These clouds were seen to persist for as long as Voyager's cameras could resolve them. North of these, a bright cloud band similar to the south polar streak may be seen. http://photojournal.jpl.nasa.gov/catalog/PIA01492

  14. Optical image of a cometary nucleus: 1980 flyby of Comet Encke

    NASA Technical Reports Server (NTRS)

    Wells, W. C.; Benson, R. S.; Anderson, A. D.; Gal, G.

    1974-01-01

    The feasibility of obtaining optical images of a cometary nucleus via a flyby of Comet Encke was investigated. A physical model of the dust cloud surrounding the nucleus was developed using available physical data and theoretical knowledge of cometary physics. Using this model and a Mie scattering code, calculations were made of the absolute surface brightness of the dust in the line of sight of the on-board camera and the relative surface brightness of the dust compared to the nucleus. The brightness was calculated as a function of heliocentric distance and for different phase angles (Sun-comet-spacecraft angle).

  15. An Astrometric Observation of Binary Star System WDS 15559-0210 at the Great Basin Observatory

    NASA Astrophysics Data System (ADS)

    Musegades, Lila; Niebuhr, Cole; Graham, Mackenzie; Poore, Andrew; Freed, Rachel; Kenney, John; Genet, Russell

    2018-04-01

    Researchers at Concordia University Irvine measured the position angle and separation of the double star system WDS 15559-0210 using a SBIG STX-16803 CCD camera on the PlaneWave 0.7-m CDK 700 telescope at the Great Basin Observatory. Images of the binary star system were measured using AstroImageJ software. Twenty observations of WDS 15559-0210 were measured and analyzed. The calculated mean resulted in a position angle of 345.95° and a separation of 5.94". These measurements were consistent with the previous values for this binary system listed in the Washington Double Star Catalog.
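
    Position angle and separation follow directly from the calibrated pixel offset between the two components. A minimal sketch, assuming the frame has been rotated so that +y points north and +x points east and that the plate scale is known:

      import math

      def position_angle_separation(dx_east_px, dy_north_px, plate_scale_arcsec):
          # Separation (arcsec) and position angle (degrees, measured from
          # north through east) of the secondary relative to the primary.
          sep = plate_scale_arcsec * math.hypot(dx_east_px, dy_north_px)
          pa = math.degrees(math.atan2(dx_east_px, dy_north_px)) % 360.0
          return pa, sep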

  16. A simulation of orientation dependent, global changes in camera sensitivity in ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bieszk, J.A.; Hawman, E.G.; Malmin, R.E.

    1984-01-01

    ECT promises the abilities to: 1) observe radioisotope distributions in a patient without the contrast reduction caused by summation of overlying activity, and 2) measure these distributions quantitatively to assess organ function further and more accurately. Ideally, camera-based ECT systems should have a performance that is independent of camera orientation or gantry angle. This study is concerned with ECT quantitation errors that can arise from angle-dependent variations of camera sensitivity. Using simulated phantoms representative of heart and liver sections, the effects of sensitivity changes on reconstructed images were assessed both visually and quantitatively based on ROI sums. The sinogram for each test image was simulated with 128 linear samples and 180 angular views. The global orientation-dependent sensitivity was modelled by applying an angular sensitivity dependence to the sinograms of the test images. Four sensitivity variations were studied: amplitudes of 0% (as a reference), 5%, 10%, and 25% with a cos θ dependence, as well as a cos 2θ dependence with a 5% amplitude. Simulations were done with and without Poisson noise to: 1) determine trends in the quantitative effects as a function of the magnitude of the variation, and 2) see how these effects are manifested in studies having statistics comparable to clinical cases. For the most realistic sensitivity variation (cos θ, 5% amplitude), the ROIs chosen in the present work indicated changes of <0.5% in the noiseless case and <5% for the case with Poisson noise. The effects of statistics appear to dominate any effects due to global, sinusoidal, orientation-dependent sensitivity changes in the cases studied.
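
    The orientation-dependent sensitivity studied here can be imposed on a simulated sinogram by scaling each angular view. A minimal sketch covering the cos θ and cos 2θ cases (the array layout and 180-degree angular span are assumptions):

      import numpy as np

      def modulate_sinogram(sino, amplitude=0.05, harmonic=1):
          # Scale each angular view by 1 + amplitude*cos(harmonic*theta) to
          # model a global orientation-dependent camera sensitivity;
          # sino has shape (n_views, n_bins) over 180 degrees of views.
          theta = np.linspace(0.0, np.pi, sino.shape[0], endpoint=False)
          gain = 1.0 + amplitude * np.cos(harmonic * theta)
          return sino * gain[:, None]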

  17. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    PubMed

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.

  18. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins

    PubMed Central

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N. Samba; Gopalaswamy, Arjun M.; Karanth, K. Ullas

    2009-01-01

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin. PMID:19324633

  19. NASA MISR Tracks Growth of Rift in the Larsen C Ice Shelf

    NASA Image and Video Library

    2017-04-11

    A rift in Antarctica's Larsen C ice shelf has grown to 110 miles (175 km) long, making it inevitable that an iceberg larger than Rhode Island will soon calve from the ice shelf. Larsen C is the fourth largest ice shelf in Antarctica, with an area of almost 20,000 square miles (50,000 square kilometers). The calving event will remove approximately 10 percent of the ice shelf's mass, according to the Project for Impact of Melt on Ice Shelf Dynamics and Stability (MIDAS), a UK-based team studying the ice shelf. Only 12 miles (20 km) of ice now separates the end of the rift from the ocean. The rift has grown at least 30 miles (50 km) in length since August, but appears to be slowing recently as Antarctica returns to polar winter. Project MIDAS reports that the calving event might destabilize the ice shelf, which could result in a collapse similar to what occurred to the Larsen B ice shelf in 2002. The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite captured views of Larsen C on August 22, 2016, when the rift was 80 miles (130 km) in length; December 8, 2016, when the rift was approximately 90 miles (145 km) long; and April 6, 2017. The MISR instrument has nine cameras, which view the Earth at different angles. The overview image, from December 8, shows the entire Antarctic Peninsula -- home to Larsen A, B, and C ice shelves -- in natural color (similar to how it would appear to the human eye) from MISR's vertical-viewing camera. Combining information from several MISR cameras pointed at different angles gives information about the texture of the ice. The accompanying GIF depicts the inset area shown on the larger image and displays data from all three dates in false color. These multiangular views -- composited from MISR's 46-degree backward-pointing camera, the nadir (vertical-viewing) camera, and the 46-degree forward-pointing camera -- represent variations in ice texture as changes in color, such that areas of rough ice appear orange and smooth ice appears blue. The Larsen C shelf is on the left in the GIF, bordered by the Weddell Sea on the upper right. The ice within the rift is orange, indicating movement, and the end of the rift can be tracked across the shelf between images. In addition, between December and April, the rift widened, pushing the future iceberg away from the shelf at its southern end. These data were acquired during Terra orbits 88717, 90290 and 92023. https://photojournal.jpl.nasa.gov/catalog/PIA21581

  20. 13. 22"x34" original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. 22"x34" original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK DETAILS' drawn at 1/4"=1'-0" (BUORD Sketch # 208078, PAPW 908). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  1. 10. 22"x34" original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. 22"x34" original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2"=1'-0". (BUORD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  2. Light field analysis and its applications in adaptive optics and surveillance systems

    NASA Astrophysics Data System (ADS)

    Eslami, Mohammed Ali

    An image can only be as good as the optics of a camera or any other imaging system allows it to be. An imaging system is merely a transformation that takes 3D world coordinates to a 2D image plane; this can be done through both linear and non-linear transfer functions. Depending on the application at hand, it is easier to use some models of imaging systems over others in certain situations. The most well-known models are 1) the pinhole model, 2) the thin lens model and 3) the thick lens model for optical systems. Using light-field analysis, the connection between these different models is described, and a novel figure of merit is presented for choosing one optical model over another for a given application. After analyzing these optical systems, their applications in plenoptic cameras for adaptive optics are introduced. A new technique to use a plenoptic camera to extract information about a localized distorted planar wavefront is described. CODE V simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system to track a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image-processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the camera is moved to force the target back into the region of interest. Once the master camera has moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras; the homography that relates pixels of the master camera to pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets as they move through space with high accuracy.
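
    The encoder-based master-slave recalibration described above reduces to interpolating a pre-calibrated lookup table. A minimal sketch with a hypothetical four-point table (the table values and the SciPy interpolation choice are illustrative assumptions, not the thesis implementation):

      import numpy as np
      from scipy.interpolate import griddata

      # Hypothetical calibration table: master (pan, tilt) in degrees mapped
      # to the slave (pan, tilt) that centers the same target.
      MASTER = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      SLAVE = np.array([[1.2, -0.4], [11.5, -0.3], [1.0, 9.8], [11.4, 9.9]])

      def slave_pan_tilt(master_pan, master_tilt):
          # Linearly interpolate the lookup table at the current master
          # orientation (valid inside the calibrated region).
          pan = griddata(MASTER, SLAVE[:, 0], (master_pan, master_tilt), method='linear')
          tilt = griddata(MASTER, SLAVE[:, 1], (master_pan, master_tilt), method='linear')
          return float(pan), float(tilt)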

  3. Measuring high-resolution sky luminance distributions with a CCD camera.

    PubMed

    Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

    2013-03-10

    We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance was calculated from the high dynamic range images and validated against luminance data measured by a CCD array spectroradiometer. The deviation between the two datasets is less than 10% for cloudless and completely overcast skies, and no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with high spatial and temporal resolution: more than a million pixels, sampled every 20 s.
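
    With a nearly equidistant projection, radius from the optical center is proportional to zenith angle, which makes the pixel-to-sky mapping a two-line computation. A minimal sketch (the image center, horizon radius, and azimuth zero-point convention are assumptions):

      import numpy as np

      def pixel_to_sky(px, py, cx, cy, r_horizon):
          # Equidistant fisheye: radius from the optical center (cx, cy) is
          # proportional to zenith angle, with r_horizon the radius at a
          # 90-degree zenith angle (the horizon).
          dx, dy = px - cx, py - cy
          zenith = 90.0 * np.hypot(dx, dy) / r_horizon
          azimuth = np.degrees(np.arctan2(dx, -dy)) % 360.0
          return zenith, azimuth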

  4. Small Wonders

    NASA Image and Video Library

    2017-06-28

    This montage of views from NASA's Cassini spacecraft shows three of Saturn's small ring moons: Atlas, Daphnis and Pan at the same scale for ease of comparison. Two differences between Atlas and Pan are obvious in this montage. Pan's equatorial band is much thinner and more sharply defined, and the central mass of Atlas (the part underneath the smooth equatorial band) appears to be smaller than that of Pan. Images of Atlas and Pan taken using infrared, green and ultraviolet spectral filters were combined to create enhanced-color views, which highlight subtle color differences across the moons' surfaces at wavelengths not visible to human eyes. (The Daphnis image was colored using the same green filter image for all three color channels, adjusted to have a realistic appearance next to the other two moons.) All of these images were taken using the Cassini spacecraft narrow-angle camera. The images of Atlas were acquired on April 12, 2017, at a distance of 10,000 miles (16,000 kilometers) and at a sun-moon-spacecraft angle (or phase angle) of 37 degrees. The images of Pan were taken on March 7, 2017, at a distance of 16,000 miles (26,000 kilometers) and a phase angle of 21 degrees. The Daphnis image was obtained on Jan. 16, 2017, at a distance of 17,000 miles (28,000 kilometers) and at a phase angle of 71 degrees. All images are oriented so that north is up. A monochrome version is available at https://photojournal.jpl.nasa.gov/catalog/PIA21449

  5. Variance-reduction normalization technique for a Compton camera system

    NASA Astrophysics Data System (ADS)

    Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.

    2011-01-01

    For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in a Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing these variations. A variance-reduction technique (VRT) for normalizing a Compton camera was studied. For the VRT, Compton list-mode data of a planar uniform source of 140 keV were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered-subset expectation-maximization algorithm. The coefficients of variation and percent errors of the 3D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.
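
    As a rough illustration of the normalization step described above, the sketch below forms normalization coefficients as a product of per-detector efficiency and geometric factors and divides the measured pair counts by them; all arrays, sizes, and factor models are invented for illustration.

        import numpy as np

        # Hedged sketch of the normalization step (names and sizes are illustrative).
        # Each detected position pair (i, j) gets a normalization coefficient that is
        # a product of per-detector efficiency and geometric factors.
        eff_scatter = np.random.uniform(0.8, 1.2, size=64)        # scattering-detector efficiencies
        eff_absorb = np.random.uniform(0.8, 1.2, size=64)         # absorbing-detector efficiencies
        geom = np.random.uniform(0.9, 1.1, size=(64, 64))         # incidence-angle/geometry factor

        norm_coeff = eff_scatter[:, None] * eff_absorb[None, :] * geom

        def normalize(projections):
            """Divide measured pair counts by the normalization coefficients."""
            return projections / norm_coeff

        measured = np.random.poisson(100, size=(64, 64)).astype(float)
        corrected = normalize(measured)   # then fed to OSEM reconstruction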

  6. Low-speed flowfield characterization by infrared measurements of surface temperatures

    NASA Technical Reports Server (NTRS)

    Gartenberg, E.; Roberts, A. S., Jr.; Mcree, G. J.

    1989-01-01

    An experimental program was aimed at identifying areas of low-speed aerodynamic research where infrared imaging systems can make significant contributions. Implementing a new technique, a long, electrically heated wire was placed across a laminar jet. By measuring the temperature distribution along the wire with the IR imaging camera, the flow behavior was identified; furthermore, using Nusselt number correlations, the velocity distribution could be deduced. The same approach was used to survey wakes behind cylinders in a wind tunnel. This method is suited to investigating flows with position-dependent velocities, e.g., boundary layers, confined flows, jets, wakes, and shear layers. It was found that the IR imaging camera cannot accurately track high-gradient temperature fields. A correlation procedure was devised to account for this limitation. Other wind-tunnel experiments included tracking the development of the laminar boundary layer over a warmed flat plate by measuring the chordwise temperature distribution. This technique was also applied to the flow downstream of a rearward-facing step. Finally, the IR imaging system was used to study boundary layer behavior over an airfoil at angles of attack from zero up to separation. The results were confirmed with tufts observable both visually and with the IR imaging camera.
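
    A minimal sketch of the heated-wire velocimetry idea, under assumed constants: the measured wire temperature yields a convection coefficient, then a Nusselt number, and, by inverting a forced-convection correlation of the form Nu = C·Re^m (Hilpert-type constants assumed here, not the paper's correlation), a local velocity.

        import numpy as np

        # All constants are illustrative assumptions.
        d = 1e-4          # wire diameter [m]
        k_air = 0.026     # air thermal conductivity [W/m/K]
        nu_air = 1.5e-5   # air kinematic viscosity [m^2/s]
        q_per_len = 30.0  # electrical heating per unit length [W/m]
        C, m = 0.683, 0.466  # Hilpert-type constants for Re ~ 40-4000 (assumed regime)

        def velocity_from_wire_temp(T_wire, T_flow):
            h = q_per_len / (np.pi * d * (T_wire - T_flow))  # convection coefficient
            Nu = h * d / k_air
            Re = (Nu / C) ** (1.0 / m)                       # invert Nu = C * Re**m
            return Re * nu_air / d                           # local velocity [m/s]

        print(velocity_from_wire_temp(T_wire=350.0, T_flow=300.0))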

  7. The role of mental rotation and memory scanning on the performance of laparoscopic skills: a study on the effect of camera rotational angle.

    PubMed

    Conrad, J; Shah, A H; Divino, C M; Schluender, S; Gurland, B; Shlasko, E; Szold, A

    2006-03-01

    The rotational angle of the laparoscopic image relative to the true horizon has an unknown influence on performance in laparoscopic procedures. This study evaluates the effect of increasing rotational angle on surgical performance. Surgical residents (group 1, n = 6) and attending surgeons (group 2, n = 4) were tested on two laparoscopic skills: passing a suture through an aperture, and laparoscopic knot tying. These tasks were assessed at 15-degree intervals between 0 degrees and 90 degrees, over three consecutive repetitions. Each participant's performance was evaluated by the time required to complete the tasks and the number of errors incurred. Suturing performance deteriorated progressively as the image rotation increased. Participants showed a statistically significant 20-120% progressive increase in time to completion of the tasks (p = 0.004), with error rates increasing from 10% to 30% (p = 0.04) as the angle increased from 0 degrees to 90 degrees. Knot-tying performance similarly declined, an effect evident in the less experienced surgeons (p = 0.02) but with no obvious effect on the advanced laparoscopic surgeons. When evaluated independently and as a group, both novice and experienced laparoscopic surgeons showed significantly prolonged completion of suturing tasks, with increased errors, as the rotational angle increased. The knot-tying task shows that experienced surgeons may be able to overcome rotational effects to some extent. This is consistent with results from cognitive neuroscience research evaluating the processing of directional information in spatial motor tasks; it appears that these tasks engage the time-consuming processes of mental rotation and memory scanning. Optimal performance during laparoscopic procedures requires that the rotation of the camera, and thus the image, be kept to a minimum to maintain a stable horizon. New technology that corrects the rotational angle may benefit the surgeon, decrease operating time, and help prevent adverse outcomes.

  8. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotoku, J; Nakabayashi, S; Kumagai, S

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D model was reconstructed by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multiple-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128).
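
    A minimal sketch of the pair-matching step described in the Methods, using OpenCV's SIFT features and RANSAC-based fundamental matrix estimation; the image file names are placeholders and the thresholds are conventional defaults rather than the authors' settings.

        import cv2
        import numpy as np

        # SIFT features, descriptor matching, then the fundamental matrix
        # via the eight-point algorithm wrapped in RANSAC.
        img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
        img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                   if m.distance < 0.75 * n.distance]          # Lowe ratio test

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # 'mask' flags the inlier correspondences found by RANSAC.
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)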

  9. UAV-Borne Low-Altitude Photogrammetry System

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Su, G.; Xie, F.

    2012-07-01

    In this paper, the three major aspects of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system, and the data processing software, are discussed. First, according to the technical requirements on minimum cruising speed, shortest taxiing distance, flight-control capability, and performance in turbulent flight, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters, and unmanned airships) are compared and analyzed. Secondly, considering the restrictions on platform payload weight and sensor resolution, together with the exposure equation and the theory of optical information, emphasis is placed on the principles of designing self-calibrating, self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, a software package named MAP-AT, which accounts for the particularities of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts particular effort into automatic extraction, automatic checking, and operator-assisted addition of tie points for images with large tilt angles. Based on the process for low-altitude UAV photogrammetry recommended in this paper, more than ten aerial photogrammetry missions have been accomplished; the accuracies of the aerial triangulation, digital orthophotos (DOM), and digital line graphs (DLG) meet the standard requirements for 1:2000, 1:1000 and 1:500 mapping.

  10. Impact Site: Cassini's Final Image

    NASA Image and Video Library

    2017-09-15

    This monochrome view is the last image taken by the imaging cameras on NASA's Cassini spacecraft. It looks toward the planet's night side, lit by reflected light from the rings, and shows the location at which the spacecraft would enter the planet's atmosphere hours later. A natural color view, created using images taken with red, green and blue spectral filters, is also provided (Figure 1). The imaging cameras obtained this view at approximately the same time that Cassini's visual and infrared mapping spectrometer made its own observations of the impact area in the thermal infrared. This location -- the site of Cassini's atmospheric entry -- was at this time on the night side of the planet, but would rotate into daylight by the time Cassini made its final dive into Saturn's upper atmosphere, ending its remarkable 13-year exploration of Saturn. The view was acquired on Sept. 14, 2017 at 19:59 UTC (spacecraft event time). The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 394,000 miles (634,000 kilometers) from Saturn. Image scale is about 11 miles (17 kilometers) per pixel. The original image has a size of 512x512 pixels. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21895

  11. Limbus Impact on Off-angle Iris Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    The accuracy of iris recognition depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. Off-angle iris recognition is a new research focus in biometrics that tries to address several issues including corneal refraction, complex 3D iris texture, and blur. In this paper, we present an additional significant challenge that degrades the performance of off-angle iris recognition systems, called the limbus effect. The limbus is the region at the border of the cornea where the cornea joins the sclera. The limbus is a semitransparent tissue that occludes a side portion of the iris plane. The amount of occluded iris texture on the side nearest the camera increases as the image acquisition angle increases. Without considering the role of the limbus effect, it is difficult to design an accurate off-angle iris recognition system. To the best of our knowledge, this is the first work that investigates the limbus effect in detail from a biometrics perspective. Based on results from real images and simulated experiments with real iris texture, the limbus effect increases the Hamming distance score between frontal and off-angle iris images by 0.05 to 0.2, depending upon the limbus height.
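
    For context, the Hamming distance score referred to above is typically the fraction of disagreeing iris-code bits among bits valid in both codes; occluded regions (eyelids, eyelashes, or the limbus) are excluded via binary masks. A minimal sketch with synthetic codes standing in for real iris data:

        import numpy as np

        def hamming_distance(code_a, code_b, mask_a, mask_b):
            """Fractional Hamming distance over bits valid in both codes."""
            valid = mask_a & mask_b                 # bits usable in both codes
            disagree = (code_a ^ code_b) & valid
            return disagree.sum() / valid.sum()

        rng = np.random.default_rng(0)
        code_f = rng.integers(0, 2, 2048, dtype=np.uint8)      # "frontal" iris code
        code_o = code_f.copy()
        code_o[rng.random(2048) < 0.1] ^= 1                    # ~10% of bits flipped off-angle
        mask = np.ones(2048, dtype=np.uint8)
        print(hamming_distance(code_f, code_o, mask, mask))    # ~0.1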

  12. Oil Fire Plumes Over Baghdad

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment.

    The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image.
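
    A minimal sketch of how such a red/cyan anaglyph can be assembled, assuming two co-registered red-band images (the arrays below are random placeholders, not MISR data):

        import numpy as np

        # Red-band data from the nadir camera go into the red channel; red-band
        # data from the oblique camera into green and blue.
        nadir_red = np.random.rand(512, 512)     # stand-in for nadir-camera red band
        oblique_red = np.random.rand(512, 512)   # stand-in for 60-degree-camera red band

        anaglyph = np.dstack([nadir_red, oblique_red, oblique_red])  # R, G, B
        # Viewed through red/blue glasses (red over the left eye), elevated
        # features such as smoke plumes appear to stand above the terrain.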

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory,Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  13. Lunar Reconnaissance Orbiter Data Enable Science and Terrain Analysis of Potential Landing Sites in South Pole-Aitken Basin

    NASA Astrophysics Data System (ADS)

    Jolliff, B. L.

    2017-12-01

    Exploring the South Pole-Aitken basin (SPA), one of the key unsampled geologic terranes on the Moon, is a high priority for Solar System science. As the largest and oldest recognizable impact basin on the Moon, it anchors the heavy bombardment chronology. It is thus a key target for sample return to better understand the impact flux in the Solar System between the formation of the Moon and 3.9 Ga, when Imbrium, one of the last of the great lunar impact basins, formed. Exploration of SPA has implications for understanding early habitable environments on the terrestrial planets. Global mineralogical and compositional data exist from the Clementine UV-VIS camera, the Lunar Prospector Gamma Ray Spectrometer, the Moon Mineralogy Mapper (M3) on Chandrayaan-1, the Chang'E-1 Imaging Interferometer, the spectral suite on SELENE, and the Lunar Reconnaissance Orbiter Cameras (LROC) Wide Angle Camera (WAC) and Diviner thermal radiometer. Integration of data sets enables synergistic assessment of geology and the distribution of units across multiple spatial scales. Mineralogical assessment using hyperspectral data indicates spatial relationships with mineralogical signatures, e.g., central peaks of complex craters, consistent with inferred SPA basin structure and melt differentiation (Moriarty & Pieters, 2015, JGR-P 118). Delineation of mare, cryptomare, and nonmare surfaces is key to interpreting compositional mixing in the formation of SPA regolith, to interpreting remotely sensed data, and to scientific assessment of landing sites. LROC Narrow Angle Camera (NAC) images show the location and distribution of >0.5 m boulders and fresh craters that constitute the main threats to automated landers and thus provide critical information for landing site assessment and planning. NAC images suitable for geometric stereo derivation, the digital terrain models derived from them (controlled with Lunar Orbiter Laser Altimeter (LOLA) data), and oblique NAC images made with large slews of the spacecraft are crucial to both scientific and landing-site assessments. These images, however, require favorable illumination and significant spacecraft resources, and thus make up only a small percentage of all the images taken. It is essential for future exploration to support LRO's continued operation to obtain these critical datasets.

  14. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. That is, the position of the cameras relative to each other (i.e., separation distance, camera angles, etc.) must be determined. Typically, the cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitoring large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for accurate 3D-DIC calibration and measurements.
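
    A rough sketch of the underlying idea, assuming each camera carries an IMU reporting its orientation and the radar supplies the baseline distance; all values and the assumed baseline direction are invented for illustration, not the authors' calibration procedure.

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        # Orientations reported by the two IMUs (illustrative Euler angles).
        q_cam1 = R.from_euler("xyz", [0.0, 0.0, 0.0], degrees=True)    # IMU on camera 1
        q_cam2 = R.from_euler("xyz", [0.0, -12.0, 0.0], degrees=True)  # IMU on camera 2
        baseline = 2.35                                                # radar distance [m]

        # Relative rotation between the two cameras plus a translation along the
        # (assumed) baseline direction give the stereo extrinsic matrix [R | t].
        R_rel = (q_cam2.inv() * q_cam1).as_matrix()
        t = np.array([baseline, 0.0, 0.0])      # baseline assumed along camera-1 x-axis
        extrinsic = np.hstack([R_rel, t[:, None]])
        print(extrinsic)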

  15. A two-angle far-field microscope imaging technique for spray flows

    NASA Astrophysics Data System (ADS)

    Kourmatzis, Agisilaos; Pham, Phuong X.; Masri, Assaad R.

    2017-03-01

    Backlight imaging is frequently used for the visualization of multiphase flows, where, with appropriate microscope lenses, quantitative information on the spray structure can be attained. However, a key issue resides in the nature of the measurement, which relies on a single viewing angle, preventing imaging of all liquid structures and features, such as those located behind other fragments. This paper presents results from an extensive experimental study intended as a step toward resolving this problem by using a pair of high-speed cameras oriented at 90 degrees to each other and synchronized to two high-speed diode lasers. Both cameras are used with long-distance microscope lenses. The images are processed as pairs, allowing identification and classification of the same liquid structure from two perspectives at high temporal (5 kHz) and spatial (∼3 μm) resolution. Using a controlled mono-disperse spray, simultaneous, time-resolved visualization of the same spherical object being focused on one plane while de-focused on the other plane 90 degrees to the first has allowed a quantification of the shot-to-shot defocused size measurement error. An extensive error analysis is performed for spheroidal structures imaged from two angles, and the dual-angle technique is extended to measure the volume of non-spherical fragments for the first time by 'discretising' a fragment into a number of constituent ellipses. Error analysis based on measuring the known volumes of solid arbitrary shapes found volume estimates to be within ∼11% of the real volume for representative 'ligament-like' shapes. The contribution concludes by applying the ellipsoidal method to a real spray consisting of multiple non-spherical fragments. This extended approach clearly demonstrates the potential to yield novel volume-weighted quantities of non-spherical objects in turbulent multiphase flow applications.
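
    The 'constituent ellipses' volume estimate can be illustrated with a few lines of code: each slice contributes an elliptical slab whose semi-axes come from the two orthogonal views. The sanity check below uses a synthetic sphere, not the paper's spray data.

        import numpy as np

        def fragment_volume(a, b, dz):
            """Sum elliptical slabs: V = sum(pi * a_i * b_i * dz).

            a, b: per-slice semi-axes [m] seen from the two views; dz: slice thickness [m].
            """
            return np.sum(np.pi * np.asarray(a) * np.asarray(b) * dz)

        # Sanity check on a sphere of radius 1 mm, sliced at 1 um resolution:
        z = np.arange(-1e-3, 1e-3, 1e-6)
        r = np.sqrt(np.clip(1e-6 - z**2, 0.0, None))   # r(z) for a sphere, R^2 = 1e-6
        print(fragment_volume(r, r, 1e-6))             # ~ 4/3*pi*(1e-3)^3 = 4.19e-9 m^3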

  16. Spectral methods to detect cometary minerals with OSIRIS on board Rosetta

    NASA Astrophysics Data System (ADS)

    Oklay, N.; Vincent, J.-B.; Sierks, H.

    2013-09-01

    Comet 67P/Churyumov-Gerasimenko will be observed by the OSIRIS scientific imager (Keller et al. 2007) on board ESA's Rosetta spacecraft with a combination of 12 filters in the wavelength range of 250-1000 nm for the narrow angle camera (NAC) and 14 filters in the wavelength range of 240-720 nm for the wide angle camera (WAC). The NAC filters are suited to surface composition studies, while the WAC filters are designed for gas and radical emission studies. In order to investigate the composition of the comet surface from the observed images, we need to understand how to detect different minerals and which compositional information can be derived from the NAC filters. Therefore, the most common cometary silicates, e.g., enstatite and forsterite, are investigated together with two hydrated silicates (serpentine and smectite) to determine suitable spectral methods. Laboratory data for the selected minerals are collected from the RELAB database (http://www.planetary.brown.edu/relabdocs/relab.htm), and absolute spectra of the minerals as observed through the OSIRIS NAC filters are calculated. Due to the limited spectral range of the laboratory data, the far-UV and neutral-density filters of the NAC are excluded from this analysis. The NAC filters considered in this study are listed in Table 1, and the amount of collected laboratory data is presented in Table 2. Detection and separation of the minerals will allow us not only to study the surface composition but also to study composition changes due to cometary activity during the mission.

  17. Improved high-throughput quantification of luminescent microplate assays using a common Western-blot imaging system.

    PubMed

    Hawkins, Liam J; Storey, Kenneth B

    2017-01-01

    Common Western-blot imaging systems have previously been adapted to measure signals from luminescent microplate assays. This can be a cost-saving measure, as Western-blot imaging systems are common laboratory equipment and can substitute for a dedicated luminometer if one is not otherwise available. One previously unrecognized limitation is that the signals captured by the cameras in these systems are not equal for all wells. Signals depend on the angle of incidence to the camera, and thus on the location of the well on the microplate. Here we show that:

    • The position of a well on a microplate significantly affects the signal captured from a luminescent assay by a common Western-blot imaging system.

    • The effect of well position can easily be corrected for (see the sketch below).

    • This method can be applied to commercially available luminescent assays, allowing high-throughput quantification of a wide range of biological processes and biochemical reactions.
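
    One plausible form of the correction (a sketch under assumptions, not necessarily the authors' exact procedure) is a per-well flat-field: image a uniform reference plate once and divide subsequent assay signals by the normalized reference response.

        import numpy as np

        # Illustrative 8x12 (96-well) arrays; real data would come from the imager.
        rng = np.random.default_rng(1)
        flat = rng.uniform(0.7, 1.0, size=(8, 12))   # signal from a uniform reference plate
        flat /= flat.mean()                          # normalize so the correction is unitless

        raw = rng.uniform(0, 1e5, size=(8, 12))      # raw assay signals
        corrected = raw / flat                       # position-corrected signals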

  18. Colorful Saturn, Getting Closer

    NASA Image and Video Library

    2004-06-03

    As Cassini coasts into the final month of its nearly seven-year trek, the serene majesty of its destination looms ahead. The spacecraft's cameras are functioning beautifully and continue to return stunning views from Cassini's position, 1.2 billion kilometers (750 million miles) from Earth and now 15.7 million kilometers (9.8 million miles) from Saturn. In this narrow angle camera image from May 21, 2004, the ringed planet displays subtle, multi-hued atmospheric bands, colored by yet undetermined compounds. Cassini mission scientists hope to determine the exact composition of this material. This image also offers a preview of the detailed survey Cassini will conduct on the planet's dazzling rings. Slight differences in color denote both differences in ring particle composition and light scattering properties. Images taken through blue, green and red filters were combined to create this natural color view. The image scale is 132 kilometers (82 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA06060

  19. Viking Imaging of Phobos and Deimos: An Overview of the Primary Mission

    NASA Technical Reports Server (NTRS)

    Duxbury, T. C.; Veverka, J.

    1977-01-01

    During the Viking primary mission the cameras on the two orbiters acquired about 50 pictures of the two Martian moons. The Viking images of the satellites have a higher surface resolution than those obtained by Mariner 9. The typical surface resolution achieved was 100-200 m, although detail as small as 40 m was imaged on Phobos during a particularly close passage. Attention is given to color sequences obtained for each satellite, aspects of phase angle coverage, and pictures for ephemeris improvement.

  20. Hologram production and representation for corrected image

    NASA Astrophysics Data System (ADS)

    Jiao, Gui Chao; Zhang, Rui; Su, Xue Mei

    2015-12-01

    In this paper, a CCD sensor is used to record distorted images of a homemade grid taken by a wide-angle camera. The distorted images are corrected using position calibration and gray-level correction methods implemented with Visual C++ 6.0 and the OpenCV library. Holograms of the corrected pictures are then produced. Clearly reconstructed images are obtained by using the Fresnel algorithm in the processing, in which the object and reference light are propagated by Fresnel diffraction so that the zero-order part of the reconstructed images can be removed. The investigation is useful in optical information processing and image encryption transmission.
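
    A minimal sketch of the distortion-correction step using OpenCV, which the paper reports using; the intrinsic matrix, distortion coefficients, and file names below are placeholders, since the paper's calibration values are not given.

        import cv2
        import numpy as np

        # Placeholder intrinsics and radial/tangential distortion coefficients;
        # in practice these come from calibrating against the grid images.
        K = np.array([[600.0, 0.0, 320.0],
                      [0.0, 600.0, 240.0],
                      [0.0,   0.0,   1.0]])
        dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3 (illustrative)

        distorted = cv2.imread("grid_image.png")        # placeholder file name
        corrected = cv2.undistort(distorted, K, dist)
        cv2.imwrite("grid_corrected.png", corrected)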

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, K; Barbarits, J; Humenik, R

    Purpose: Chang's mathematical formulation is a common method of attenuation correction applied to reconstructed Jaszczak phantom images. Though Chang's attenuation correction method has been used for 360° acquisitions, its applicability to 180° acquisitions remains in question, with one vendor's camera software producing artifacts. The objective of this work is to ensure that Chang's attenuation correction technique can be applied to reconstructed Jaszczak phantom images acquired in both 360° and 180° modes. Methods: The Jaszczak phantom, filled with 20 mCi of diluted Tc-99m, was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (n = 1) dual-head gamma cameras, centered in both the lateral and axial directions. A total of 3 scans were done in 180° and 2 scans in 360° orbit acquisition modes. Thirty-two million counts were acquired for both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with a pre-reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation-corrected by Chang's technique as implemented in the camera software. Corrections were also done using a modified technique in which the photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor; the inverse of the attenuation factor was used to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360°-acquired phantom images attenuation-corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference in uniformity for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial. Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis showed improved uniformity with the proposed algorithm compared to the camera software.
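
    For illustration, a sketch of a first-order Chang-type correction for a uniform circular phantom: each pixel's counts are divided by the attenuation factor averaged over the acquired angles. The geometry and constants are simplified assumptions, not the vendor's or the authors' implementation.

        import numpy as np

        mu = 0.15            # attenuation coefficient of water at 140 keV [1/cm]
        radius = 10.0        # phantom radius [cm]

        def chang_factor(x, y, angles_deg):
            """Mean attenuation factor exp(-mu * l) at pixel (x, y) over the angles."""
            factors = []
            for a in np.radians(angles_deg):
                # path length from (x, y) to the phantom edge along direction a
                dx, dy = np.cos(a), np.sin(a)
                b = x * dx + y * dy
                l = -b + np.sqrt(b**2 + (radius**2 - x**2 - y**2))
                factors.append(np.exp(-mu * l))
            return np.mean(factors)

        angles_180 = np.arange(0, 180, 3)     # 180-degree orbit
        angles_360 = np.arange(0, 360, 3)     # 360-degree orbit
        corrected = 100.0 / chang_factor(2.0, 3.0, angles_180)  # corrected pixel counts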

  2. Controlled photomosaic map of Callisto JC 15M CMN

    USGS Publications Warehouse

    ,

    2002-01-01

    This sheet is one in a series of maps of the Galilean satellites of Jupiter at a nominal scale of 1:15,000,000. This series is based on data from the Galileo Orbiter Solid-State Imaging (SSI) camera and the cameras of the Voyager 1 and 2 spacecraft. Mercator and Polar Stereographic projections used for this map of Callisto are based on a sphere having a radius of 2,409.3 km. The scale is 1:8,388,000 at ±56° latitude for both projections. Longitude increases to the west in accordance with the International Astronomical Union (1971) (Seidelmann and others, 2002).

    The geometric control network was computed at the RAND Corporation using RAND's most recent solution as of April 1999 (Davies and Katayama, 1981; Davies and others, 1998). This process involved selecting control points on the individual images, making pixel measurements of their locations, using reseau locations to correct for geometric distortions, and converting the measurements to millimeters in the focal plane. These data are combined with the camera focal lengths and navigation solutions as input to photogrammetric triangulation software that solves for the best-fit sphere, the coordinates of the control points, the three orientation angles of the camera at each exposure (right ascension, declination, and twist), and an angle (W0) which defines the orientation of Callisto in space. W0, 259.51° in this solution, is the angle along the equator to the east, between the 0° meridian and the equator's intersection with the celestial equator at the standard epoch J2000.0. This solution places the crater Saga at its defined longitude of 326° west (Seidelmann and others, 2002).

    This global map base uses the best image quality and moderate resolution coverage supplied by Galileo SSI and Voyager 1 and 2 (Batson, 1987; Becker and others, 1998; Becker and others, 1999; Becker and others, 2001). The digital map was produced using Integrated Software for Imagers and Spectrometers (ISIS) (Eliason, 1997; Gaddis and others, 1997; Torson and Becker, 1997). The individual images were radiometrically calibrated and photometrically normalized using a Lunar-Lambert function with empirically derived values (McEwen, 1991; Kirk and others, 2000). A linear correction based on the statistics of all overlapping areas was then applied to minimize image brightness variations. The image data were selected on the basis of overall image quality, reasonable original input resolution (from 20 km/pixel for gap fill to as much as 150 m/pixel), and availability of moderate emission/incidence angles for topography. Although consistency was achieved where possible, different filters were included for global image coverage as necessary: clear for Voyager 1 and 2; clear and green (559 nm) for Galileo SSI. Individual images were projected to a Sinusoidal Equal-Area projection at an image resolution of 1.0 kilometer/pixel. The final constructed Sinusoidal projection mosaic was then reprojected to the Mercator and Polar Stereographic projections included on this sheet. The final mosaic was enhanced using commercial software.

    Names on this sheet are approved by the International Astronomical Union. Names have been applied for features clearly visible at the scale of this map; for a complete list of nomenclature for Callisto, please see the Gazetteer of Planetary Nomenclature. Font color was chosen only for readability.

  3. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.

    2010-11-01

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear-index "bubble" in fused silica glass, supplementing a conventional Frequency-Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (temporal multiplexing and angular multiplexing) improve data storage and processing capability, enabling a compact FDT system with a single spectrometer.

  4. Mechanism controller system for the optical spectroscopic and infrared remote imaging system instrument on board the Rosetta space mission

    NASA Astrophysics Data System (ADS)

    Castro Marín, J. M.; Brown, V. J. G.; López Jiménez, A. C.; Rodríguez Gómez, J.; Rodrigo, R.

    2001-05-01

    The optical, spectroscopic, and infrared remote imaging system (OSIRIS) is an instrument carried on board the European Space Agency spacecraft Rosetta, which will be launched in January 2003 to study comet Wirtanen in situ. The electronic design of the mechanism controller board (MCB) system of the two OSIRIS optical cameras, the narrow angle camera and the wide angle camera, is described here. The system comprises two boards mounted on an aluminum frame as part of an electronics box that contains the power supply and the digital processor unit of the instrument. The mechanisms controlled by the MCB for each camera are the front door assembly and a filter wheel assembly. The front door assembly for each camera is driven by a four-phase, permanent-magnet stepper motor. Each filter wheel assembly consists of two wheels, each carrying eight filters, and each wheel is driven by a four-phase, variable-reluctance stepper motor. Every motor in all the assemblies also contains a redundant set of four stator phase windings that can be energized separately or in parallel with the main windings. All stepper motors are driven in both directions using the full-step unipolar mode of operation. The MCB also performs general housekeeping data acquisition for the OSIRIS instrument, i.e., mechanism position encoders and temperature measurements. The electronic design is notable for its use of field-programmable gate array (FPGA) devices, which avoid the now-traditional approach of a system controlled by microcontrollers and software. Electrical tests of the engineering model have been performed successfully, and the system is ready for space qualification after environmental testing. This system may be of interest to institutions involved in future space experiments with similar mechanism control needs.

  5. DIY digital holography

    NASA Astrophysics Data System (ADS)

    Zacharovas, Stanislovas; Nikolskij, Andrej; Kuchin, Jevgenij

    2011-02-01

    We have created a programming tool that uses image data provided by a webcam connected to a personal computer and gives the user the ability to preview the future digital hologram on the computer screen before sending the video data to holographic printing companies. In order to print a digital hologram, one needs a sequence of images of the same scene taken from different angles, and nowadays web cameras, stand-alone or incorporated into a mobile computer, can be an acceptable source of such image sequences. In this article we describe this DIY holographic imaging process in detail.
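
    A minimal sketch of acquiring such a multi-view image sequence with OpenCV (not the authors' tool); the frame count and pacing are arbitrary placeholders, and the camera or scene is assumed to move slightly between frames.

        import cv2

        cap = cv2.VideoCapture(0)              # first attached webcam
        frames_needed = 36                     # e.g., one frame per viewing angle
        saved = 0
        while saved < frames_needed:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(f"hologram_view_{saved:02d}.png", frame)
            saved += 1
            cv2.waitKey(250)                   # crude pacing between views
        cap.release()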

  6. Smoke from Fires in Southern Mexico

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On May 2, 2002, numerous fires in southern Mexico sent smoke drifting northward over the Gulf of Mexico. These views from the Multi-angle Imaging SpectroRadiometer illustrate the smoke extent over parts of the Gulf and the southern Mexican states of Tabasco, Campeche and Chiapas. At the same time, dozens of other fires were also burning in the Yucatan Peninsula and across Central America. A similar situation occurred in May and June of 1998, when Central American fires resulted in air quality warnings for several U.S. States.

    The image on the left is a natural color view acquired by MISR's vertical-viewing (nadir) camera. Smoke is visible, but sunglint in some ocean areas makes detection difficult. The middle image, on the other hand, is a natural color view acquired by MISR's 70-degree backward-viewing camera; its oblique view angle simultaneously suppresses sunglint and enhances the smoke. A map of aerosol optical depth, a measurement of the abundance of atmospheric particulates, is provided on the right. This quantity is retrieved using an automated computer algorithm that takes advantage of MISR's multi-angle capability. Areas where no retrieval occurred are shown in black.

    The images each represent an area of about 380 kilometers x 1550 kilometers and were captured during Terra orbit 12616.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  7. New NASA Images of Irma's Towering Clouds

    NASA Image and Video Library

    2017-09-08

    On Sept. 7, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite passed over Hurricane Irma at approximately 11:20 a.m. local time. The MISR instrument comprises nine cameras that view the Earth at different angles, and since it takes roughly seven minutes for all nine cameras to capture the same location, the motion of the clouds between images allows scientists to calculate the wind speed at the cloud tops. The animated GIF shows Irma's motion over the seven minutes of the MISR imagery. North is toward the top of the image. This composite image shows Hurricane Irma as viewed by the central, downward-looking camera (left), as well as the wind speeds (right) superimposed on the image. The length of the arrows is proportional to the wind speed, while their color shows the altitude at which the winds were calculated. At the time the image was acquired, Irma's eye was located approximately 60 miles (100 kilometers) north of the Dominican Republic and 140 miles (230 kilometers) north of its capital, Santo Domingo. Irma was a powerful Category 5 hurricane, with wind speeds at the ocean surface up to 185 miles (300 kilometers) per hour, according to the National Oceanic and Atmospheric Administration. The MISR data show that at cloud top, winds near the eye wall (the most destructive part of the storm) were approximately 90 miles per hour (145 kilometers per hour), and the maximum cloud-top wind speed throughout the storm calculated by MISR was 135 miles per hour (220 kilometers per hour). While the hurricane's dominant rotation direction is counter-clockwise, winds near the eye wall are consistently pointing outward from it. This is an indication of outflow, the process by which a hurricane draws in warm, moist air at the surface and ejects cool, dry air at its cloud tops. These data were captured during Terra orbit 94267. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21946
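
    The cloud-motion wind retrieval reduces, in its simplest form, to displacement over time. A back-of-the-envelope version with an assumed feature displacement:

        # A cloud feature displaced d kilometers over the ~7 minutes separating
        # the first and last MISR camera views moves at d / (7/60) km/h.
        # The displacement value is illustrative, not measured MISR data.
        dt_hours = 7.0 / 60.0
        displacement_km = 17.0                       # assumed feature displacement
        wind_kmh = displacement_km / dt_hours        # ~146 km/h, i.e. ~90 mph
        print(round(wind_kmh))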

  8. Summer Harvest in Saratov, Russia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Russia's Saratov Oblast (province) is located in the southeastern portion of the East-European plain, in the Lower Volga River Valley. Southern Russia produces roughly 40 percent of the country's total agricultural output, and Saratov Oblast is the largest producer of grain in the Volga region. Vegetation changes in the province's agricultural lands between spring and summer are apparent in these images acquired on May 31 and July 18, 2002 (upper and lower image panels, respectively) by the Multi-angle Imaging SpectroRadiometer (MISR).

    The left-hand panels are natural color views acquired by MISR's vertical-viewing (nadir) camera. Less vegetation and more earth tones (indicative of bare soils) are apparent in the summer image (lower left). Farmers in the region utilize staggered sowing to help stabilize yields, and a number of different stages of crop maturity can be observed. The main crop is spring wheat, cultivated under non-irrigated conditions. A short growing season and relatively low and variable rainfall are the major limitations to production. Saratov city is apparent as the light gray pixels on the left (west) bank of the Volga River. Riparian vegetation along the Volga exhibits dark green hues, with some new growth appearing in summer.

    The right-hand panels are multi-angle composites created with red band data from MISR's 60-degree backward, nadir and 60-degree forward-viewing cameras displayed as red, green and blue respectively. In these images, color variations serve as a proxy for changes in angular reflectance, and the spring and summer views were processed identically to preserve relative variations in brightness between the two dates. Urban areas and vegetation along the Volga banks look similar in the two seasonal multi-angle composites. The agricultural areas, on the other hand, look strikingly different. This can be attributed to differences in brightness and texture between bare soil and vegetated land. The chestnut-colored soils in this region are brighter in MISR's red band than the vegetation. Because plants have vertical structure, the oblique cameras observe a greater proportion of vegetation relative to the nadir camera, which sees more soil. In spring, therefore, the scene is brightest in the vertical view and thus appears with an overall greenish hue. In summer, the soil characteristics play a greater role in governing the appearance of the scene, and the angular reflectance is now brighter at the oblique view angles (displayed as red and blue), thus imparting a pink color to much of the farmland and a purple color to areas along the banks of several narrow rivers. The unusual appearance of the clouds is due to geometric parallax which splits the imagery into spatially separated components as a consequence of their elevation above the surface.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days. These images are a portion of the data acquired during Terra orbits 13033 and 13732, and cover an area of about 173 kilometers x 171 kilometers. They utilize data from blocks 49 to 50 within World Reference System-2 path 170.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  9. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    NASA Astrophysics Data System (ADS)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view-angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that improves feature matching performance by exploiting the inherent geometric properties of organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. First, to overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are used to properly initialize the nonlinear optimization in the intensity-based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view-angle changes than other state-of-the-art methods.
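
    A TPS model of the kind mentioned above can be sketched as a smooth interpolant of control-point displacements. The snippet below uses SciPy's thin-plate-spline radial basis function on synthetic points; it illustrates the warp model only, not the authors' tracking pipeline.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Synthetic matched control points standing in for real feature matches.
        rng = np.random.default_rng(2)
        src = rng.uniform(0, 100, size=(20, 2))          # points in the reference image
        dst = src + rng.normal(0, 2.0, size=(20, 2))     # matched points in the deformed image

        # Thin-plate-spline interpolation of the 2D displacement field.
        tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

        query = np.array([[50.0, 50.0], [10.0, 80.0]])
        warped = query + tps(query)                      # where those pixels map to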

  10. SU-E-J-17: A Study of Accelerator-Induced Cerenkov Radiation as a Beam Diagnostic and Dosimetry Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bateman, F; Tosh, R

    2014-06-01

    Purpose: To investigate accelerator-induced Cerenkov radiation imaging as a possible beam diagnostic and medical dosimetry tool. Methods: Cerenkov emission produced by clinical accelerator beams in a water phantom was imaged using a camera system comprising a high-sensitivity, thermoelectrically-cooled CCD camera coupled to a large-aperture (f/0.75) objective lens with 16:1 magnification. This large-format lens allows a significant amount of the available Cerenkov light to be collected and focused onto the CCD camera to form the image. Preliminary images, obtained with 6 MV photon beams, used an unshielded camera mounted horizontally with the beam normal to the water surface, and confirmed the detection of Cerenkov radiation. Several improvements were subsequently made, including the addition of radiation shielding around the camera and alteration of the beam and camera angles to give a more favorable geometry for Cerenkov light collection. A detailed study was then undertaken over a range of electron and photon beam energies and dose rates to investigate the possibility of using this technique for beam diagnostics and dosimetry. Results: A series of images was obtained at a fixed dose rate over a range of electron energies from 6 to 20 MeV. The location of maximum intensity was found to vary linearly with the energy of the beam. A linear relationship was also found between the light observed from a fixed point on the central axis and the dose rate for both photon and electron beams. Conclusion: We have found that the analysis of images of beam-induced Cerenkov light in a water phantom has potential for use as a beam diagnostic and medical dosimetry tool. Our future goals include the calibration of the light output in terms of radiation dose and development of a tomographic system for 3D Cerenkov imaging in water phantoms and other media.

  11. The ExoMars PanCam Instrument

    NASA Astrophysics Data System (ADS)

    Griffiths, Andrew; Coates, Andrew; Muller, Jan-Peter; Jaumann, Ralf; Josset, Jean-Luc; Paar, Gerhard; Barnes, David

    2010-05-01

    The ExoMars mission has evolved into a joint European-US mission to deliver a trace gas orbiter and a pair of rovers to Mars in 2016 and 2018, respectively. The European rover will carry the Pasteur exobiology payload, including the 1.56 kg Panoramic Camera (PanCam). PanCam will provide multispectral stereo images with 34 deg horizontal field-of-view Wide-Angle Cameras (WAC, 580 microrad/pixel) and colour monoscopic "zoom" images with a 5 deg horizontal field-of-view High Resolution Camera (HRC, 83 microrad/pixel). The stereo Wide-Angle Cameras are based on Beagle 2 Stereo Camera System heritage [1]. Integrated with the WACs and HRC into the PanCam optical bench (which helps the instrument meet its planetary protection requirements) is the PanCam interface unit (PIU), which provides image storage, a SpaceWire interface to the rover, and DC-DC power conversion. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission [2] as well as providing multispectral geological imaging, colour and stereo panoramic images, and solar images for water vapour abundance and dust optical depth measurements. The High Resolution Camera can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls. Additionally, HRC will be used to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. In short, PanCam provides the overview and context for the ExoMars experiment locations, required to enable the exobiology aims of the mission. In addition to these baseline capabilities, further enhancements to PanCam are possible to increase its effectiveness for astrobiology and planetary exploration: 1. a Rover Inspection Mirror (RIM); 2. Organics Detection by Fluorescence Excitation (ODFE) LEDs [3-6]; 3. a UVIS broadband UV Flux and Opacity Determination (UVFOD) photodiode. This paper will discuss the scientific objectives and resource impacts of these enhancements.

    References: 1. Griffiths, A.D., Coates, A.J., Josset, J.-L., Paar, G., Hofmann, B., Pullan, D., Ruffer, P., Sims, M.R., Pillinger, C.T., The Beagle 2 stereo camera system, Planet. Space Sci. 53, 1466-1488, 2005. 2. Paar, G., Oberst, J., Barnes, D.P., Griffiths, A.D., Jaumann, R., Coates, A.J., Muller, J.P., Gao, Y., Li, R., Requirements and Solutions for ExoMars Rover Panoramic Camera 3D Vision Processing, abstract submitted to EGU meeting, Vienna, 2007. 3. Storrie-Lombardi, M.C., Hug, W.F., McDonald, G.D., Tsapin, A.I., and Nealson, K.H. 2001. Hollow cathode ion lasers for deep ultraviolet Raman spectroscopy and fluorescence imaging. Rev. Sci. Ins., 72 (12), 4452-4459. 4. Nealson, K.H., Tsapin, A., and Storrie-Lombardi, M. 2002. Searching for life in the universe: unconventional methods for an unconventional problem. International Microbiology, 5, 223-230. 5. Mormile, M.R. and Storrie-Lombardi, M.C. 2005. The use of ultraviolet excitation of native fluorescence for identifying biomarkers in halite crystals. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 246-253. 6. Storrie-Lombardi, M.C. 2005. Post-Bayesian strategies to optimize astrobiology instrument suites: lessons from Antarctica and the Pilbara. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 288-301.

  12. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body with a 14.8-megapixel CMOS sensor. We use a 50 mm focal-length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate the corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  13. Nicaraguan Volcanoes, 26 February 2000

    NASA Image and Video Library

    2000-04-19

    The true-color image at left is a downward-looking (nadir) view of the area around the San Cristobal volcano, which erupted the previous day. This image is oriented with east at the top and north at the left. The right image is a stereo anaglyph of the same area, created from red-band multi-angle data taken by the 45.6-degree and 70.5-degree aftward-viewing cameras on the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. View this image through red/blue 3D glasses, with the red filter over the left eye. A plume from San Cristobal (approximately at image center) is much easier to see in the anaglyph, due to three effects: the long viewing path through the atmosphere at the oblique angles, the reduced reflection from the underlying water, and the 3D stereoscopic height separation. In this image, the plume floats between the surface and the overlying cumulus clouds. A second plume is also visible in the upper right (southeast of San Cristobal). This very thin plume may originate from the Masaya volcano, which is continually degassing at a low rate. The spatial resolution is 275 meters (300 yards). http://photojournal.jpl.nasa.gov/catalog/PIA02600

  14. Ten-Meter Scale Topography and Roughness of Mars Exploration Rovers Landing Sites and Martian Polar Regions

    NASA Technical Reports Server (NTRS)

    Ivanov, Anton B.

    2003-01-01

    The Mars Orbiter Camera (MOC) has been operating on board the Mars Global Surveyor (MGS) spacecraft since 1998. It consists of three cameras: red and blue Wide Angle cameras (FOV = 140 deg) and a Narrow Angle camera (FOV = 0.44 deg). The Wide Angle cameras allow surface resolution down to 230 m/pixel, and the Narrow Angle camera down to 1.5 m/pixel. This work is a continuation of a project we have reported on previously. Since then we have refined and improved our stereo correlation algorithm and have processed many more stereo pairs. We will discuss results of our stereo pair analysis for the Mars Exploration Rover (MER) landing sites and address the feasibility of recovering topography from stereo pairs (especially in the polar regions) taken during the MGS 'Relay-16' mode.

  15. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    NASA Astrophysics Data System (ADS)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching, image-processing and voice-recognition-based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker, and a microphone. The camera, attached to the computer, is mounted on the ceiling at the required angle opposite the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators, and shapes, are stored in a database. A blind child first reads an embossed character (object) with the fingers, then speaks the answer, the name of the character, its shape, etc. into the microphone. On the child's voice command, received by the microphone, an image is taken by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self-education of a visually impaired child. A speech recognition program, which records and processes the child's commands, is also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes.

  16. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency, and flexibility. One possible usage is the documentation and visualization of historic geo-structures and geo-objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility of obtaining close-range aerial photographs but, when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions, two single aerial images do not always achieve the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the stereo camera's limited photobase and the resulting base-to-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.
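
    The altitude dependence noted in the last sentence follows from the standard stereo depth-error relation, roughly sigma_Z = (Z/B)(Z/f)·sigma_px. A worked example with assumed values:

        # All numbers are illustrative assumptions, not the paper's configuration.
        Z = 50.0          # flying height above ground [m]
        B = 0.20          # stereo camera photobase [m]
        f_px = 2000.0     # focal length expressed in pixels
        sigma_px = 0.5    # image matching accuracy [pixels]

        sigma_Z = (Z / B) * (Z / f_px) * sigma_px
        print(f"expected height error: {sigma_Z:.2f} m")   # grows with the square of Z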

  17. Aram and Iani Chaos

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-344, 28 April 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image mosaic was constructed from data acquired by the MOC red wide angle camera. The large, circular feature in the upper left is Aram Chaos, an ancient impact crater filled with layered sedimentary rock that was later disrupted and eroded to form a blocky, 'chaotic' appearance. To the southeast of Aram Chaos, in the lower right of this picture, is Iani Chaos. The light-toned patches amid the large blocks of Iani Chaos are known from higher-resolution MOC images to be layered, sedimentary rock outcrops. The picture center is near 0.5°N, 20°W. Sunlight illuminates the scene from the left/upper left.

  18. Microwave transient analyzer

    DOEpatents

    Gallegos, C.H.; Ogle, J.W.; Stokes, J.L.

    1992-11-24

    A method and apparatus for capturing and recording indications of the frequency content of electromagnetic signals and radiation is disclosed, including a laser light source and a Bragg cell for deflecting a light beam at a plurality of deflection angles dependent upon the frequency content of the signal. A streak camera and a microchannel-plate intensifier are used to project the Bragg cell output onto either photographic film or a charge-coupled device (CCD) imager. Timing markers are provided by a comb generator and a one-shot generator, the outputs of which are also routed through the streak camera onto the film or the CCD imager. Using the inventive method, the full range of the output of the Bragg cell can be recorded as a function of time. 5 figs.
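
    The frequency-to-angle mapping a Bragg cell provides can be approximated with the small-angle Bragg relation, deflection ~ lambda * f / v. The sketch below is a generic illustration with assumed laser and material values, not figures from the patent.

        import math

        def bragg_deflection_rad(wavelength_m, rf_hz, acoustic_velocity_m_s):
            """Approximate full deflection angle (2*theta_B) for small angles."""
            return wavelength_m * rf_hz / acoustic_velocity_m_s

        lam = 633e-9    # HeNe laser wavelength (assumed)
        v = 4200.0      # longitudinal acoustic velocity in TeO2, roughly (assumed)
        for f in (0.5e9, 1.0e9, 2.0e9):   # RF frequencies in Hz
            print(f, math.degrees(bragg_deflection_rad(lam, f, v)))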

  19. Painted Lines on an Ornament

    NASA Image and Video Library

    2013-12-23

    The globe of Saturn, seen here in natural color, is reminiscent of a holiday ornament in this wide-angle view from NASA's Cassini spacecraft. The characteristic hexagonal shape of Saturn's northern jet stream, somewhat yellow here, is visible. At the pole lies a Saturnian version of a high-speed hurricane, eye and all. This view is centered on terrain at 75 degrees north latitude, 120 degrees west longitude. Images taken using red, green and blue spectral filters were combined to create this natural-color view. The images were taken with the Cassini spacecraft wide-angle camera on July 22, 2013. This view was acquired at a distance of approximately 611,000 miles (984,000 kilometers) from Saturn. Image scale is 51 miles (82 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA17175

  20. Dreamy Swirls on Saturn

    NASA Image and Video Library

    2017-09-12

    NASA's Cassini spacecraft gazed toward the northern hemisphere of Saturn to spy subtle, multi-hued bands in the clouds there. This view looks toward the terminator -- the dividing line between night and day -- at lower left. The sun shines at low angles along this boundary, in places highlighting vertical structure in the clouds. Some vertical relief is apparent in this view, with higher clouds casting shadows over those at lower altitude. Images taken with the Cassini spacecraft narrow-angle camera using red, green and blue spectral filters were combined to create this natural-color view. The images were acquired on Aug. 31, 2017, at a distance of approximately 700,000 miles (1.1 million kilometers) from Saturn. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21888

  1. System and method for generating motion corrected tomographic images

    DOEpatents

    Gleason, Shaun S. [Knoxville, TN]; Goddard, James S., Jr.

    2012-05-01

    A method and related system for generating motion-corrected tomographic images includes the steps of illuminating a region of interest (ROI) to be imaged, the ROI being part of an unrestrained live subject and having at least three spaced-apart optical markers thereon. Simultaneous images of the markers are acquired from different angles by a first and a second camera. Motion data comprising the 3D position and orientation of the markers relative to an initial reference position are then calculated. Motion-corrected tomographic data are then obtained from the ROI using the motion data, and motion-corrected tomographic images are generated therefrom.
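
    A standard way to compute the 3D position-and-orientation motion data described above is a least-squares rigid fit to the tracked marker coordinates. The sketch below uses the SVD-based Kabsch method, a generic approach consistent with, though not necessarily identical to, the patented system; the marker coordinates are toy values.

        import numpy as np

        def rigid_transform(ref, cur):
            """Least-squares rotation R and translation t mapping reference
            marker positions to current ones. ref, cur: (N, 3), N >= 3."""
            rc, cc = ref.mean(axis=0), cur.mean(axis=0)
            H = (ref - rc).T @ (cur - cc)              # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cc - R @ rc

        ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
        cur = ref + np.array([0.0, 0.0, 2.5])          # pure axial shift (toy)
        R, t = rigid_transform(ref, cur)
        print(np.round(R, 3), np.round(t, 3))          # identity rotation, z-shift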

  2. A Warping Framework for Wide-Angle Imaging and Perspective Manipulation

    ERIC Educational Resources Information Center

    Carroll, Robert E.

    2013-01-01

    Nearly all photographs are created with lenses that approximate an ideal pinhole camera--that is, a perspective projection. This projection has proven useful not only for creating realistic depictions, but also for its expressive flexibility. Beginning in the Renaissance, the notion of perspective gave artists a systematic way to represent…

  3. Remote Sensing of Clouds for Solar Forecasting Applications

    NASA Astrophysics Data System (ADS)

    Mejia, Felipe

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based Sky Imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a Radiative Transfer Model (RTM). From these images, the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (SZA), τc, solar pixel angle/scattering angle (SPA), and pixel zenith angle/view angle (PZA). The effects of these parameters are described, and the functions for radiance, I_λ(τc, SZA, SPA, PZA), and for the red-blue ratio, RBR(τc, SZA, SPA, PZA), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance I_λ^meas(SPA, PZA), in addition to RBR^meas(SPA, PZA), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and against the output of the Min method [MH96a] for overcast skies. τc values ranged from 0 to 80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min method [MH96b] and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms. Using the RRBR τc estimate as an input, we then explore the potential of tomographic techniques for 3-D cloud reconstruction. The Algebraic Reconstruction Technique (ART) is applied to optical depth maps from sky images to reconstruct 3-D cloud extinction coefficients. Reconstruction accuracy is explored for different products, including surface irradiance, extinction coefficients (k), and liquid water path, as a function of the number of available sky imagers (SIs) and their setup distance. Increasing the number of cameras improves the accuracy of the 3-D reconstruction: for surface irradiance, the error decreases significantly up to four imagers, at which point the improvements become marginal, while the k error continues to decrease with more cameras. The ideal distance between imagers was also explored: for a cloud height of 1 km, increasing the distance up to 3 km (the domain length) improved the 3-D reconstruction of surface irradiance, while the k error continued to decrease with increasing distance. An iterative reconstruction technique was also used to improve the results of the ART by minimizing the error between input images and reconstructed simulations. For the best case of a nine-imager deployment, the ART and the iterative method resulted in 53.4% and 33.6% mean average error (MAE) for the extinction coefficients, respectively. The tomographic methods were then tested on real-world cases at the University of California, San Diego (UCSD) solar testbed. Five UCSD sky imagers (USIs) were installed across the testbed based on the best-performing distances in simulations. Topographic obstruction is explored as a source of error by analyzing the increase in error with obstruction of the horizon in the field of view: as more of the horizon is obstructed, the error increases. If a field of view of at least 70° is available to the camera, the accuracy is within 2% of the full field of view. Errors caused by stray light are also explored by removing the circumsolar region from images and comparing the cloud reconstruction to that from a full image. When less than 30% of the circumsolar region was removed, image and GHI errors were within 0.2% of the full image, while errors in k increased by 1%. Removing more than 30° around the sun resulted in inaccurate cloud reconstruction. Using four of the five USIs, a 3-D cloud was reconstructed and compared to the fifth camera: the image of the fifth camera (excluded from the reconstruction) was then simulated and found to have a 22.9% error compared to the ground truth.
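
    The ART step described above can be illustrated with classic Kaczmarz row-action iterations on the ray-integral system b = Ax, where x holds voxel extinction coefficients. The matrix, data, and relaxation factor below are toy assumptions, not the dissertation's actual configuration.

        import numpy as np

        def art(A, b, n_iter=50, relax=0.5):
            """Row-action Kaczmarz solver: project onto one ray equation at a time."""
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
                np.clip(x, 0, None, out=x)  # extinction cannot be negative
            return x

        A = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])  # toy ray sums
        b = A @ np.array([0.2, 0.5, 0.1])                          # synthetic data
        print(art(A, b))  # recovers the three voxel extinction coefficients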

  4. Telescope and mirror development for the monolithic silicon carbide instrument of the OSIRIS Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Calvel, Bertrand; Castel, Didier; Standarovski, Eric; Rousset, Gérard; Bougoin, Michel

    2017-11-01

    The international Rosetta mission, now planned by ESA for launch in January 2003, will provide a unique opportunity to directly study the nucleus of comet 46P/Wirtanen and its activity in 2013. We describe here the design, development, and performance of the telescope of the Narrow Angle Camera of the OSIRIS experiment and its monolithic silicon carbide structure, which will give high-resolution images of the cometary nucleus in the visible spectrum. The development of the mirrors is detailed specifically. The SiC parts were manufactured by BOOSTEC, polished by STIGMA OPTIQUE, and ion-figured by IOM under the prime contractorship of ASTRIUM; ASTRIUM was also in charge of the alignment. The final optical quality of the aligned telescope is a 30 nm rms wavefront error.

  5. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually done by projecting horizontal and vertical pattern sequences to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are insensitive to depth variation and thus yield inaccurate mapping. As a result, calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on determining the optimal fringe angle. Experiments demonstrate that our calibration approach can increase measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm × 250(W) mm × 500(D) mm.
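
    One plausible way to formalize the optimal fringe angle, assuming a known depth change produces measured phase shifts dphi_h (horizontal fringes) and dphi_v (vertical fringes), is to orient the fringes along atan2(dphi_v, dphi_h) so that both sensitivities combine. This is a hedged reading of the idea, not necessarily the exact derivation used in the paper; the phase values below are toy numbers.

        import math

        def optimal_fringe_angle(dphi_h, dphi_v):
            """Fringe angle (degrees) combining horizontal/vertical depth
            sensitivities into the direction of maximum phase change."""
            return math.degrees(math.atan2(dphi_v, dphi_h))

        print(optimal_fringe_angle(dphi_h=0.8, dphi_v=2.4))  # toy numbers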

  6. Field Test of the ExoMars Panoramic Camera in the High Arctic - First Results and Lessons Learned

    NASA Astrophysics Data System (ADS)

    Schmitz, N.; Barnes, D.; Coates, A.; Griffiths, A.; Hauber, E.; Jaumann, R.; Michaelis, H.; Mosebach, H.; Paar, G.; Reissaus, P.; Trauthan, F.

    2009-04-01

    The ExoMars mission, as the first element of the ESA Aurora program, is scheduled to be launched to Mars in 2016. Part of the Pasteur exobiology payload on board the ExoMars rover is a Panoramic Camera System ('PanCam'), designed to obtain high-resolution color and wide-angle multispectral stereoscopic panoramic images from the mast of the ExoMars rover. The PanCam instrument consists of two wide-angle cameras (WACs), which will provide multispectral stereo images with a 34° field of view (FOV), and a High-Resolution RGB Channel (HRC) to provide close-up images with a 5° field of view. For field testing of the PanCam breadboard in a representative environment, the ExoMars PanCam team joined the 6th Arctic Mars Analogue Svalbard Expedition (AMASE) in 2008. The expedition took place from 4-17 August 2008 in the Svalbard archipelago, Norway, which is considered an excellent analogue to ancient Mars. Thirty-one scientists and engineers involved in Mars exploration (among them the ExoMars WISDOM, MIMA, and Raman-LIBS teams as well as several NASA MSL teams) combined their knowledge, instruments, and techniques to study the geology, geophysics, biosignatures, and life forms that can be found in volcanic complexes, warm springs, subsurface ice, and sedimentary deposits. This work was carried out using instruments, a rover (NASA's CliffBot), and techniques that will or may be used in future planetary missions, thereby providing the capability to simulate a full mission environment in a Mars analogue terrain. Besides demonstrating PanCam's general functionality in a field environment, a main objective was to test and verify the interpretability of PanCam data for in-situ geological context determination and scientific target selection. To process the collected data, a first version of the preliminary PanCam 3D reconstruction processing and visualization chain was used. Other objectives included testing and refining the operational scenario (based on the ExoMars Rover Reference Surface Mission), investigating data commonalities and data fusion potential with respect to other instruments, and collecting representative image data to evaluate various influences, such as viewing distance, surface structure, and the availability of structures at "infinity" (e.g., resolution, focus quality, and the associated accuracy of the 3D reconstruction). Airborne images from the HRSC-AX camera (an airborne camera with heritage from the Mars Express High Resolution Stereo Camera, HRSC), collected during a flight campaign over Svalbard in June 2008, provided large-scale geological context information for all field sites.

  7. Efficient structure from motion for oblique UAV images based on maximal spanning tree expansion

    NASA Astrophysics Data System (ADS)

    Jiang, San; Jiang, Wanshou

    2017-10-01

    The primary contribution of this paper is an efficient Structure from Motion (SfM) solution for oblique unmanned aerial vehicle (UAV) images. First, an algorithm considering spatial relationship constraints between image footprints is designed for match pair selection, with the assistance of UAV flight control data and oblique camera mounting angles. Second, a topological connection network (TCN), represented by an undirected weighted graph, is constructed from the initial match pairs, encoding the overlap areas and intersection angles into edge weights. Then, an algorithm termed MST-Expansion is proposed to extract the match graph from the TCN, where the TCN is first simplified by a maximum spanning tree (MST). By further analysis of the local structure in the MST, expansion operations are performed on the vertices of the MST for match graph enhancement, which is achieved by introducing critical connections in the expansion directions. Finally, guided by the match graph, an efficient SfM is proposed. Its performance is verified through extensive analysis and comparison using three oblique UAV datasets captured with different multi-camera systems. Experimental results demonstrate that the efficiency of image matching is improved, with speedup ratios ranging from 19 to 35, and competitive orientation accuracy is achieved both from relative bundle adjustment (BA) without GCPs (Ground Control Points) and from absolute BA with GCPs. At the same time, images in the three datasets are successfully oriented. For the orientation of oblique UAV images, the proposed method offers a more efficient solution.
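
    The maximum-spanning-tree step can be sketched with networkx: the edge weights stand in for the paper's overlap-area and intersection-angle scores, and the expansion heuristics of MST-Expansion are not reproduced here. Image names and weights are toy assumptions.

        import networkx as nx

        tcn = nx.Graph()
        tcn.add_weighted_edges_from([
            ("img1", "img2", 0.9), ("img1", "img3", 0.4),
            ("img2", "img3", 0.7), ("img2", "img4", 0.8),
            ("img3", "img4", 0.5),
        ])  # weight encodes an overlap-area / intersection-angle score

        mst = nx.maximum_spanning_tree(tcn)   # keeps the strongest connections
        print(sorted(mst.edges(data="weight")))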

  8. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as building and architectural design, industrial design, aeronautics, scientific research, entertainment, media advertisement, and military uses. However, most technologies provide the 3D display in front of screens that are parallel to the walls, and the sense of immersion is decreased. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system, and virtual cameras can simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of each virtual camera is determined by the position of the viewer's eyes in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective-projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective-projection virtual cameras and orthogonal-projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras: the near-clip-plane parameters are the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second. To validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
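
    The offset (asymmetric-frustum) projection mentioned above is commonly built as the standard glFrustum-style matrix shown below: shifting the near-plane bounds off-center skews the frustum without rotating the image plane, keeping every virtual view parallel to the shared focus plane. This is a generic sketch, not the authors' exact parameterization.

        import numpy as np

        def off_axis_projection(l, r, b, t, n, f):
            """Asymmetric frustum: l/r/b/t are near-plane bounds, n/f clip planes."""
            return np.array([
                [2*n/(r-l), 0,          (r+l)/(r-l),  0],
                [0,         2*n/(t-b),  (t+b)/(t-b),  0],
                [0,         0,         -(f+n)/(f-n), -2*f*n/(f-n)],
                [0,         0,         -1,            0],
            ])

        # Off-center bounds (l != -r) model a viewer standing to one side.
        print(off_axis_projection(-0.8, 1.2, -1.0, 1.0, 1.0, 100.0))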

  9. Development of an imaging system for single droplet characterization using a droplet generator.

    PubMed

    Minov, S Vulgarakis; Cointault, F; Vangeyte, J; Pieters, J G; Hijazi, B; Nuyttens, D

    2012-01-01

    The spray droplets generated by agricultural nozzles play an important role in the application accuracy and efficiency of plant protection products. The limitations of non-imaging techniques and the recent improvements in digital image acquisition and processing have increased interest in using high-speed imaging techniques for pesticide spray characterisation. The goal of this study was to develop an imaging technique to evaluate the characteristics of a single spray droplet using a piezoelectric single-droplet generator and a high-speed imaging technique. Tests were done with different camera settings, lenses, diffusers, and light sources. The experiments showed the necessity of a good image acquisition and processing system. Image analysis results contributed to selecting the optimal set-up for measuring droplet size and velocity, which consisted of a high-speed camera with a 6 μs exposure time, a microscope lens at a working distance of 43 cm resulting in a field of view of 1.0 cm x 0.8 cm, and a xenon light source without diffuser used as a backlight. For measuring macro-spray characteristics such as the droplet trajectory, the spray angle, and the spray shape, a macro video zoom lens at a working distance of 14.3 cm, with a larger field of view of 7.5 cm x 9.5 cm, can be used in combination with a halogen spotlight with a diffuser and the high-speed camera.

  10. The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.

    2014-04-01

    The HRSC Experiment: Imagery is the major source of our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional, and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express mission (MEx) is designed to simultaneously map the morphology, topography, structure, and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; it also significantly supports (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding, as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest-ground-resolution images (e.g., HiRISE) and global-coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate between panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The requirements for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.

  11. Surveillance Using Multiple Unmanned Aerial Vehicles

    DTIC Science & Technology

    2009-03-01

    BATCAM wingspan was 21 in. vs. Jodeh's 9.1 ft; the BATCAM's propulsion was electric vs. Jodeh's gas engine; cameras were body-fixed vs. gimballed; and ...

    Table 3.1: BATCAM Camera FOV Angles

        Angle              Front Camera   Side Camera
        Depression angle   49°            39°
        Horizontal FOV     48°            48°
        Vertical FOV       40°            40°

    ... by a quiet electric motor. The batteries can be recharged with a car cigarette lighter in less than an hour. Assembly of the wing airframe takes less than a minute, and ...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceglio, N.M.; George, E.V.; Brooks, K.M.

    The first successful demonstration of high-resolution, tomographic imaging of a laboratory plasma using coded imaging techniques is reported. Zone-plate coded imaging (ZPCI) has been used to image the x-ray emission from laser-compressed DT-filled microballoons. The zone plate camera viewed an x-ray spectral window extending from below 2 keV to above 6 keV. It exhibited a resolution of approximately 8 μm, a magnification factor of approximately 13, and subtended a radiation collection solid angle at the target of approximately 10^-2 sr. X-ray images using ZPCI were compared with those taken using a grazing-incidence reflection x-ray microscope; the agreement was excellent. In addition, the zone plate camera produced tomographic images. The nominal tomographic resolution was approximately 75 μm. This allowed three-dimensional viewing of target emission from a single shot in planar 'slices'. In addition to its tomographic capability, the great advantage of the coded imaging technique lies in its applicability to hard (greater than 10 keV) x-ray and charged-particle imaging. Experiments involving coded imaging of the suprathermal x-ray and high-energy alpha-particle emission from laser-compressed microballoon targets are discussed.

  13. Pre-hibernation performances of the OSIRIS cameras onboard the Rosetta spacecraft

    NASA Astrophysics Data System (ADS)

    Magrin, S.; La Forgia, F.; Da Deppo, V.; Lazzarin, M.; Bertini, I.; Ferri, F.; Pajola, M.; Barbieri, M.; Naletto, G.; Barbieri, C.; Tubiana, C.; Küppers, M.; Fornasier, S.; Jorda, L.; Sierks, H.

    2015-02-01

    Context. The ESA cometary mission Rosetta was launched in 2004. In the years before the spacecraft's hibernation in June 2011, the two cameras of the OSIRIS imaging system (the Narrow Angle and Wide Angle Camera, NAC and WAC) observed many different sources. On 20 January 2014 the spacecraft successfully exited hibernation to start observing the primary scientific target of the mission, comet 67P/Churyumov-Gerasimenko. Aims: A study of the past performance of the cameras is now mandatory to determine whether the system has been stable over time and to derive, if necessary, additional analysis methods for the future precise calibration of the cometary data. Methods: The instrumental responses and filter passbands were used to estimate the efficiency of the system. A comparison with acquired images of specific calibration stars was made, and a refined photometric calibration was computed, both for the absolute flux and for the reflectivity of small bodies of the solar system. Results: We found the instrumental performance to be stable within ±1.5% from 2007 to 2010, with no evidence of an aging effect on the optics or detectors. The efficiency of the instrumentation is as expected in the visible range, but lower than expected in the UV and IR ranges. A photometric calibration implementation is discussed for the two cameras. Conclusions: The calibration derived from the pre-hibernation phases of the mission will be checked as soon as possible after the awakening of OSIRIS and will be continuously monitored until the end of the mission in December 2015. A list of additional calibration sources has been determined that are to be observed during the forthcoming phases of the mission, to ensure better coverage across the wavelength range of the cameras and to study possible dust contamination of the optics.

  14. Extraction and analysis of the image in the sight field of comparison goniometer to measure IR mirrors assembly

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-shan; Zhao, Yue-jin; Li, Zhuo; Dong, Liquan; Chu, Xuhong; Li, Ping

    2010-11-01

    The comparison goniometer is widely used to measure and inspect small angles, angle differences, and the parallelism of two surfaces. The common way to read a comparison goniometer, however, is for the operator to inspect the goniometer's ocular with one eye, and reading an old goniometer equipped with only one adjustable ocular is difficult. In the fabrication of an IR reflecting-mirror assembly, a common comparison goniometer is used to measure the angle errors between two neighboring assembled mirrors. In this paper, a quick image-based reading technique for the comparison goniometer used to inspect the parallelism of mirrors in a mirror assembly is proposed. A digital camera, a comparison goniometer, and a computer are used to construct a reading system; the image of the sight field in the comparison goniometer is extracted and recognized to obtain the angle positions of the reflection surfaces to be measured. In order to obtain the interval distance between the scale lines, a particular technique, the left-peak-first method, based on the local peak values of intensity in the true-color image, is proposed. A program written in VC++6.0 has been developed to perform the color digital image processing.
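
    The left-peak-first idea of locating scale lines from local intensity peaks can be sketched with a 1-D peak search taken in left-to-right order; this is an interpretation of the paper's method, and the toy profile and thresholds below are assumptions.

        import numpy as np
        from scipy.signal import find_peaks

        def scale_line_positions(profile, min_height=None, min_dist=5):
            """Pixel positions of scale lines along a 1-D intensity profile."""
            peaks, _ = find_peaks(profile, height=min_height, distance=min_dist)
            return peaks  # already sorted left to right: leftmost peak first

        profile = np.abs(np.sin(np.linspace(0, 12 * np.pi, 600)))  # toy profile
        pos = scale_line_positions(profile, min_height=0.9)
        print(np.diff(pos))  # interval distances between neighbouring scale lines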

  15. Hyperspectral Image-Based Night-Time Vehicle Light Detection Using Spectral Normalization and Distance Mapper for Intelligent Headlight Control.

    PubMed

    Kim, Heekang; Kwon, Soon; Kim, Sungho

    2016-07-08

    This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), vehicle headlights must be detected. Headlights comprise a variety of lighting sources, such as Light-Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps; rear lamps likewise use LED and halogen sources. This paper draws on recent research in IHC. Some problems exist in headlight detection from CCD or CMOS images, such as erroneous detection of street lights, sign lights, and the reflector plate of the ego-vehicle. To solve these problems, this study uses hyperspectral images, which have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen).
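
    Of the three mappers, the Spectral Angle Mapper is the simplest to illustrate: it measures the angle between a pixel spectrum and a reference lamp spectrum and is insensitive to overall brightness. The five-band spectra below are toy assumptions, not data from the paper.

        import numpy as np

        def spectral_angle(pixel, reference):
            """SAM angle in radians between two spectra (1-D arrays)."""
            cos = np.dot(pixel, reference) / (
                np.linalg.norm(pixel) * np.linalg.norm(reference) + 1e-12)
            return np.arccos(np.clip(cos, -1.0, 1.0))

        led_ref = np.array([0.1, 0.3, 0.9, 0.7, 0.2])   # assumed LED signature
        pixel   = np.array([0.2, 0.6, 1.8, 1.4, 0.4])   # same shape, brighter
        print(spectral_angle(pixel, led_ref))            # ~0: likely an LED lamp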

  16. Stereo View of Martian Rock Target 'Funzie'

    NASA Image and Video Library

    2018-02-08

    The surface of the Martian rock target in this stereo image includes small hollows with a "swallowtail" shape characteristic of some gypsum crystals, most evident in the lower left quadrant. These hollows may have resulted from the original crystallizing mineral subsequently dissolving away. The view appears three-dimensional when seen through blue-red glasses with the red lens on the left. The scene spans about 2.5 inches (6.5 centimeters). This rock target, called "Funzie," is near the southern, uphill edge of "Vera Rubin Ridge" on lower Mount Sharp. The stereo view combines two images taken from slightly different angles by the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover, with the camera about 4 inches (10 centimeters) above the target. Fig. 1 and Fig. 2 are the separate "right-eye" and "left-eye" images, taken on Jan. 11, 2018, during the 1,932nd Martian day, or sol, of the rover's work on Mars. Right-eye and left-eye images are available at https://photojournal.jpl.nasa.gov/catalog/PIA22212

  17. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  18. Using Lunar Module Shadows To Scale the Effects of Rocket Exhaust Plumes

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Excavating granular materials beneath a vertical jet of gas involves several physical mechanisms. These occur, for example, beneath the exhaust plume of a rocket landing on the soil of the Moon or Mars. We performed a series of experiments and simulations (Figure 1) to provide a detailed view of the complex gas-soil interactions. Measurements taken from the Apollo lunar landing videos (Figure 2) and from photographs of the resulting terrain helped demonstrate how the interactions extrapolate into the lunar environment. It is important to understand these processes at a fundamental level to support the ongoing design of higher-fidelity numerical simulations and larger-scale experiments. These are needed to enable future lunar exploration wherein multiple hardware assets will be placed on the Moon within short distances of one another. The high-velocity spray of soil from the landing spacecraft must be accurately predicted and controlled, or it could erode the surfaces of nearby hardware. This analysis indicated that the lunar dust is ejected at an angle of less than 3 degrees above the surface, an effect that can be mitigated by a modest berm of lunar soil. These results assume that future lunar landers will use a single engine; the analysis would need to be adjusted for a multi-engine lander. Figure 3 is a detailed schematic of the Lunar Module camera calibration math model. In this chart, formulas relating the known quantities, such as sun angle and Lunar Module dimensions, to the unknown quantities are depicted. The camera angle ψ is determined by measuring the imaged aspect ratio of a crater, where the crater is assumed to be circular. The final solution is the determination of the camera calibration factor, α. Figure 4 is a detailed schematic of the dust angle math model, which again relates known to unknown parameters. The known parameters now include the camera calibration factor and the Lunar Module dimensions; the final computation is the ejected dust angle as a function of Lunar Module altitude.
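
    The crater-ellipse relation used for the camera angle ψ can be written directly: a circular crater imaged at tilt ψ appears as an ellipse whose minor-to-major axis ratio is cos ψ, so ψ = arccos(b/a). The sketch below ignores lens distortion and terrain slope, and the pixel values are illustrative.

        import math

        def camera_tilt_deg(minor_axis, major_axis):
            """Tilt angle from the apparent aspect ratio of a circular crater."""
            return math.degrees(math.acos(minor_axis / major_axis))

        print(camera_tilt_deg(minor_axis=42.0, major_axis=57.0))  # toy pixel lengths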

  19. Public-Requested Mars Image: Crater on Pavonis Mons

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-481, 12 September 2003

    This image is one of the first pair obtained through the Public Target Request program, which accepts suggestions for sites to photograph with the Mars Orbiter Camera on NASA's Mars Global Surveyor spacecraft.

    It is a narrow-angle (high-resolution) view of a portion of the lower wall and floor of the caldera at the top of a martian volcano named Pavonis Mons. A companion picture is a wide-angle context image, taken at the same time as the high-resolution view. The white box in the context frame shows the location of the high-resolution picture.


    Pavonis Mons is a broad shield volcano. Its summit region is about 14 kilometers (8.7 miles) above the martian datum (zero-elevation reference level). The caldera is about 4.6 kilometers (2.8 miles) deep. The caldera formed by collapse--long ago--as molten rock withdrew to greater depths within the volcano. The high-resolution picture shows that today the floor and walls of this caldera are covered by a thick, textured mantle of dust, perhaps more than 1 meter (1 yard) deep. Larger boulders and rock outcroppings poke out from within this dust mantle. They are seen as small, dark dots and mounds on the lower slopes of the wall in the high-resolution image.

    The narrow-angle Mars Orbiter Camera image has a resolution of 1.5 meters (about 5 feet) per pixel and covers an area 1.5 kilometers (0.9 mile) wide by 9 kilometers (5.6 miles) long. The context image, covering much of the summit region of Pavonis Mons, is about 115 kilometers (72 miles) wide. Sunlight illuminates both images from the lower left; north is toward the upper right; east to the right. The high-resolution view is located near 0.4 degrees north latitude, 112.8 degrees west longitude.

  20. Earth Observations taken by the Expedition 22 Crew

    NASA Image and Video Library

    2009-12-01

    ISS022-E-005258 (1 Dec. 2009) --- This detailed hand-held digital camera image recorded from the International Space Station highlights sand dunes in the Fachi-Bilma erg, or sand sea, which is part of the central-eastern Tenere Desert. The Tenere occupies much of southeastern Niger and is considered part of the larger Sahara Desert that stretches across northern Africa. Much of the Sahara is composed of ergs; with an area of approximately 150,000 square kilometers, the Fachi-Bilma is one of the larger sand seas. Two major types of dunes are visible in the image. Large, roughly north-south oriented transverse dunes fill the image frame. This type of dune tends to form at roughly right angles to the dominant northeasterly winds. The dune crests are marked in this image by darker, steeper sand accumulations that cast shadows. The lighter-toned zones between are lower interdune 'flats'. The large dunes appear to be highly symmetrical with regard to their crests. This suggests that the crest sediments are coarser, preventing the formation of a steeper slip face on the downwind side of the dune by wind-driven motion of similarly sized sand grains. According to NASA scientists, this particular form of transverse dune is known as a zibar and is thought to form by winnowing of smaller sand grains by the wind, leaving the coarser grains to form dune crests. A second set of thin linear dunes, oriented at roughly right angles to the zibar dunes, appears to have formed on the larger landforms and is therefore a younger landscape feature. These dunes appear to be forming from finer grains in the same wind field as the larger zibars. The image was taken with a digital still camera fitted with a 400 mm lens, and is provided by the ISS Crew Earth Observations experiment and the Image Science & Analysis Laboratory, Johnson Space Center.

  1. Mars Global Surveyor: 7 Years in Orbit!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    12 September 2004 Today, 12 September 2004, the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) team celebrates 7 Earth years orbiting Mars. MGS first reached the red planet and performed its critical orbit insertion burn on 12 September 1997. Over the past 7 years, MOC has returned over 170,000 images; its narrow angle camera has covered about 4.5% of the surface, and its wide angle cameras have viewed 100% of the planet nearly every day.

    At this time, MOC is not acquiring data because Mars is on the other side of the Sun relative to Earth. This period, known as Solar Conjunction, occurs about once every 26 months. During Solar Conjunction, no radio communications from spacecraft that are orbiting or have landed on Mars can be received. MOC was turned off on 7 September and is expected to resume operations on 25 September 2004, when Mars re-emerges from behind the Sun.

    The rotating color image of Mars shown here was compiled from MOC red and blue wide angle daily global images acquired exactly 1 Mars year ago, on 26 October 2002 (Ls 86.4°). In other words, Mars today (12 September 2004) should look about the same as the view provided here. Presently, Mars is in very late northern spring, and the north polar cap has retreated almost to its summer configuration. Water ice clouds form each afternoon at this time of year over the large volcanoes in the Tharsis and Elysium regions. A discontinuous belt of clouds forms over the martian equator; it is most prominent north of the Valles Marineris trough system. In the southern hemisphere, it is late autumn, and the giant Hellas Basin floor is nearly white with seasonal frost cover. The south polar cap is not visible; it is enveloped in seasonal darkness. The northern summer and southern winter seasons will begin on 20 September 2004.

  2. Sublimation of icy aggregates in the coma of comet 67P/Churyumov-Gerasimenko detected with the OSIRIS cameras on board Rosetta

    NASA Astrophysics Data System (ADS)

    Gicquel, A.; Vincent, J.-B.; Agarwal, J.; A'Hearn, M. F.; Bertini, I.; Bodewits, D.; Sierks, H.; Lin, Z.-Y.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; Deller, J.; De Cecco, M.; Frattin, E.; El-Maarry, M. R.; Fornasier, S.; Fulle, M.; Groussin, O.; Gutiérrez, P. J.; Gutiérrez-Marquez, P.; Güttler, C.; Höfner, S.; Hofmann, M.; Hu, X.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kovacs, G.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Moreno, J. J. Lopez; Lowry, S.; Marzari, F.; Masoumzadeh, N.; Massironi, M.; Moreno, F.; Mottola, S.; Naletto, G.; Oklay, N.; Pajola, M.; Pommerol, A.; Preusker, F.; Scholten, F.; Shi, X.; Thomas, N.; Toth, I.; Tubiana, C.

    2016-11-01

    Beginning in 2014 March, the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras began capturing images of the nucleus and coma (gas and dust) of comet 67P/Churyumov-Gerasimenko using both the wide angle camera (WAC) and the narrow angle camera (NAC). The many observations taken since July of 2014 have been used to study the morphology, location, and temporal variation of the comet's dust jets. We analysed the dust monitoring observations shortly after the southern vernal equinox on 2015 May 30 and 31 with the WAC at the heliocentric distance Rh = 1.53 AU, where it is possible to observe that the jet rotates with the nucleus. We found that the decline of brightness as a function of distance along the jet is much steeper than in the background coma, which is a first indication of sublimation. We adapted a model of sublimation of icy aggregates and studied the effect as a function of the physical properties of the aggregates (composition and size). The major finding of this paper was that through the sublimation of aggregates of dirty grains (radius a between 5 and 50 μm) we were able to completely reproduce the radial brightness profile of a jet beyond 4 km from the nucleus. To reproduce the data, we needed to inject a number of aggregates between 8.5 × 10^13 and 8.5 × 10^10 for a = 5 and 50 μm, respectively, or an initial mass of H2O ice of around 22 kg.

  3. Recent Mastcam and MAHLI Visible/Near-Infrared Spectrophotometric Observations: Pahrump Hills to Marias Pass

    NASA Astrophysics Data System (ADS)

    Johnson, J. R.; Bell, J. F., III; Hayes, A.; Deen, R. G.; Godber, A.; Arvidson, R. E.; Lemmon, M. T.

    2015-12-01

    The Mastcam imaging system on the Curiosity rover continued acquisition of multispectral images of the same terrain at multiple times of day at three new rover locations between sols 872 and 1003. These data sets will be used to investigate the light scattering properties of rocks and soils along the Curiosity traverse using radiative transfer models. Images were acquired by the Mastcam-34 (M-34) camera on Sols 872-892 at 8 times of day (Mojave drill location), Sols 914-917 (Telegraph Peak drill location) at 9 times of day, and Sols 1000-1003 at 8 times of day (Stimson-Murray Formation contact near Marias Pass). Data sets were acquired using filters centered at 445, 527, 751, and 1012 nm, and the images were jpeg-compressed. Data sets typically were pointed ~east and ~west to provide phase angle coverage from near 0° to 125-140° for a variety of rocks and soils. Also acquired on Sols 917-918 at the Telegraph Peak site was a multiple time-of-day Mastcam sequence pointed southeast using only the broadband Bayer filters that provided losslessly compressed images with phase angles ~55-129°. Navcam stereo images were also acquired with each data set to provide broadband photometry and terrain measurements for computing surface normals and local incidence and emission angles used in photometric modeling. On Sol 1028, the MAHLI camera was used as a goniometer to acquire images at 20 arm positions, all centered at the same location within the work volume from a near-constant distance of 85 cm from the surface. Although this experiment was run at only one time of day (~15:30 LTST), it provided phase angle coverage from ~30° to ~111°. The terrain included the contact between the uppermost portion of the Murray Formation and the Stimson sandstones, and was the first acquisition of both Mastcam and MAHLI photometry images at the same rover location. The MAHLI images also allowed construction of a 3D shape model of the Stimson-Murray contact region. The attached figure shows a phase color composite of the western Stimson area, created using phase angles of 8°, 78°, and 130° at 751 nm. The red areas correspond to highly backscattering materials that appear to concentrate along linear fractures throughout this area. The blue areas correspond to more forward scattering materials dispersed through the stratigraphic sequence.

  4. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  5. Techniques for Transition and Surface Temperature Measurements on Projectiles at Hypersonic Velocities- A Status Report

    NASA Technical Reports Server (NTRS)

    Wilder, M. C.; Bogdanoff, D. W.

    2005-01-01

    A research effort to advance techniques for determining transition location and measuring surface temperatures on graphite-tipped projectiles in hypersonic flight in a ballistic range is described. Projectiles were launched at muzzle velocities of approx. 4.7 km/sec into air at pressures of 190-570 Torr. Most launches had maximum pitch and yaw angles of 2.5-5 degrees at pressures of 380 Torr and above and 3-6 degrees at pressures of 190-380 Torr. Arcjet-ablated and machined, bead-blasted projectiles were launched; special cleaning techniques had to be developed for the latter class of projectiles. Improved methods of using helium to remove the radiating gas cap around the projectiles at the locations where ICCD (intensified charge coupled device) camera images were taken are described. Two ICCD cameras with a wavelength sensitivity range of 480-870 nm have been used in this program for several years to obtain images. In the last year, a third camera, with a wavelength sensitivity range of 1.5-5 microns [in the infrared (IR)], has been added. ICCD and IR camera images of hemisphere nose and 70 degree sphere-cone nose projectiles at velocities of 4.0-4.7 km/sec are presented. The ICCD images clearly show a region of steep temperature rise indicative of transition from laminar to turbulent flow. Preliminary temperature data for the graphite projectile noses are presented.

  6. Development of a software-hardware complex for studying the process of grinding by a pendulum deformer

    NASA Astrophysics Data System (ADS)

    Borisov, A. P.

    2018-01-01

    The article is devoted to the development of a software and hardware complex for investigating the grinding process on a pendulum deformer. The hardware part of the complex is the Raspberry Pi model 2B platform, to which are connected a contactless angle sensor, providing data on the deviation angle of the pendulum surface; USB cameras, which capture grain images before and after grinding; and stepper motors, which lift the pendulum surface and adjust the clearance between the pendulum and the supporting surfaces. The software part of the complex is written in C# and receives data from the sensor and USB cameras, processes the received data, and controls the stepper motors in manual and automatic modes. The conducted studies show that the rational mode is a deviation of the pendulum surface by an angle of 40°, with the grain located in the central zone of the support surface, regardless of the orientation of the grain in space. Thanks to the contactless angle sensor, the energy consumption for grinding and the speed and acceleration of the pendulum surface are calculated, along with the vitreousness of the grain. Using the photographs obtained from the USB cameras, the grain area before and after grinding is calculated, and the work of the pendulum deformer is determined from the Rebinder formula.

  7. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function of ground-based light sources predetermines skyglow features to a large extent, and most mathematical models used to predict night-sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published recently in MNRAS, 439, 3405-3413. We made the experiments at two localities in Slovakia and Mexico by means of two professional digital single-lens-reflex cameras operating with different lenses that limit the system's field of view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance to horizontal irradiance. Subsequently, the information on the fraction of light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear-sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to the potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  8. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming

    2010-11-04

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear-index 'bubble' in fused silica glass, supplementing a conventional Frequency-Domain Holography (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (temporal multiplexing and angular multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.

  9. Cassini Camera Contamination Anomaly: Experiences and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Haemmerle, Vance R.; Gerhard, James H.

    2006-01-01

    We discuss the contamination 'haze' anomaly of the Cassini Narrow Angle Camera (NAC), one of the two optical telescopes that comprise the Imaging Science Subsystem (ISS). Cassini is a Saturn orbiter with a 4-year nominal mission. The incident occurred in 2001, five months after the Jupiter encounter during the cruise phase and, ironically, at the resumption of planned maintenance decontamination cycles. The degraded optical performance was first identified by the Instrument Operations Team with the first ISS Saturn imaging six weeks later: a distinct haze of varying size from image to image marred the images of Saturn. A photometric star calibration of the Pleiades, 4 days after the incident, showed stars with halos. Analysis showed that while a halo's intensity was only 1-2% of the intensity of the central peak of a star, the halo contained 30-70% of its integrated flux. This condition would impact science return. In a review of our experiences, we examine the contamination control plan, discuss the analysis of the limited data available, and describe the one-year campaign to remove the haze from the camera. After several long, conservative heating activities and interim analysis of their results, the contamination problem, as measured by the camera's point-spread function, was essentially back to pre-anomaly size and at a point where continuing would carry more risk. We stress the importance of flexibility in operations and instrument design, and the need for early in-flight instrument calibration and continual monitoring of instrument performance.

  10. MISR Global Images See the Light of Day

    NASA Technical Reports Server (NTRS)

    2002-01-01

    As of July 31, 2002, global multi-angle, multi-spectral radiance products are available from the MISR instrument aboard the Terra satellite. Measuring the radiative properties of different types of surfaces, clouds and atmospheric particulates is an important step toward understanding the Earth's climate system. These images are among the first planet-wide summary views to be publicly released from the Multi-angle Imaging SpectroRadiometer experiment. Data for these images were collected during the month of March 2002, and each pixel represents monthly-averaged daylight radiances from an area measuring 1/2 degree in latitude by 1/2 degree in longitude.

    The top panel is from MISR's nadir (vertical-viewing) camera and combines data from the red, green and blue spectral bands to create a natural color image. The central view combines near-infrared, red, and green spectral data to create a false-color rendition that enhances highly vegetated terrain. It takes 9 days for MISR to view the entire globe, and only areas within 8 degrees of latitude of the north and south poles are not observed due to the Terra orbit inclination. Because a single pole-to-pole swath of MISR data is just 400 kilometers wide, multiple swaths must be mosaiced to create these global views. Discontinuities appear in some cloud patterns as a consequence of changes in cloud cover from one day to another.

    The lower panel is a composite in which red, green, and blue radiances from MISR's 70-degree forward-viewing camera are displayed in the northern hemisphere, and radiances from the 70-degree backward-viewing camera are displayed in the southern hemisphere. At the March equinox (spring in the northern hemisphere, autumn in the southern hemisphere), the Sun is near the equator. Therefore, both oblique angles are observing the Earth in 'forward scattering', particularly at high latitudes. Forward scattering occurs when you (or MISR) observe an object with the Sun at a point in the sky that is in front of you. Relative to the nadir view, this geometry accentuates the appearance of polar clouds, and can even reveal clouds that are invisible in the nadir direction. In relatively clear ocean areas, the oblique-angle composite is generally brighter than its nadir counterpart due to enhanced reflection of light by atmospheric particulates.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  11. Craters 'Twixt Day and Night

    NASA Image and Video Library

    2004-12-20

    Three sizeable impact craters, including one with a marked central peak, lie along the line that divides day and night on the Saturnian moon, Dione (dee-OH-nee), which is 1,118 kilometers, or 695 miles across. The low angle of the Sun along the terminator, as this dividing line is called, brings details like these craters into sharp relief. This view shows principally the leading hemisphere of Dione. Some of this moon's bright, wispy streaks can be seen curling around its eastern limb. Cassini imaged the wispy terrain at high resolution during its first Dione flyby on Dec. 14, 2004. This image was taken in visible light with the Cassini spacecraft narrow angle camera on Nov. 1, 2004, at a distance of 2.4 million kilometers (1.5 million miles) from Dione and at a Sun-Dione-spacecraft, or phase, angle of 106 degrees. North is up. The image scale is 14 kilometers (8.7 miles) per pixel. The image has been magnified by a factor of two and contrast-enhanced to aid visibility of surface features. http://photojournal.jpl.nasa.gov/catalog/PIA06542

  12. The W. M. Keck Observatory Infrared Vortex Coronagraph and a First Image of HIP 79124 B

    NASA Astrophysics Data System (ADS)

    Serabyn, E.; Huby, E.; Matthews, K.; Mawet, D.; Absil, O.; Femenia, B.; Wizinowich, P.; Karlsson, M.; Bottom, M.; Campbell, R.; Carlomagno, B.; Defrère, D.; Delacroix, C.; Forsberg, P.; Gomez Gonzalez, C.; Habraken, S.; Jolivet, A.; Liewer, K.; Lilley, S.; Piron, P.; Reggiani, M.; Surdej, J.; Tran, H.; Vargas Catalán, E.; Wertz, O.

    2017-01-01

    An optical vortex coronagraph has been implemented within the NIRC2 camera on the Keck II telescope and used to carry out on-sky tests and observations. The development of this new L‧-band observational mode is described, and an initial demonstration of the new capability is presented: a resolved image of the low-mass companion to HIP 79124, which had previously been detected by means of interferometry. With HIP 79124 B at a projected separation of 186.5 mas, both the small inner working angle of the vortex coronagraph and the related imaging improvements were crucial in imaging this close companion directly. Due to higher Strehl ratios and more relaxed contrasts in L‧ band versus H band, this new coronagraphic capability will enable high-contrast, small-angle observations of nearby young exoplanets and disks on a par with those of shorter-wavelength extreme adaptive optics coronagraphs.

  13. Faint Object Camera imaging and spectroscopy of NGC 4151

    NASA Technical Reports Server (NTRS)

    Boksenberg, A.; Catchpole, R. M.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1995-01-01

    We describe ultraviolet and optical imaging and spectroscopy within the central few arcseconds of the Seyfert galaxy NGC 4151, obtained with the Faint Object Camera on the Hubble Space Telescope. A narrowband image including [O III] λ5007 shows a bright nucleus centered on a complex biconical structure having apparent opening angle approximately 65 deg and axis at a position angle along 65 deg-245 deg; images in bands including Lyman-alpha and C IV λ1550 and in the optical continuum near 5500 Å show only the bright nucleus. In an off-nuclear optical long-slit spectrum we find a high and a low radial velocity component within the narrow emission lines. We identify the low-velocity component with the bright, extended, knotty structure within the cones, and the high-velocity component with more confined diffuse emission. Also present are strong continuum emission and broad Balmer emission line components, which we attribute to the extended point spread function arising from the intense nuclear emission. Adopting the geometry pointed out by Pedlar et al. (1993) to explain the observed misalignment of the radio jets and the main optical structure, we model an ionizing radiation bicone, originating within a galactic disk, with apex at the active nucleus and axis centered on the extended radio jets. We confirm that through density bounding the gross spatial structure of the emission line region can be reproduced with a wide opening angle that includes the line of sight, consistent with the presence of a simple opaque torus allowing a direct view of the nucleus. In particular, our modelling reproduces the observed decrease in position angle with distance from the nucleus, progressing initially from the direction of the extended radio jet, through our optical structure, and on to the extended narrow-line region. We explore the kinematics of the narrow-line low- and high-velocity components on the basis of our spectroscopy and adopted model structure.

  14. SU-E-J-134: An Augmented-Reality Optical Imaging System for Accurate Breast Positioning During Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Malhotra, H; French, S

    Purpose: Breast radiotherapy, particularly electronic compensation, may involve large dose gradients and difficult patient positioning problems. We have developed a simple self-calibrating augmented-reality system, which assists in accurately and reproducibly positioning the patient, by displaying her live image from a single camera superimposed on the correct perspective projection of her 3D CT data. Our method requires only a standard digital camera capable of live-view mode, installed in the treatment suite at an approximately-known orientation and position (rotation R; translation T). Methods: A 10-sphere calibration jig was constructed and CT imaged to provide a 3D model. The (R,T) relating the camera to the CT coordinate system were determined by acquiring a photograph of the jig and optimizing an objective function, which compares the true image points to points calculated with a given candidate R and T geometry. Using this geometric information, 3D CT patient data, viewed from the camera's perspective, is plotted using a Matlab routine. This image data is superimposed onto the real-time patient image, acquired by the camera, and displayed using standard live-view software. This enables the therapists to view both the patient's current and desired positions, and guide the patient into assuming the correct position. The method was evaluated using an in-house developed bolus-like breast phantom, mounted on a supporting platform, which could be tilted at various angles to simulate treatment-like geometries. Results: Our system allowed breast phantom alignment, with an accuracy of about 0.5 cm and 1 ± 0.5 degree. Better resolution could be possible using a camera with higher-zoom capabilities. Conclusion: We have developed an augmented-reality system, which combines a perspective projection of a CT image with a patient's real-time optical image. This system has the potential to improve patient setup accuracy during breast radiotherapy, and could possibly be used for other disease sites as well.
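
    The self-calibration step amounts to pose estimation: find the (R,T) that minimizes the reprojection error of the jig's known sphere centers. Below is a minimal sketch assuming a pinhole camera with known intrinsics (focal length f and principal point cx, cy are hypothetical inputs, and all function names are ours, not the authors'):

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def reprojection_residuals(params, pts3d, pts2d, f, cx, cy):
            """Difference between measured image points and the projection
            of the jig's 3D sphere centers under a candidate (R, T)."""
            rvec, tvec = params[:3], params[3:]
            cam = Rotation.from_rotvec(rvec).apply(pts3d) + tvec  # CT -> camera
            proj = np.column_stack((f * cam[:, 0] / cam[:, 2] + cx,
                                    f * cam[:, 1] / cam[:, 2] + cy))
            return (proj - pts2d).ravel()

        def calibrate_pose(pts3d, pts2d, f, cx, cy, x0):
            # x0: rough 6-vector (rotation vector, translation) from the
            # approximately-known camera mounting, used as the initial guess
            fit = least_squares(reprojection_residuals, x0,
                                args=(pts3d, pts2d, f, cx, cy))
            return fit.x[:3], fit.x[3:]  # rotation vector, translation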

  15. Using a plenoptic camera to measure distortions in wavefronts affected by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Eslami, Mohammed; Wu, Chensheng; Rzasa, John; Davis, Christopher C.

    2012-10-01

    Ideally, as planar wave fronts travel through an imaging system, all rays, or vectors pointing in the direction of energy propagation, are parallel, and thus the wave front is focused to a particular point. If the wave front arrives at an imaging system with energy vectors that point in different directions, each part of the wave front will be focused at a slightly different point on the sensor plane and result in a distorted image. The Hartmann test, which involves the insertion of a series of pinholes between the imaging system and the sensor plane, was developed to sample the wavefront at different locations and measure the distortion angles at different points in the wave front. An adaptive optic system, such as a deformable mirror, is then used to correct for these distortions and allow the planar wave front to focus at the desired point on the sensor plane, thereby correcting the distorted image. The apertures of a pinhole array limit the amount of light that reaches the sensor plane. By replacing the pinholes with a microlens array, each bundle of rays is focused, brightening the image. Microlens arrays are making their way into newer imaging technologies, such as "light field" or "plenoptic" cameras. In these cameras, the microlens array is used to recover the ray information of the incoming light by using post-processing techniques to focus on objects at different depths. The goal of this paper is to demonstrate the use of these plenoptic cameras to recover the distortions in wavefronts. Taking advantage of the microlens array within the plenoptic camera, CODE-V simulations show that its performance can provide more information than a Shack-Hartmann sensor. Using the microlens array to retrieve the ray information and then backstepping through the imaging system provides information about distortions in the arriving wavefront.
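
    The Hartmann/Shack-Hartmann relation the paper builds on is simple: each pinhole's or lenslet's focused spot shifts in proportion to the local wavefront tilt. A minimal sketch, assuming the spot centroids have already been extracted from the sensor image:

        import numpy as np

        def wavefront_slopes(spots, refs, focal_length):
            """Local wavefront tilt at each lenslet from the displacement of
            its spot relative to the reference recorded with an undistorted
            planar wavefront. spots, refs: (N, 2) centroids in metres;
            returns (N, 2) slopes in radians (small-angle approximation)."""
            return (np.asarray(spots) - np.asarray(refs)) / focal_length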

  16. Application of classification methods for mapping Mercury's surface composition: analysis on Rudaki's Area

    NASA Astrophysics Data System (ADS)

    Zambon, F.; De Sanctis, M. C.; Capaccioni, F.; Filacchione, G.; Carli, C.; Ammanito, E.; Friggeri, A.

    2011-10-01

    During the first two MESSENGER flybys (14th January 2008 and 6th October 2008), the Mercury Dual Imaging System (MDIS) extended the coverage of Mercury's surface obtained by Mariner 10, and we now have images of about 90% of the surface [1]. MDIS is equipped with a Narrow Angle Camera (NAC) and a Wide Angle Camera (WAC). The NAC uses an off-axis reflective design with a 1.5° field of view (FOV) centered at 747 nm. The WAC has a refractive design with a 10.5° FOV and a 12-position filter wheel that covers a 395-1040 nm spectral range [2]. The color images can be used to infer information on the surface composition, and classification methods are an effective technique for multispectral image analysis that can be applied to the study of planetary surfaces. Classification methods are based on clustering algorithms and can be divided into two categories: unsupervised and supervised. Unsupervised classifiers do not require analyst feedback; the algorithm automatically organizes pixel values into classes. In the supervised method, instead, the analyst must choose "training areas" that define the pixel values of a given class [3]. Here we describe the classification into different compositional units of the region near the Rudaki Crater on Mercury.
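
    As a sketch of the unsupervised case, the snippet below clusters the pixels of a co-registered multispectral cube with k-means; the band layout and the number of classes are illustrative assumptions, not the authors' exact processing:

        import numpy as np
        from sklearn.cluster import KMeans

        def classify_units(cube, n_classes=5):
            """Unsupervised classification of a (rows, cols, bands)
            multispectral cube; returns a (rows, cols) map of class labels."""
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands)
            labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
            return labels.reshape(rows, cols)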

  17. 67P/Churyumov-Gerasimenko: Activity between March and June 2014 as observed from Rosetta/OSIRIS

    NASA Astrophysics Data System (ADS)

    Tubiana, C.; Snodgrass, C.; Bertini, I.; Mottola, S.; Vincent, J.-B.; Lara, L.; Fornasier, S.; Knollenberg, J.; Thomas, N.; Fulle, M.; Agarwal, J.; Bodewits, D.; Ferri, F.; Güttler, C.; Gutierrez, P. J.; La Forgia, F.; Lowry, S.; Magrin, S.; Oklay, N.; Pajola, M.; Rodrigo, R.; Sierks, H.; A'Hearn, M. F.; Angrilli, F.; Barbieri, C.; Barucci, M. A.; Bertaux, J.-L.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; De Cecco, M.; Debei, S.; Groussin, O.; Hviid, S. F.; Ip, W.; Jorda, L.; Keller, H. U.; Koschny, D.; Kramm, R.; Kührt, E.; Küppers, M.; Lazzarin, M.; Lamy, P. L.; Lopez Moreno, J. J.; Marzari, F.; Michalik, H.; Naletto, G.; Rickman, H.; Sabau, L.; Wenzel, K.-P.

    2015-01-01

    Aims: 67P/Churyumov-Gerasimenko is the target comet of the ESA's Rosetta mission. After commissioning at the end of March 2014, the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) onboard Rosetta started imaging the comet and its dust environment to investigate how they change and evolve while approaching the Sun. Methods: We focused our work on Narrow Angle Camera (NAC) orange images and Wide Angle Camera (WAC) red and visible-610 images acquired between 2014 March 23 and June 24, when the nucleus of 67P was unresolved and moving from approximately 4.3 AU to 3.8 AU inbound. During this period the 67P-Rosetta distance decreased from 5 million to 120,000 km. Results: Through aperture photometry, we investigated how the comet brightness varies with heliocentric distance. 67P was likely already weakly active at the end of March 2014, with excess flux above that expected for the nucleus. The comet's brightness was mostly constant during the three months of approach observations, apart from one outburst that occurred around April 30 and a second increase in flux after June 20. The coma was resolved in the brightness profiles from mid-April. Analysis of the coma morphology suggests that most of the activity comes from a source towards the celestial north pole of the comet, but the outburst that occurred on April 30 released material in a different direction.
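
    A minimal sketch of the aperture photometry step, assuming a calibrated image and a known comet centroid (the OSIRIS pipeline and the aperture radii used in the paper are not reproduced here):

        import numpy as np

        def aperture_flux(image, xc, yc, r_ap, r_in, r_out):
            """Total flux within a circular aperture, with the sky level
            estimated as the median of a surrounding annulus and
            subtracted per pixel."""
            yy, xx = np.indices(image.shape)
            r = np.hypot(xx - xc, yy - yc)
            aperture = r <= r_ap
            annulus = (r >= r_in) & (r <= r_out)
            sky = np.median(image[annulus])  # background per pixel
            return image[aperture].sum() - sky * aperture.sum()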

  18. Characterization of fast deuterons involved in the production of fusion neutrons in a dense plasma focus

    NASA Astrophysics Data System (ADS)

    Kubes, P.; Paduch, M.; Sadowski, M. J.; Cikhardt, J.; Cikhardtova, B.; Klir, D.; Kravarik, J.; Munzar, V.; Rezac, K.; Zielinska, E.; Skladnik-Sadowska, E.; Szymaszek, A.; Tomaszewski, K.; Zaloga, D.

    2018-01-01

    This paper considers regions of fast deuteron production in correlation with the evolution of ordered structures inside the pinch column of a mega-ampere plasma focus discharge. Ion pinhole cameras equipped with plastic PM-355 track-detectors recorded fast deuterons escaping in the downstream and other directions (up to 60° to the z-axis). Time-integrated ion images made it possible to estimate the sources of deuteron acceleration for the known magnetic field and deuteron energy values. The images of the fast deuterons emitted in the solid angle ranging from 0° to 4° showed two forms: central spots and circular images. The spots of 1-2 cm in diameter were produced by deuterons from the central pinch regions. The circular-shaped images of a radius above 3 cm (or their parts) were formed by deuterons from the region surrounding the dense pinch column. The ion pinhole cameras placed at angles above 20° to the z-axis recorded the ion spots only, and the ring-images were missing. The central region of deuteron acceleration could be associated mainly with plasmoids, and the circular images could be connected with ring-shaped regions of a radius corresponding to the tops of the plasma lobules outside the dense pinch column. The deuteron tracks forming ring-shaped images of a smaller radius (0.5-1 cm) could be produced by deflections of the fast deuterons caused by the magnetic field inside the dense pinch column.

  19. Casting Light and Shadows on a Saharan Dust Storm

    NASA Technical Reports Server (NTRS)

    2003-01-01

    On March 2, 2003, near-surface winds carried a large amount of Saharan dust aloft and transported the material westward over the Atlantic Ocean. These observations from the Multi-angle Imaging SpectroRadiometer (MISR) aboard NASA's Terra satellite depict an area near the Cape Verde Islands (situated about 700 kilometers off of Africa's western coast) and provide images of the dust plume along with measurements of its height and motion. Tracking the three-dimensional extent and motion of air masses containing dust or other types of aerosols provides data that can be used to verify and improve computer simulations of particulate transport over large distances, with application to enhancing our understanding of the effects of such particles on meteorology, ocean biological productivity, and human health.

    MISR images the Earth by measuring the spatial patterns of reflected sunlight. In the upper panel of the still image pair, the observations are displayed as a natural-color snapshot from MISR's vertical-viewing (nadir) camera. High-altitude cirrus clouds cast shadows on the underlying ocean and dust layer, which are visible in shades of blue and tan, respectively. In the lower panel, heights derived from automated stereoscopic processing of MISR's multi-angle imagery show the cirrus clouds (yellow areas) to be situated about 12 kilometers above sea level. The distinctive spatial patterns of these clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. For most of the dust layer, which is spatially much more homogeneous, the stereoscopic approach was unable to retrieve elevation data. However, the edges of shadows cast by the cirrus clouds onto the dust (indicated by blue and cyan pixels) provide sufficient spatial contrast for a retrieval of the dust layer's height, and indicate that the top of layer is only about 2.5 kilometers above sea level.

    Motion of the dust and clouds is directly observable with the assistance of the multi-angle 'fly-over' animation (see the original site). The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the 70-degree backward image. Much of the south-to-north shift in the position of the clouds is due to geometric parallax between the nine view angles (rather than true motion), whereas the west-to-east motion is due to actual motion of the clouds over the seven minutes during which all nine cameras observed the scene. MISR's automated data processing retrieved a primarily westerly (eastward) motion of these clouds with speeds of 30-40 meters per second. Note that there is much less geometric parallax for the cloud shadows owing to the relatively low altitude of the dust layer upon which the shadows are cast (the amount of parallax is proportional to elevation, and a feature at the surface would have no geometric parallax at all); however, the westerly motion of the shadows matches the actual motion of the clouds. The automated processing was not able to resolve a velocity for the dust plume, but by manually tracking dust features within the plume images that comprise the animation sequence we can derive an easterly (westward) speed of about 16 meters per second. These analyses and visualizations of the MISR data demonstrate that not only are the cirrus clouds and dust separated significantly in elevation, but that they also exist in completely different wind regimes, with the clouds moving toward the east and the dust moving toward the west.
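
    The height-from-parallax principle behind the stereoscopic retrieval can be stated in one line: for a forward- and an aftward-viewing camera, the along-track disparity of a feature is approximately its height times the sum of the view-angle tangents. A simplified illustration (flat-surface geometry, with hypothetical numbers, and ignoring the feature motion that the real MISR retrieval must also separate):

        import numpy as np

        def stereo_height(parallax_m, fwd_angle_deg, aft_angle_deg):
            """Feature height from the along-track parallax between a
            forward- and an aftward-viewing camera (simplified geometry)."""
            t = np.tan(np.radians(fwd_angle_deg)) + np.tan(np.radians(aft_angle_deg))
            return parallax_m / t

        # e.g. ~33 km of parallax between the 70-degree fore and aft views
        # corresponds to roughly 6 km of elevation (hypothetical numbers)
        print(stereo_height(33000.0, 70.0, 70.0))  # ~6005 m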

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17040. The panels cover an area of about 312 kilometers x 242 kilometers, and use data from blocks 74 to 77 within World Reference System-2 path 207.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory,Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  20. Lone Propeller

    NASA Image and Video Library

    2017-09-15

    This view of Saturn's A ring features a lone "propeller" -- one of many such features created by small moonlets embedded in the rings as they attempt, unsuccessfully, to open gaps in the ring material. The image was taken by NASA's Cassini spacecraft on Sept. 13, 2017. It is among the last images Cassini sent back to Earth. The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 420,000 miles (676,000 kilometers) from Saturn. Image scale is 2.3 miles (3.7 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21894

  1. Seeing the Storm

    NASA Image and Video Library

    2007-03-08

    This beautiful look at Saturn's south polar atmosphere shows the hurricane-like polar storm swirling there. Sunlight highlights its high cloud walls, especially around the 10 o'clock position. The image was taken with the Cassini spacecraft wide-angle camera using a spectral filter sensitive to wavelengths of infrared light centered at 939 nanometers. The image was taken on Jan. 30, 2007 at a distance of approximately 1.1 million kilometers (700,000 miles) from Saturn. Image scale is 61 kilometers (38 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA08892

  2. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mccormick, Ryan F.; Truong, Sandra K.; Mullet, John E.

    Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits.

  3. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture

    DOE PAGES

    Mccormick, Ryan F.; Truong, Sandra K.; Mullet, John E.

    2016-08-15

    Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits.
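
    As background to the reconstruction step, the standard back-projection of a depth image into a 3D point cloud is sketched below; the intrinsics are assumed known from the depth camera's calibration, and this is only the first step of a pipeline like the one described, not the authors' full segmentation code:

        import numpy as np

        def depth_to_points(depth, fx, fy, cx, cy):
            """Back-project a depth image (metres) into an (N, 3) point
            cloud with the pinhole model; pixels with no depth return
            (zero) are dropped."""
            v, u = np.indices(depth.shape)
            x = (u - cx) * depth / fx
            y = (v - cy) * depth / fy
            pts = np.dstack((x, y, depth)).reshape(-1, 3)
            return pts[pts[:, 2] > 0]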

  4. Single Still Image

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This narrow-angle image of the Moon, taken by Cassini's camera system, is one of the best in a sequence of narrow-angle frames acquired as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. The 80 millisecond exposure was taken through a spectral filter centered at 0.33 microns; the filter bandpass was 85 Angstroms wide. The spatial scale of the image is about 1.4 miles per pixel (about 2.3 kilometers). The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsutani, Takaomi; Taya, Masaki; Ikuta, Takashi

    A parallel image detection system using an annular pupil for electron optics was developed to realize an increase in the depth of focus, aberration-free imaging, and separation of amplitude and phase images under scanning transmission electron microscopy (STEM). Apertures for annular pupils able to suppress high-energy electron scattering were developed using a focused ion beam (FIB) technique. The annular apertures were designed with an outer diameter of 40 μm and an inner diameter of 32 μm. A taper angle varying from 20 deg. to 1 deg. was applied to the slits of the annular apertures to suppress the influence of high-energy electron scattering. Each azimuthal-angle image on the scintillator was detected by a multi-anode photomultiplier tube assembly through 40 optical fibers bundled in a ring shape. To focus the image appearing on the scintillator onto the optical fibers, an optical lens relay system with an attached CCD camera was developed. The system enables taking 40 images simultaneously from different scattering directions.

  6. Digital Astronaut Photography: A Discovery Dataset for Archaeology

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.

    2010-01-01

    Astronaut photography acquired from the International Space Station (ISS) using commercial off-the-shelf cameras offers a freely-accessible source for high to very high resolution (4-20 m/pixel) visible-wavelength digital data of Earth. Since ISS Expedition 1 in 2000, over 373,000 images of the Earth-Moon system (including land surface, ocean, atmospheric, and lunar images) have been added to the Gateway to Astronaut Photography of Earth online database (http://eol.jsc.nasa.gov ). Handheld astronaut photographs vary in look angle, time of acquisition, solar illumination, and spatial resolution. These attributes of digital astronaut photography result from a unique combination of ISS orbital dynamics, mission operations, camera systems, and the individual skills of the astronaut. The variable nature of astronaut photography makes the dataset uniquely useful for archaeological applications in comparison with more traditional nadir-viewing multispectral datasets acquired from unmanned orbital platforms. For example, surface features such as trenches, walls, ruins, urban patterns, and vegetation clearing and regrowth patterns may be accentuated by low sun angles and oblique viewing conditions (Fig. 1). High spatial resolution digital astronaut photographs can also be used with sophisticated land cover classification and spatial analysis approaches like Object Based Image Analysis, increasing the potential for use in archaeological characterization of landscapes and specific sites.

  7. Tethys Eyes Saturn

    NASA Image and Video Library

    2015-06-15

    The two large craters on Tethys, near the line where day fades to night, almost resemble two giant eyes observing Saturn. The location of these craters on Tethys' terminator throws their topography into sharp relief. Both are large craters, but the larger and southernmost of the two shows a more complex structure. The angle of the lighting highlights a central peak in this crater. Central peaks are the result of the surface reacting to the violent post-impact excavation of the crater. The northern crater does not show a similar feature. Possibly the impact was too small to form a central peak, or the composition of the material in the immediate vicinity couldn't support the formation of a central peak. In this image Tethys is significantly closer to the camera, while the planet is in the background. Yet the moon is still utterly dwarfed by the giant Saturn. This view looks toward the anti-Saturn side of Tethys. North on Tethys is up and rotated 42 degrees to the right. The image was taken in visible light with the Cassini spacecraft wide-angle camera on April 11, 2015. The view was obtained at a distance of approximately 75,000 miles (120,000 kilometers) from Tethys. Image scale at Tethys is 4 miles (7 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/pia18318

  8. Estimation of melanin content in iris of human eye: prognosis for glaucoma diagnostics

    NASA Astrophysics Data System (ADS)

    Bashkatov, Alexey N.; Koblova, Ekaterina V.; Genina, Elina A.; Kamenskikh, Tatyana G.; Dolotov, Leonid E.; Sinichkin, Yury P.; Tuchin, Valery V.

    2007-02-01

    Based on experimental data obtained in vivo from digital analysis of color images of human irises, the mean melanin content in human eye irises has been estimated. For registration of the color images, an Olympus C-5060 digital camera was used. The images were obtained from irises of healthy volunteers as well as from irises of patients with open-angle glaucoma. A computer program was developed for digital analysis of the images. The results are useful for the development of novel methods of non-invasive glaucoma diagnostics and the optimization of existing ones.

  9. Technology Readiness Level (TRL) Advancement of the MSPI On-Board Processing Platform for the ACE Decadal Survey Mission

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.; Wilson, Thor O.

    2011-01-01

    The Xilinx Virtex-5QV is a new Single-event Immune Reconfigurable FPGA (SIRF) device that is targeted as the spaceborne processor for the NASA Decadal Survey Aerosol-Cloud-Ecosystem (ACE) mission's Multiangle SpectroPolarimetric Imager (MSPI) instrument, currently under development at JPL. A key technology needed for MSPI is on-board processing (OBP) to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's ESTO AIST Program, JPL is demonstrating how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multi-angle cameras can be reduced to 0.45 Mbytes/sec, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information. This is done via a least-squares fitting algorithm implemented on the Virtex-5 FPGA operating in real-time on the raw video data stream.
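
    The reduction rests on replacing each pixel's raw sample stream with a handful of least-squares model coefficients. The sketch below is a generic stand-in: the actual MSPI polarimetric modulation model and its FPGA implementation are not reproduced here.

        import numpy as np

        def reduce_samples(raw, basis):
            """Replace raw time samples with least-squares coefficients of
            a small basis. raw: (n_pixels, n_samples); basis: (n_samples,
            n_coeffs) model matrix with n_coeffs << n_samples. Returns the
            (n_pixels, n_coeffs) array that would be downlinked."""
            coeffs, *_ = np.linalg.lstsq(basis, raw.T, rcond=None)
            return coeffs.T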

  10. Impediment to Spirit Drive on Sol 1806

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The hazard avoidance camera on the front of NASA's Mars Exploration Rover Spirit took this image after a drive by Spirit on the 1,806th Martian day, or sol, (January 31, 2009) of Spirit's mission on the surface of Mars.

    The wheel at the bottom right of the image is Spirit's right-front wheel. Because that wheel no longer turns, Spirit drives backwards dragging that wheel. The drive on Sol 1806 covered about 30 centimeters (1 foot). The rover team had planned a longer drive, but Spirit stopped short, apparently from the right front wheel encountering the partially buried rock visible next to that wheel.

    The hazard avoidance cameras on the front and back of the rover provide wide-angle views. The hill on the horizon in the right half of this image is Husband Hill. Spirit reached the summit of Husband Hill in 2005.

  11. Saturn's F-Ring

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This narrow-angle camera image of Saturn's F Ring was taken through the clear filter at a distance of 6.9 million km from Saturn on 8 November 1980. The brightness variations of this tightly-constrained ring shown here indicate that the ring is less uniform in makeup than the larger rings. JPL managed the Voyager Project for NASA's Office of Space Science.

  12. Correlation peak analysis applied to a sequence of images using two different filters for eye tracking model

    NASA Astrophysics Data System (ADS)

    Patrón, Verónica A.; Álvarez Borrego, Josué; Coronel Beltrán, Ángel

    2015-09-01

    Eye tracking has many useful applications that range from biometrics to face recognition and human-computer interaction. The analysis of the characteristics of the eyes has become one of the methods to accomplish the location of the eyes and the tracking of the point of gaze. Characteristics such as the contrast between the iris and the sclera, the shape, and the distribution of colors and dark/light zones in the area are the starting point for these analyses. In this work, the focus is on the contrast between the iris and the sclera, performing a correlation in the frequency domain. The images were acquired with an ordinary camera, with which images of thirty-one volunteers were taken. The reference image is an image of the subject looking at a point in front of them at a 0° angle. Then sequences of images are taken with the subject looking at different angles. These images are processed in MATLAB, obtaining the maximum correlation peak for each image using two different filters. Each filter was analyzed, and the one giving the best performance in terms of data utility was selected; the results are displayed in graphs that show the decay of the correlation peak as the eye moves progressively through different angles. These data are used to obtain a mathematical model that establishes a relationship between the angle of vision (AOV) and the maximum correlation peak (MCP). The model is tested using input images from subjects not contained in the initial database, predicting the angle of vision from the maximum correlation peak data.
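
    A minimal sketch of the frequency-domain correlation step, using a plain matched filter rather than the two specific filters compared in the paper:

        import numpy as np

        def max_correlation_peak(reference, image):
            """Circular cross-correlation of two equal-sized grayscale
            images via FFT; returns the maximum peak, scaled so that
            identical images give 1."""
            f = reference - reference.mean()
            g = image - image.mean()
            F, G = np.fft.fft2(f), np.fft.fft2(g)
            corr = np.real(np.fft.ifft2(F * np.conj(G)))
            return corr.max() / np.sqrt((f**2).sum() * (g**2).sum())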

  13. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template based on a correlation measure between two images. Because there is no need to segment the image and the computational cost is low, image correlation matching is a basic method of target tracking. This paper mainly studies a grayscale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method. This method excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time use. However, since target tracking often requires high real-time performance, we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. In order to study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching location was found using the image matching algorithm described above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image, the image was processed by mean and median filters, and image matching was then performed. The results show that when the noise is weak, mean and median filtering achieve good results; but when the density of the salt-and-pepper noise exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the image matching result will be wrong.
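
    The two core steps, SAD matching and parabolic sub-pixel refinement, can be sketched as follows (a straightforward reference implementation, not the authors' optimized real-time code):

        import numpy as np

        def sad_match(image, template):
            """Integer-pixel position minimizing the Sum of Absolute
            Differences between the template and each image window."""
            th, tw = template.shape
            best, pos = np.inf, (0, 0)
            for r in range(image.shape[0] - th + 1):
                for c in range(image.shape[1] - tw + 1):
                    sad = np.abs(image[r:r+th, c:c+tw] - template).sum()
                    if sad < best:
                        best, pos = sad, (r, c)
            return pos

        def parabolic_subpixel(s_left, s_center, s_right):
            """Fractional offset of the minimum of a parabola fitted through
            the SAD scores at the best position and its two neighbors
            (applied once per axis for a paraboloidal refinement)."""
            denom = s_left - 2.0 * s_center + s_right
            return 0.0 if denom == 0 else 0.5 * (s_left - s_right) / denom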

  14. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, which is a rapid sequence of photographs that allow an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography 16, 35, and 70 mm film and framing rates between 64-12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  15. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in Apollo, Skylab, Apollo- Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  16. Simultaneous in-plane and out-of-plane displacement measurement based on a dual-camera imaging system and its application to inspection of large-scale space structures

    NASA Astrophysics Data System (ADS)

    Ri, Shien; Tsuda, Hiroshi; Yoshida, Takeshi; Umebayashi, Takashi; Sato, Akiyoshi; Sato, Eiichi

    2015-07-01

    Optical methods providing full-field deformation data are of potentially enormous interest to mechanical engineers. In this study, an in-plane and out-of-plane displacement measurement method based on a dual-camera imaging system is proposed. The in-plane and out-of-plane displacements are determined simultaneously from two in-plane displacement fields observed by two digital cameras at different view angles. The fundamental measurement principle and experimental results confirming the accuracy are presented. In addition, we applied this method to displacement measurement in a static loading and bending test of a solid rocket motor case (CFRP material; 2.2 m diameter and 2.3 m long) for an up-to-date Epsilon rocket developed by JAXA. The effectiveness and measurement accuracy are confirmed by comparison with a conventional displacement sensor. This method could be useful for diagnosing the reliability of large-scale space structures in rocket development.
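
    The essential idea, that two cameras at different view angles see different mixtures of in-plane and out-of-plane motion, reduces to a small linear solve. A simplified sketch under an assumed small-displacement mixing model (not the paper's full projection geometry):

        import numpy as np

        def decompose_displacement(d1, d2, theta1_deg, theta2_deg):
            """Recover in-plane (u) and out-of-plane (w) displacement from
            the apparent displacements d1, d2 measured by two cameras at
            view angles theta1, theta2, assuming d_i = u*cos(theta_i)
            + w*sin(theta_i)."""
            t1, t2 = np.radians([theta1_deg, theta2_deg])
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            u, w = np.linalg.solve(A, np.array([d1, d2]))
            return u, w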

  17. The 1997 Spring Regression of the Martian South Polar Cap: Mars Orbiter Camera Observations

    USGS Publications Warehouse

    James, P.B.; Cantor, B.A.; Malin, M.C.; Edgett, K.; Carr, M.H.; Danielson, G.E.; Ingersoll, A.P.; Davies, M.E.; Hartmann, W.K.; McEwen, A.S.; Soderblom, L.A.; Thomas, P.C.; Veverka, J.

    2000-01-01

    The Mars Orbiter cameras (MOC) on Mars Global Surveyor observed the south polar cap of Mars during its spring recession in 1997. The images acquired by the wide angle cameras reveal a pattern of recession that is qualitatively similar to that observed by Viking in 1977 but that does differ in at least two respects. The 1977 recession in the 0° to 120° longitude sector was accelerated relative to the 1997 observations after Ls = 240°; the Mountains of Mitchel also detached from the main cap earlier in 1997. Comparison of the MOC images with Mars Orbiter Laser Altimeter data shows that the Mountains of Mitchel feature is controlled by local topography. Relatively dark, low albedo regions well within the boundaries of the seasonal cap were observed to have red-to-violet ratios that characterize them as frost units rather than unfrosted or partially frosted ground; this suggests the possibility of regions covered by CO2 frost having different grain sizes.

  18. SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.

    PubMed

    Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S

    2012-06-01

    To analyze Bragg-peak beams in an SOBP (spread-out Bragg-peak) beam using a CCD (charge-coupled device) camera - scintillation screen system. We separated each Bragg-peak beam using the light output of a high sensitivity scintillation material acquired by a CCD camera and compared the results with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system was constructed with a high sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam irradiated by the double scattering mode of a PROTEUS 235 proton therapy machine in NCC is 8 cm in width, with a 13 g/cm2 range. The gain, dose rate and current of this beam are 50, 2 Gy/min and 70 nA, respectively. Also, we simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high sensitivity scintillation plate as a function of integration time (0.1 - 1.0 sec). The CCD camera images at the shortest integration time (0.1 sec) were acquired automatically and randomly. Bragg-peak beams in the SOBP beam were analyzed from the acquired images. Then, the SOBP beam used in this study was calculated with the Geant4 toolkit and the Bragg-peak beams in the SOBP beam were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The results of the experiment were compared with those of the simulation. We analyzed Bragg-peak beams in the SOBP beam using the light output of a scintillation plate acquired by a CCD camera and compared them with those of the Geant4 simulation. We are going to study SOBP beam analysis using a more effective image acquisition technique. © 2012 American Association of Physicists in Medicine.

  19. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    PubMed

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
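
    For orientation only, a deliberately small CNN patch classifier of the general kind described is sketched below in PyTorch; the paper's actual architecture, input size, and training procedure are not reproduced, so every layer choice here is an assumption:

        import torch
        import torch.nn as nn

        class TinyHumanNet(nn.Module):
            """Minimal CNN for binary human / non-human classification of
            64x64 grayscale patches (an illustrative stand-in, not the
            authors' network)."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, 2)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # e.g. logits = TinyHumanNet()(torch.randn(1, 1, 64, 64))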

  20. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve the accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of an iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.
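
    The look-up-table matching step amounts to a nearest-neighbor search in the space of segmented boundary features. A minimal sketch (the feature definitions and the eye-model generation of the table are assumed to be handled elsewhere):

        import numpy as np

        def estimate_gaze(iris_features, lut_features, lut_gazes):
            """Return the gaze of the look-up-table entry whose precomputed
            boundary features are closest (Euclidean) to the segmented
            iris/pupil features. lut_features: (N, d); lut_gazes: (N, ...)."""
            diffs = lut_features - np.asarray(iris_features)
            idx = np.argmin(np.einsum('ij,ij->i', diffs, diffs))
            return lut_gazes[idx]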

  1. Tomographic PIV behind a prosthetic heart valve

    NASA Astrophysics Data System (ADS)

    Hasler, D.; Landolt, A.; Obrist, D.

    2016-05-01

    The instantaneous three-dimensional velocity field past a bioprosthetic heart valve was measured using tomographic particle image velocimetry. Two digital cameras were used together with a mirror setup to record PIV images from four different angles. Measurements were conducted in a transparent silicone phantom with a simplified geometry of the aortic root. The refraction indices of the silicone phantom and the working fluid were matched to minimize optical distortion from the flow field to the cameras. The silicone phantom of the aorta was integrated in a flow loop driven by a piston pump. Measurements were conducted for steady and pulsatile flow conditions. Results of the instantaneous, ensemble and phase-averaged flow field are presented. The three-dimensional velocity field reveals a flow topology, which can be related to features of the aortic valve prosthesis.

  2. Scintillating C Ring

    NASA Image and Video Library

    2007-01-16

    Both luminous and translucent, the C ring sweeps out of the darkness of Saturn's shadow and obscures the planet at lower left. The ring is characterized by broad, isolated bright areas, or "plateaus," surrounded by fainter material. This view looks toward the unlit side of the rings from about 19 degrees above the ringplane. North on Saturn is up. The dark, inner B ring is seen at lower right. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 15, 2006 at a distance of approximately 632,000 kilometers (393,000 miles) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 56 degrees. Image scale is 34 kilometers (21 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA08855

  3. A Simple Instrument Designed to Provide Consistent Digital Facial Images in Dermatology

    PubMed Central

    Nirmal, Balakrishnan; Pai, Sathish B; Sripathi, Handattu

    2013-01-01

    Photography has proven to be a valuable tool in the field of dermatology. The major reason for poor photographs is the inability to produce comparable images in the subsequent follow ups. Combining digital photography with image processing software analysis brings consistency in tracking serial images. Digital photographs were taken with the aid of an instrument which we designed in our workshop to ensure that photographs were taken with identical patient positioning, camera angles and distance. It is of paramount importance in aesthetic dermatology to appreciate even subtle changes after each treatment session which can be achieved by taking consistent digital images. PMID:23723469

  4. A simple instrument designed to provide consistent digital facial images in dermatology.

    PubMed

    Nirmal, Balakrishnan; Pai, Sathish B; Sripathi, Handattu

    2013-05-01

    Photography has proven to be a valuable tool in the field of dermatology. The major reason for poor photographs is the inability to produce comparable images in the subsequent follow ups. Combining digital photography with image processing software analysis brings consistency in tracking serial images. Digital photographs were taken with the aid of an instrument which we designed in our workshop to ensure that photographs were taken with identical patient positioning, camera angles and distance. It is of paramount importance in aesthetic dermatology to appreciate even subtle changes after each treatment session which can be achieved by taking consistent digital images.

  5. Mapping the Apollo 17 Astronauts' Positions Based on LROC Data and Apollo Surface Photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Gläser, P.; Wählisch, M.; Robinson, M. S.

    2011-10-01

    The positions from where the Apollo 17 astronauts recorded panoramic image series, e.g. at the so-called "traverse stations", were precisely determined using ortho-images (0.5 m/pxl) as well as Digital Terrain Models (DTM) (1.5 m/pxl and 100 m/pxl) derived from Lunar Reconnaissance Orbiter Camera (LROC) data. Features imaged in the Apollo panoramas were identified in LROC ortho-images. Least-squares techniques were applied to angles measured in the panoramas to determine the astronaut's position to within the ortho-image pixel. The result of our investigation of Traverse Station 1 in the north-west of Steno Crater is presented.
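
    A generic sketch of such an angular resection follows: given the map coordinates of features identified in the ortho-image and the azimuths measured to them in the panorama, the camera position (and the panorama's unknown heading offset) follow from a least-squares fit. This is a plain illustration of the technique, not the authors' exact formulation:

        import numpy as np
        from scipy.optimize import least_squares

        def resect_position(features_xy, azimuths, x0=(0.0, 0.0, 0.0)):
            """Recover camera position (x, y) and heading offset from
            azimuth angles (radians) to features with known map
            coordinates; x0 is a rough initial guess."""
            feats = np.asarray(features_xy)
            az = np.asarray(azimuths)

            def residuals(p):
                x, y, heading = p
                pred = np.arctan2(feats[:, 1] - y, feats[:, 0] - x)
                d = pred - (az + heading)
                return np.arctan2(np.sin(d), np.cos(d))  # wrap to [-pi, pi]

            sol = least_squares(residuals, x0)
            return sol.x[:2], sol.x[2]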

  6. Sharpening Ejecta Patterns: Investigating Spectral Fidelity After Controlled Intensity-Hue-Saturation Image Fusion of LROC Images of Fresh Craters

    NASA Astrophysics Data System (ADS)

    Awumah, A.; Mahanti, P.; Robinson, M. S.

    2017-12-01

    Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance, but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process; the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2) to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied. The fraction of spatial detail from the Pan that is used is determined by a control parameter whose value may be varied from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red=415nm, green=321/415nm, blue=321/360nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (from 1 to 10 in 0.01 increments; beyond 10, the spectral distortion saturates) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less color distortion. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact from color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
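
    A rough sketch of a controlled IHS-style fusion follows; the mapping from the control parameter t to a detail weight w = 1 - 1/t (t = 1 gives no Pan detail, t -> infinity the full Pan detail) is our stand-in for the exact formula in Choi (2006), which is not reproduced here:

        import numpy as np

        def controlled_ihs_fuse(ms, pan, t):
            """IHS-style pan-sharpening with a tunable detail weight: each
            band of the (rows, cols, bands) MS image (upsampled to the Pan
            grid) is shifted by w * (Pan - I), where I is the MS intensity."""
            w = 1.0 - 1.0 / max(float(t), 1.0)
            intensity = ms.mean(axis=2)
            return ms + w * (pan - intensity)[:, :, None]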

  7. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  8. Determination of sub-daily glacier uplift and horizontal flow velocity with time-lapse images using ImGRAFT

    NASA Astrophysics Data System (ADS)

    Egli, Pascal; Mankoff, Ken; Mettra, François; Lane, Stuart

    2017-04-01

    This study investigates the application of feature tracking algorithms to the monitoring of glacier uplift. Several publications have confirmed the occurrence of an uplift of the glacier surface in the late morning hours of the mid to late ablation season. This uplift is thought to result from high subglacial water pressures at the onset of melt, when overnight-deposited sediment blocks subglacial channels. We use time-lapse images from a camera mounted in front of the glacier tongue of Haut Glacier d'Arolla during August 2016, in combination with a Digital Elevation Model and GPS measurements, to investigate the phenomenon of glacier uplift using the feature tracking toolbox ImGRAFT. The camera position is corrected for all images, and the images are geo-rectified using Ground Control Points visible in every image. Changing lighting conditions due to different sun angles create substantial noise and complicate the image analysis. A small glacier uplift of the order of 5 cm over a time span of 3 hours may be observed on certain days, confirming previous research.

  9. Phase Curves of Nix and Hydra from the New Horizons Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Verbiscer, Anne J.; Porter, Simon B.; Buratti, Bonnie J.; Weaver, Harold A.; Spencer, John R.; Showalter, Mark R.; Buie, Marc W.; Hofgartner, Jason D.; Hicks, Michael D.; Ennico-Smith, Kimberly; Olkin, Catherine B.; Stern, S. Alan; Young, Leslie A.; Cheng, Andrew; The New Horizons Team

    2018-01-01

    NASA’s New Horizons spacecraft’s voyage through the Pluto system centered on 2015 July 14 provided images of Pluto’s small satellites Nix and Hydra at viewing angles unattainable from Earth. Here, we present solar phase curves of the two largest of Pluto’s small moons, Nix and Hydra, observed by the New Horizons LOng Range Reconnaissance Imager and Multi-spectral Visible Imaging Camera, which reveal the scattering properties of their icy surfaces in visible light. Construction of these solar phase curves enables comparisons between the photometric properties of Pluto’s small moons and those of other icy satellites in the outer solar system. Nix and Hydra have higher visible albedos than those of other resonant Kuiper Belt objects and irregular satellites of the giant planets, but not as high as small satellites of Saturn interior to Titan. Both Nix and Hydra appear to scatter visible light preferentially in the forward direction, unlike most icy satellites in the outer solar system, which are typically backscattering.

  10. Single-shot velocity-map imaging of attosecond light-field control at kilohertz rate.

    PubMed

    Süssmann, F; Zherebtsov, S; Plenge, J; Johnson, Nora G; Kübel, M; Sayler, A M; Mondes, V; Graf, C; Rühl, E; Paulus, G G; Schmischke, D; Swrschek, P; Kling, M F

    2011-09-01

    High-speed, single-shot velocity-map imaging (VMI) is combined with carrier-envelope phase (CEP) tagging by a single-shot stereographic above-threshold ionization (ATI) phase-meter. The experimental setup provides a versatile tool for angle-resolved studies of the attosecond control of electrons in atoms, molecules, and nanostructures. Single-shot VMI at kHz repetition rate is realized with a highly sensitive megapixel complementary metal-oxide semiconductor camera, eliminating the need for additional image intensifiers. The developed camera software allows for efficient background suppression and the storage of up to 1024 events for each image in real time. The approach is demonstrated by measuring the CEP-dependence of the electron emission from ATI of Xe in strong (≈10¹³ W/cm²) near single-cycle (4 fs) laser fields. Efficient background signal suppression with the system is illustrated for the electron emission from SiO₂ nanospheres. © 2011 American Institute of Physics

  11. Lunar Satellite Snaps Image of Earth

    NASA Image and Video Library

    2014-05-07

    This image, captured Feb. 1, 2014, shows a colorized view of Earth from the moon-based perspective of NASA's Lunar Reconnaissance Orbiter. Credit: NASA/Goddard/Arizona State University -- NASA's Lunar Reconnaissance Orbiter (LRO) experiences 12 "earthrises" every day; however, LROC (short for LRO Camera) is almost always busy imaging the lunar surface, so an opportunity for LROC to capture a view of Earth arises only rarely. On Feb. 1, 2014, LRO pitched forward while approaching the moon's north pole, allowing the LROC Wide Angle Camera to capture Earth rising above Rozhdestvenskiy crater (112 miles, or 180 km, in diameter). Read more: go.nasa.gov/1oqMlgu

  12. True 3-D View of 'Columbia Hills' from an Angle

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This mosaic of images from NASA's Mars Exploration Rover Spirit shows a panorama of the 'Columbia Hills' without any adjustment for rover tilt. When viewed through 3-D glasses, depth is much more dramatic and easier to see than in a tilt-adjusted version. This is because stereo views are created by producing two images, one corresponding to the view from the panoramic camera's left-eye camera and the other to the view from its right-eye camera. The brain processes the visual input more accurately when the two images do not have any vertical offset. In this view, the vertical alignment is nearly perfect, but the horizon appears to curve because of the rover's tilt (the rover was parked on a steep slope, tilted approximately 22 degrees to the west-northwest). Spirit took the images for this 360-degree panorama while en route to higher ground in the 'Columbia Hills.'

    The highest point visible in the hills is 'Husband Hill,' named for space shuttle Columbia Commander Rick Husband. To the right are the rover's tracks through the soil, where it stopped to perform maintenance on its right front wheel in July. In the distance, below the hills, is the floor of Gusev Crater, where Spirit landed Jan. 3, 2004, before traveling more than 3 kilometers (1.8 miles) to reach this point. This vista comprises 188 images taken by Spirit's panoramic camera from its 213th day, or sol, on Mars to its 223rd sol (Aug. 9 to 19, 2004). Team members at NASA's Jet Propulsion Laboratory and Cornell University spent several weeks processing images and producing geometric maps to stitch all the images together in this mosaic. The 360-degree view is presented in a cylindrical-perspective map projection with geometric seam correction.

  13. Mars Color Imager (MARCI) on the Mars Climate Orbiter

    USGS Publications Warehouse

    Malin, M.C.; Bell, J.F.; Calvin, W.; Clancy, R.T.; Haberle, R.M.; James, P.B.; Lee, S.W.; Thomas, P.C.; Caplinger, M.A.

    2001-01-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 × 6 × 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 × 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6° FOV, which covers ~40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals. Copyright 2001 by the American Geophysical Union.

  14. The Mars Color Imager (MARCI) on the Mars Climate Orbiter

    NASA Astrophysics Data System (ADS)

    Malin, M. C.; Calvin, W.; Clancy, R. T.; Haberle, R. M.; James, P. B.; Lee, S. W.; Thomas, P. C.; Caplinger, M. A.

    2001-08-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 × 6 × 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 × 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6° FOV, which covers ~40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals.

  15. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles, as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined from the device tip location as well as the previous camera position using a Kalman filter, in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
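
    As an illustration of the camera-smoothing step, the sketch below runs a constant-velocity Kalman filter over noisy 3D device-tip positions; the state layout and the noise magnitudes q and r are assumptions for demonstration, not values from the paper.

    ```python
    import numpy as np

    def smooth_camera_path(tip_positions, q=1e-3, r=1e-2):
        """Filter noisy 3D tip positions with a constant-velocity Kalman
        filter (state = [position, velocity], unit time step)."""
        F = np.eye(6); F[:3, 3:] = np.eye(3)          # state transition
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        Q, R = q * np.eye(6), r * np.eye(3)
        x = np.zeros(6); x[:3] = tip_positions[0]
        P = np.eye(6)
        smoothed = []
        for z in tip_positions:
            x = F @ x                                      # predict
            P = F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            x = x + K @ (z - H @ x)                        # update
            P = (np.eye(6) - K @ H) @ P
            smoothed.append(x[:3].copy())
        return np.array(smoothed)
    ```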

  16. New NASA Images of Irma's Towering Clouds (Anaglyph)

    NASA Image and Video Library

    2017-09-08

    On Sept. 7, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite passed over Hurricane Irma at approximately 11:20 am local time. The MISR instrument comprises nine cameras that view the Earth at different angles, and since it takes roughly seven minutes for all nine cameras to capture the same location, the motion of the clouds between images allows scientists to calculate the wind speed at the cloud tops. This stereo anaglyph combines two of the MISR angles to show a three-dimensional view of Irma. You will need red-blue glasses to view the anaglyph; place the red lens over your left eye. At this time, Irma's eye was located approximately 60 miles (100 kilometers) north of the Dominican Republic and 140 miles (230 kilometers) north of its capital, Santo Domingo. Irma was a powerful Category 5 hurricane, with wind speeds at the ocean surface up to 185 miles (300 kilometers) per hour. The MISR data show that at cloud top, winds near the eye wall (the most destructive part of the storm) were approximately 90 miles per hour (145 kilometers per hour), and the maximum cloud-top wind speed throughout the storm calculated by MISR was 135 miles per hour (220 kilometers per hour). While the hurricane's dominant rotation direction is counter-clockwise, winds near the eye wall are consistently pointing outward from it. This is an indication of outflow, the process by which a hurricane draws in warm, moist air at the surface and ejects cool, dry air at its cloud tops. https://photojournal.jpl.nasa.gov/catalog/PIA21945

  17. Tropical Storm Harvey Spotted by NASA's MISR

    NASA Image and Video Library

    2017-08-29

    On Aug. 27, 2017, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite passed over then-Tropical Storm Harvey about noon local time, the day after the storm first made landfall in Texas as a Category 4 hurricane. The MISR instrument is equipped with nine cameras that observe Earth at different angles over a time period of seven minutes. Geometric information from the multiple camera views is used to compute the cloud top heights, and motion of the clouds during the image sequence is used to calculate wind speed. This composite image shows the storm as viewed by the central, downward-looking camera (left), as well as the cloud top heights in kilometers (center) and the wind speeds (right) superimposed on the image. The length of the arrows is proportional to the wind speed, while their color shows the altitude at which the winds were calculated. Also included is an animation made by combining all nine images from the MISR cameras, showing the motion of the storm during the seven-minute period. At this time, the center of the tropical storm was located just northwest of the city of Victoria and maximum wind speeds on the ground were around 40 miles per hour (65 kilometers per hour) according to the National Oceanic and Atmospheric Administration (NOAA), which matches well with the near-surface winds calculated by MISR to the west of Corpus Christi. In the 36 hours or so since it had made landfall, Harvey had weakened considerably -- these images show that the eye had disappeared and much of the circular motion of the storm had dissipated, as shown by the calculated wind directions. However, the area of very high clouds and strong winds near Houston shows that the storm was continuing to produce powerful rain bands. At this point, hydrographs managed by NOAA in downtown Houston were already recording flood stage at both the Buffalo Bayou (28 feet or 8.5 meters as of 12:15 p.m. CDT August 27) and the White Oak Bayou (40 feet or 12 meters at last record that morning). The MISR data show the storm clouds reaching an altitude of about 10 miles (16 kilometers). These data were captured during Terra orbit 94108. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21927

  18. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The algorithm samples two windows in the image and locates the line centroid in each; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship is established between the image coordinates of the points and their corresponding ground coordinates. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
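
    A minimal sketch of such an image-to-ground relationship for a forward-tilted pinhole camera is shown below; the focal length, principal point, camera height, and pitch are hypothetical values, and the real robot's calibration would differ.

    ```python
    import math

    def image_to_ground(u, v, f=800.0, cx=320.0, cy=240.0,
                        cam_height=0.5, cam_pitch=math.radians(30.0)):
        """Back-project pixel (u, v) onto the ground plane for a pinhole
        camera at height cam_height (m), pitched down by cam_pitch.
        Returns (lateral, forward) ground coordinates in metres."""
        # ray direction in camera coordinates (x right, y down, z forward)
        rx, ry, rz = (u - cx) / f, (v - cy) / f, 1.0
        cp, sp = math.cos(cam_pitch), math.sin(cam_pitch)
        down = cp * ry + sp * rz        # downward component after pitch
        fwd = -sp * ry + cp * rz        # forward component after pitch
        t = cam_height / down           # scale so the ray meets the ground
        return rx * t, fwd * t

    def steering_inputs(p_near, p_far):
        """Line angle and lateral offset from two ground points on the line;
        the angle is zero when the line runs straight ahead of the robot."""
        dx, dy = p_far[0] - p_near[0], p_far[1] - p_near[1]
        return math.atan2(dx, dy), p_near[0]
    ```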

  19. Radiometric Cross-Calibration of GAOFEN-1 Wfv Cameras with LANDSAT-8 Oli and Modis Sensors Based on Radiation and Geometry Matching

    NASA Astrophysics Data System (ADS)

    Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.

    2018-04-01

    Cross-calibration has the advantages of high precision, low resource requirements and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) but carry no onboard calibration. In this paper, the four-band radiometric cross-calibration coefficients of the WFV1 camera were obtained based on radiation and geometry matching, taking the Landsat 8 OLI (Operational Land Imager) sensor as reference. The Scale-Invariant Feature Transform (SIFT) feature detection method, together with a distance and included-angle weighting method, was introduced to correct misregistration of the WFV-OLI image pairs. A radiative transfer model was used to eliminate the response differences between the OLI sensor and the WFV1 camera through a spectral match factor (SMF). The near-infrared band of the WFV1 camera encompasses water vapor absorption bands, so a look-up table (LUT) of SMF as a function of water vapor amount was established to estimate the water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).
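
    The final coefficient fit reduces to a linear regression between matched samples, radiance against digital number; a minimal sketch with entirely hypothetical region-of-interest values:

    ```python
    import numpy as np

    def fit_calibration(dn, radiance):
        """Least-squares gain/offset for L = gain * DN + offset over
        spatially and spectrally matched samples (final fitting step only)."""
        A = np.vstack([dn, np.ones_like(dn)]).T
        (gain, offset), *_ = np.linalg.lstsq(A, radiance, rcond=None)
        return gain, offset

    dn = np.array([312.0, 455.0, 601.0, 788.0])   # hypothetical WFV1 ROI DNs
    rad = np.array([41.2, 60.0, 79.5, 104.1])     # matched reference radiances
    print(fit_calibration(dn, rad))               # -> (gain, offset)
    ```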

  20. An augmented-reality edge enhancement application for Google Glass.

    PubMed

    Hwang, Alex D; Peli, Eli

    2014-08-01

    Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected by image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only in the diffuser-film condition (simulating low vision). With the benefit of see-through augmented-reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.

  1. Wide-field Fourier ptychographic microscopy using laser illumination source

    PubMed Central

    Chung, Jaebum; Lu, Hangwen; Ou, Xiaoze; Zhou, Haojiang; Yang, Changhuei

    2016-01-01

    Fourier ptychographic (FP) microscopy is a coherent imaging method that can synthesize an image with a higher bandwidth using multiple low-bandwidth images captured at different spatial frequency regions. The method's demand for multiple images drives the need for a brighter illumination scheme and a high-frame-rate camera for faster acquisition. We report the use of a guided laser beam as an illumination source for an FP microscope. It uses a mirror array and a 2-dimensional scanning Galvo mirror system to provide a sample with plane-wave illuminations at diverse incidence angles. The use of a laser introduces speckle into the image capture process due to reflections between glass surfaces in the system; the speckle appears as slowly varying background fluctuations in the final reconstructed image. We are able to mitigate these artifacts by including a phase image obtained by differential phase contrast (DPC) deconvolution in the FP algorithm. We use a 1-W laser configured to provide a collimated beam with 150 mW of power and a beam diameter of 1 cm, allowing a total capture time of 0.96 seconds for 96 raw FPM input images in our system, with the camera sensor's frame rate being the bottleneck for speed. We demonstrate a factor of 4 resolution improvement using a 0.1-NA objective lens over the full camera field of view of 2.7 mm by 1.5 mm. PMID:27896016
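
    The core FP spectrum-stitching loop can be sketched in a few lines. The toy version below omits pupil recovery and the DPC-based phase correction that the paper adds, and it assumes the spectral sub-region offsets have been precomputed from the incidence angles.

    ```python
    import numpy as np

    def fp_reconstruct(measurements, offsets, hr_shape, pupil, n_iter=10):
        """Toy Fourier-ptychography loop: each low-res intensity image
        constrains one sub-region of the high-res spectrum selected by its
        illumination angle (offsets). pupil is a binary mask, low-res shape."""
        spectrum = np.zeros(hr_shape, dtype=complex)
        spectrum[hr_shape[0] // 2, hr_shape[1] // 2] = 1.0   # flat-field start
        h, w = pupil.shape
        for _ in range(n_iter):
            for img, (r, c) in zip(measurements, offsets):
                sub = spectrum[r:r + h, c:c + w] * pupil
                field = np.fft.ifft2(np.fft.ifftshift(sub))
                # keep the estimated phase, enforce the measured amplitude
                field = np.sqrt(img) * np.exp(1j * np.angle(field))
                upd = np.fft.fftshift(np.fft.fft2(field))
                spectrum[r:r + h, c:c + w] = (
                    spectrum[r:r + h, c:c + w] * (1 - pupil) + upd * pupil)
        return np.fft.ifft2(np.fft.ifftshift(spectrum))   # complex HR image
    ```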

  2. Small format digital photogrammetry for applications in the earth sciences

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, Dirk

    2010-05-01

    Photogrammetry is often considered one of the most precise and versatile surveying techniques: the same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field. In this case a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. A lack of photogrammetric training at the same time requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges: A) definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor; B) optimization of image acquisition and geometric stability of the image block; C) identification of a small camera suitable for precise measurements in the field; D) optimization of the workflow from image acquisition to preparation of images for stereo measurements; E) introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field; they were more rugged than the ping pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass. The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. Wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was lower than that of the Ricoh, the pixel pitch of the Sigma cameras was much larger. Hence, the same mechanical movement would have a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera.
A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 × 18 × 20 m volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students through this technique. The technique does not sacrifice photogrammetric rigor for ease of use; therefore successful users of the presented method easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allows beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.

  3. Saturnian Dawn

    NASA Image and Video Library

    2017-06-26

    NASA's Cassini spacecraft peers toward a sliver of Saturn's sunlit atmosphere while the icy rings stretch across the foreground as a dark band. This view looks toward the unilluminated side of the rings from about 7 degrees below the ring plane. The image was taken in green light with the Cassini spacecraft wide-angle camera on March 31, 2017. The view was obtained at a distance of approximately 620,000 miles (1 million kilometers) from Saturn. Image scale is 38 miles (61 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21334

  4. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    DTIC Science & Technology

    2015-03-26

    camera model. Light reflected or projected from objects in the scene of the outside world is taken in by the aperture (or opening) shaped as a double... ...model's analog aspects with an analog-to-digital interface converting raw images of the outside world scene into digital information a computer can use to... [Figure 2.7: Digital Image Coordinate System. Used with permission [30].] Angular Field of View: the angular field of view is the angle of the world scene...

  5. Mixing Waters and Moving Ships off the North Carolina Coast

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The estuarine and marine environments of the United States' eastern seaboard provide the setting for a variety of natural and human activities associated with the flow of water. This set of Multi-angle Imaging SpectroRadiometer images from October 11, 2000 (Terra orbit 4344) captures the intricate system of barrier islands, wetlands, and estuaries comprising the coastal environments of North Carolina and southern Virginia. On the right-hand side of the images, a thin line of land provides a tenuous separation between the Albemarle and Pamlico Sounds and the Atlantic Ocean. The wetland communities of this area are vital to productive fisheries and water quality.

    The top image covers an area of about 350 kilometers x 260 kilometers and is a true-color view from MISR's 46-degree backward-looking camera. Looking away from the Sun suppresses glint from the reflective water surface and enables mapping the color of suspended sediments and plant life near the coast. Out in the open sea, the dark blue waters indicate the Gulf Stream. As it flows toward the northeast, this ocean current presses close to Cape Hatteras (the pointed cape in the lower portion of the images), and brings warm, nutrient-poor waters northward from equatorial latitudes. North Carolina's Outer Banks are often subjected to powerful currents and storms which cause erosion along the east-facing shorelines. In an effort to save the historic Cape Hatteras lighthouse from the encroaching sea, it was jacked out of the ground and moved about 350 meters in 1999.

    The bottom image was created with red band data from the 46-degree backward, 70-degree forward, and 26-degree forward cameras displayed as red, green, and blue, respectively. The color variations in this multi-angle composite indicate different angular (rather than spectral) signatures. Here, the increased reflection of land vegetation at the view angle facing away from the Sun causes a reddish tint. Water, on the other hand, appears predominantly in shades of blue and green due to the bright sunglint captured by the forward-viewing cameras. Contrasting angular signatures, most likely associated with variations in the orientation and slope of wind-driven surface waves, are apparent in the sunglint patterns.

    Details of human activities are visible in these images. Near the top center, the Chesapeake Bay Bridge-Tunnel complex, which links Norfolk with Virginia's eastern shore, can be seen. The locations of two tunnels which route automobiles below the water appear as gaps in the visible roadway. In the top image, the small white specks in the open waters of the Atlantic Ocean are ship wakes. The movements of the ships have been visualized by displaying the views from MISR's four backward-viewing cameras in an animated sequence (below). These cameras successively observe the same surface locations over a time interval of about 160 seconds. The large version of the animation covers an area of 135 kilometers x 130 kilometers. The land area on the left-hand side includes the birthplace of aviation, Kitty Hawk, where the Wright Brothers made their first sustained, powered flight in 1903.

    [figure removed for brevity, see original site]

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  6. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.

  7. Phantom Limb

    NASA Image and Video Library

    2017-09-25

    The brightly lit limb of a crescent Enceladus looks ethereal against the blackness of space. The rest of the moon, lit by light reflected from Saturn, presents a ghostly appearance. Enceladus (313 miles or 504 kilometers across) is back-lit in this image, as is apparent from the thin crescent. However, the Sun-Enceladus-spacecraft (or phase) angle, at 141 degrees, is too low to make the moon's famous plumes easily visible. This view looks toward the Saturn-facing hemisphere of Enceladus. North on Enceladus is up. The above image is a composite of images taken with the Cassini spacecraft narrow-angle camera on March 29, 2017 using filters that allow infrared, green, and ultraviolet light. The image taken with the filter centered on 930 nm (IR) is red in this composite, the image taken with the green filter is green, and the image taken with the filter centered on 338 nm (UV) is blue. The view was obtained at a distance of approximately 110,000 miles (180,000 kilometers) from Enceladus. Image scale is 0.6 miles (1 kilometer) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21346

  8. Solar System Portrait - Earth as Pale Blue Dot

    NASA Image and Video Library

    1996-09-12

    This narrow-angle color image of the Earth, dubbed Pale Blue Dot, is a part of the first ever 'portrait' of the solar system taken by NASA’s Voyager 1. The spacecraft acquired a total of 60 frames for a mosaic of the solar system from a distance of more than 4 billion miles from Earth and about 32 degrees above the ecliptic. From Voyager's great distance Earth is a mere point of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. This blown-up image of the Earth was taken through three color filters -- violet, blue and green -- and recombined to produce the color image. The background features in the image are artifacts resulting from the magnification. http://photojournal.jpl.nasa.gov/catalog/PIA00452

  9. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method suffers errors in camera modulation transfer function (MTF) measurement when the fast Fourier transform (FFT) is used, because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) is proposed for camera MTF measurement. Theoretical simulations for noisy images at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that, under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
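
    For context, a compact sketch of the conventional slanted-edge pipeline appears below; the binning step is precisely where tilt-angle errors produce nonuniformly spaced ESF samples, and the NUFFT variant proposed in the paper transforms those samples directly instead of forcing them onto a uniform grid. Parameter choices are illustrative.

    ```python
    import numpy as np

    def slanted_edge_mtf(roi, edge_angle_deg, oversample=4):
        """Conventional slanted-edge MTF: project pixels onto the edge
        normal, bin into an oversampled ESF, differentiate to the LSF,
        and take the FFT magnitude (normalized to DC)."""
        rows, cols = np.indices(roi.shape)
        slope = np.tan(np.radians(edge_angle_deg))
        dist = (cols - slope * rows).ravel()     # distance across the edge
        vals = roi.ravel()
        # uniform oversampled binning -- the step the NUFFT method avoids
        bins = np.round((dist - dist.min()) * oversample).astype(int)
        counts = np.maximum(np.bincount(bins), 1)
        esf = np.bincount(bins, weights=vals) / counts
        lsf = np.gradient(esf) * np.hanning(esf.size)   # windowed LSF
        mtf = np.abs(np.fft.rfft(lsf))
        return mtf / mtf[0]
    ```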

  10. MISR Multi-angle Views of Sunday Morning Fires

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Hot, dry Santa Ana winds began blowing through the Los Angeles and San Diego areas on Sunday, October 21, 2007. Wind speeds ranging from 30 to 50 mph were measured in the area, with extremely low relative humidities. These winds, coupled with exceptionally dry conditions due to lack of rainfall, resulted in a number of fires in the Los Angeles and San Diego areas, causing the evacuation of more than 250,000 people.

    These two images show the Southern California coast from Los Angeles to San Diego from two of the nine cameras on the Multi-angle Imaging SpectroRadiometer (MISR) instrument on the NASA EOS Terra satellite. These images were obtained around 11:35 a.m. PDT on Sunday morning, October 21, 2007, and show a number of plumes extending out over the Pacific Ocean. In addition, locations identified as potential hot spots by the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on the same satellite are outlined in red.

    The left image is from MISR's nadir-looking camera, and the plumes appear very faint. The image on the right is from MISR's 60° forward-looking camera, which accentuates the amount of light scattered by aerosols in the atmosphere, including smoke and dust. Both of these images are false color and contain information from MISR's red, green, blue and near-infrared wavelengths, which makes vegetated land appear greener than it would naturally. Notice in the right-hand image that the color of the plumes associated with the MODIS hot spots is bluish, while plumes not associated with hot spots appear more yellow. This is because the latter plumes are composed of dust kicked up by the strong Santa Ana winds. In some locations along Interstate 5 on this date, visibility was severely reduced due to blowing dust. MISR's multiangle and multispectral capability gives it the ability to distinguish smoke from dust in this situation.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These images were generated from a portion of the imagery acquired during Terra orbit 41713, and use data from blocks 63 to 66 within World Reference System-2 path 40.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center. JPL is a division of the California Institute of Technology.

  11. Dark Spots on Titan

    NASA Image and Video Library

    2005-05-02

    This recent image of Titan reveals more complex patterns of bright and dark regions on the surface, including a small, dark, circular feature completely surrounded by brighter material. During the two most recent flybys of Titan, on March 31 and April 16, 2005, Cassini captured a number of images of the hemisphere of Titan that faces Saturn. The image at the left is taken from a mosaic of images obtained in March 2005 (see PIA06222) and shows the location of the more recently acquired image at the right. The new image shows intriguing details in the bright and dark patterns near an 80-kilometer-wide (50-mile) crater seen first by Cassini's synthetic aperture radar experiment during a Titan flyby in February 2005 (see PIA07368) and subsequently seen by the imaging science subsystem cameras as a dark spot (center of the image at the left). Interestingly, a smaller, roughly 20-kilometer-wide (12-mile) dark and circular feature can be seen within an irregularly shaped, brighter ring, and is similar to the larger dark spot associated with the radar crater. However, the imaging cameras see only brightness variations, and without topographic information, the identity of this feature as an impact crater cannot be conclusively determined from this image. The visual and infrared mapping spectrometer -- which is sensitive to longer wavelengths, where Titan's atmospheric haze is less obscuring -- observed this area simultaneously with the imaging cameras, so those data, and perhaps future observations by Cassini's radar, may help to answer the question of this feature's origin. The new image at the right consists of five images that have been added together and enhanced to bring out surface detail and to reduce noise, although some camera artifacts remain. These images were taken with the Cassini spacecraft narrow-angle camera using a filter sensitive to wavelengths of infrared light centered at 938 nanometers -- considered to be the imaging science subsystem's best spectral filter for observing the surface of Titan. This view was acquired from a distance of 33,000 kilometers (20,500 miles). The pixel scale of this image is 390 meters (0.2 miles) per pixel, although the actual resolution is likely to be several times larger. http://photojournal.jpl.nasa.gov/catalog/PIA06234

  12. Noisy Ocular Recognition Based on Three Convolutional Neural Networks.

    PubMed

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-12-17

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When iris images are obtained under unconstrained conditions, their quality is degraded by optical and motion blur, off-angle view (the user's eyes looking elsewhere rather than directly at the camera), specular reflection (SR), and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), the mobile iris challenge evaluation (MICHE) database, and the institute of automation of the Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  13. Contact lens assisted imaging with integrated flexible handheld probe for glaucoma diagnosis

    NASA Astrophysics Data System (ADS)

    Hong, Xun Jie Jeesmond; V. K., Shinoj; Murukeshan, V. M.; Baskaran, M.; Aung, Tin

    2017-06-01

    Angle closure glaucoma accounts for the majority of bilateral blindness in Asian countries such as Singapore, China, and India. Abnormalities in the optic nerve and aqueous outflow system are the most indicative clinical hallmarks for glaucoma of this clinical subtype. Traditional photographic imaging techniques to assess the drainage angle are contact based and may expose patients to the risk of corneal abrasion and infection. In addition, these procedures require the use of viscous ophthalmic gels as a coupling medium to overcome the phenomenon of total internal reflection at the tear-air interface. In this paper, we propose an integrated flexible handheld probe consisting of a micro color CCD video camera and white-light LEDs. The handheld probe is able to capture images of the fundus or the opposite iridocorneal angle when placed at the central cornea or the limbus, respectively. Here, we propose the use of a hydrogel contact lens as an index-matching medium and a better protective barrier, as an alternative to conventional ophthalmic gels. The proposed imaging system and methodology have been successfully tested on porcine eye samples, ex vivo. With its high repeatability, reproducibility, and good safety profile, it is believed that the proposed imaging system and methodology will complement existing imaging modalities in the diagnosis and management of glaucoma.

  14. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

    PubMed

    Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir

    2016-06-01

    This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method that uses only tomographic and RGBD reconstructions, which means it does not impose a particular shape on the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
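
    The rigid transformation inside the Iterative Closest Point algorithm rests on a closed-form least-squares step for paired points; a minimal sketch of that step (the Kabsch/SVD solution) follows, with the correspondence search and FPFH descriptor matching omitted.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t with dst ~ R @ src + t
        for paired Nx3 point sets (the inner step of each ICP iteration)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t
    ```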

  15. Adaptive illumination source for multispectral vision system applied to material discrimination

    NASA Astrophysics Data System (ADS)

    Conde, Olga M.; Cobo, Adolfo; Cantero, Paulino; Conde, David; Mirapeix, Jesús; Cubillas, Ana M.; López-Higuera, José M.

    2008-04-01

    A multispectral system based on a monochrome camera and an adaptive illumination source is presented in this paper. Its preliminary application is focused on material discrimination for the food and beverage industries, where monochrome, color and infrared imaging have been successfully applied to this task. This work proposes a different approach, in which the relevant wavelengths for the required discrimination task are selected in advance using a Sequential Forward Floating Selection (SFFS) algorithm. A light source based on light-emitting diodes (LEDs) at these wavelengths is then used to sequentially illuminate the material under analysis, and the resulting images are captured by a CCD camera with spectral response in the entire range of the selected wavelengths. Finally, the several multispectral planes obtained are processed using a Spectral Angle Mapping (SAM) algorithm, whose output is the desired material classification. Among other advantages, this approach of controlled and specific illumination produces multispectral imaging with a simple monochrome camera, and cold illumination restricted to the specific relevant wavelengths, which is desirable for the food and beverage industry. The proposed system has been tested with success for the automatic detection of foreign objects in the tobacco processing industry.
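
    The SAM stage compares each pixel spectrum to a set of reference spectra by the angle between them; a minimal sketch, with the threshold and array layout as illustrative assumptions:

    ```python
    import numpy as np

    def spectral_angles(cube, reference):
        """Angle (radians) between every pixel spectrum in an (H, W, B)
        cube and one reference spectrum of length B."""
        p = cube / np.linalg.norm(cube, axis=-1, keepdims=True)
        r = reference / np.linalg.norm(reference)
        return np.arccos(np.clip(p @ r, -1.0, 1.0))

    def sam_classify(cube, references, threshold=0.1):
        """Label each pixel with the closest reference material, or -1
        when no spectral angle falls below the threshold."""
        ang = np.stack([spectral_angles(cube, r) for r in references], -1)
        labels = ang.argmin(axis=-1)
        labels[ang.min(axis=-1) > threshold] = -1
        return labels
    ```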

  16. Laser differential image-motion monitor for characterization of turbulence during free-space optical communication tests.

    PubMed

    Brown, David M; Juarez, Juan C; Brown, Andrea M

    2013-12-01

    A laser differential image-motion monitor (DIMM) system was designed and constructed as part of a turbulence characterization suite during the DARPA free-space optical experimental network experiment (FOENEX) program. The link measurement system measures the atmospheric coherence length (r0), atmospheric scintillation, and power in the bucket for the 1550 nm band. DIMM measurements are made with two separate apertures coupled to a single InGaAs camera. The angle of arrival (AoA) of the wavefront at each aperture can be calculated from the focal-spot movements imaged by the camera. By utilizing a single camera for the simultaneous measurement of the focal spots, the correlation of the variance in the AoA allows a straightforward computation of r0, as in traditional DIMM systems. Standard measurements of scintillation and power in the bucket are made with the same apertures by redirecting a percentage of the incoming signals to InGaAs detectors integrated with logarithmic amplifiers for high sensitivity and high dynamic range. By leveraging two small apertures, the instrument achieves a small size and weight configuration for mounting on actively tracking laser communication terminals to characterize link performance.
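
    The r0 computation follows the standard differential-image-motion relation; the sketch below uses the Sarazin-Roddier longitudinal-motion coefficients, with the sub-aperture diameter, baseline, and example motion value chosen purely for illustration.

    ```python
    def r0_from_dimm(var_long, wavelength=1.55e-6, D=0.05, d=0.2):
        """Fried parameter r0 (m) from the variance of the differential
        longitudinal image motion (rad^2), via the Sarazin-Roddier form
        var = 2*lambda^2 * r0**(-5/3) * (0.179*D**(-1/3) - 0.0968*d**(-1/3)),
        where D is the sub-aperture diameter and d the aperture separation."""
        K = 0.179 * D ** (-1.0 / 3.0) - 0.0968 * d ** (-1.0 / 3.0)
        return (2.0 * wavelength ** 2 * K / var_long) ** 0.6

    # e.g. 2 microradian rms differential motion -> r0 on the order of 0.5 m
    print(r0_from_dimm(var_long=(2e-6) ** 2))
    ```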

  17. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR, and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.

  18. Validation of geometric models for fisheye lenses

    NASA Astrophysics Data System (ADS)

    Schneider, D.; Schwalbe, E.; Maas, H.-G.

    The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all four of the basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10,000 of the object dimensions. This value can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution, as a consequence of their large field of view, and an inferior imaging quality in comparison to most central perspective lenses.
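
    For reference, the four basic models relate the radial image height r to the incidence angle theta as follows (f is the principal distance); a direct transcription:

    ```python
    import math

    # radial image height r(theta) for the four basic fisheye models
    def equidistant(theta, f):    return f * theta
    def equisolid(theta, f):      return 2.0 * f * math.sin(theta / 2.0)
    def stereographic(theta, f):  return 2.0 * f * math.tan(theta / 2.0)
    def orthographic(theta, f):   return f * math.sin(theta)

    # at theta = 90 deg the models give r/f of about 1.571, 1.414, 2.0, 1.0;
    # additional distortion parameters can largely absorb these differences
    for model in (equidistant, equisolid, stereographic, orthographic):
        print(model.__name__, round(model(math.pi / 2.0, 1.0), 3))
    ```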

  19. Hyperspectral Image-Based Night-Time Vehicle Light Detection Using Spectral Normalization and Distance Mapper for Intelligent Headlight Control

    PubMed Central

    Kim, Heekang; Kwon, Soon; Kim, Sungho

    2016-01-01

    This paper proposes a vehicle light detection method using a hyperspectral camera instead of a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as light-emitting diodes (LEDs), high-intensity discharge (HID) lamps, and halogen lamps; rear lamps likewise use LED and halogen sources. This paper builds on recent research in IHC. Some problems exist in the detection of headlights from CCD or CMOS images, such as the erroneous detection of street lights, sign lights, and reflections from the ego-car's reflector plate. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights have used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen). PMID:27399720

  20. Optimization of a protocol for myocardial perfusion scintigraphy by using an anthropomorphic phantom.

    PubMed

    Ramos, Susie Medeiros Oliveira; Glavam, Adriana Pereira; Kubo, Tadeu Takao Almodovar; de Sá, Lidia Vasconcellos

    2014-01-01

    To develop a study aimed at optimizing myocardial perfusion imaging. An anthropomorphic thorax phantom was imaged with a GE SPECT Ventri gamma camera at varied activities and acquisition times in order to evaluate the influence of these parameters on the quality of the reconstructed medical images. The (99m)Tc-sestamibi radiotracer was utilized, and the images were then clinically evaluated on the basis of data such as the summed stress score and on the technical image quality and perfusion. The software ImageJ was utilized in the data quantification. The results demonstrated that, for the standard acquisition time utilized in the procedure (15 seconds per angle), the injected activity could be reduced by 33.34%. Additionally, even if the standard scan time is reduced by 53.34% (7 seconds per angle), the standard injected activity could still be reduced by 16.67%, without impairing the image quality or the diagnostic reliability. The described method and respective results provide a basis for the development of a clinical trial of patients under an optimized protocol.

  1. Spectrally-encoded color imaging

    PubMed Central

    Kang, DongKyun; Yelin, Dvir; Bouma, Brett E.; Tearney, Guillermo J.

    2010-01-01

    Spectrally-encoded endoscopy (SEE) is a technique for ultraminiature endoscopy that encodes each spatial location on the sample with a different wavelength. One limitation of previous incarnations of SEE is that it inherently creates monochromatic images, since the spectral bandwidth is expended in the spatial encoding process. Here we present a spectrally-encoded imaging system that has color imaging capability. The new imaging system utilizes three distinct red, green, and blue spectral bands that are configured to illuminate the grating at different incident angles. By careful selection of the incident angles, the three spectral bands can be made to overlap on the sample. To demonstrate the method, a bench-top system was built, comprising a 2400-lpmm grating illuminated by three 525-μm-diameter beams with three different spectral bands. Each spectral band had a bandwidth of 75 nm, producing 189 resolvable points. A resolution target, color phantoms, and excised swine small intestine were imaged to validate the system's performance. The color SEE system showed qualitatively and quantitatively similar color imaging performance to that of a conventional digital camera. PMID:19688002
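
    The 189 resolvable points quoted above can be checked with a back-of-envelope grating calculation. The sketch below assumes first-order diffraction and the usual grating resolvance R = mN (N being the number of illuminated grooves), with a representative wavelength of 500 nm; these assumptions are ours, not the paper's.

        # Back-of-envelope check of the resolvable-point count per band.
        grating_lpmm = 2400          # grating density, lines per mm
        beam_diam_mm = 0.525         # illuminated beam diameter
        bandwidth_nm = 75.0          # spectral bandwidth per band
        wavelength_nm = 500.0        # representative wavelength (assumed)

        n_grooves = grating_lpmm * beam_diam_mm      # 1260 grooves illuminated
        delta_lambda = wavelength_nm / n_grooves     # ~0.40 nm spectral resolution
        print(round(bandwidth_nm / delta_lambda))    # ~189 resolvable points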

  2. Optimization of a protocol for myocardial perfusion scintigraphy by using an anthropomorphic phantom*

    PubMed Central

    Ramos, Susie Medeiros Oliveira; Glavam, Adriana Pereira; Kubo, Tadeu Takao Almodovar; de Sá, Lidia Vasconcellos

    2014-01-01

    Objective: To develop a study aimed at optimizing myocardial perfusion imaging. Materials and Methods: An anthropomorphic thorax phantom was imaged with a GE SPECT Ventri gamma camera at varied activities and acquisition times in order to evaluate the influence of these parameters on the quality of the reconstructed medical images. The 99mTc-sestamibi radiotracer was utilized, and the images were then clinically evaluated on the basis of data such as the summed stress score and on the technical image quality and perfusion. The software ImageJ was utilized in the data quantification. Results: The results demonstrated that, for the standard acquisition time utilized in the procedure (15 seconds per angle), the injected activity could be reduced by 33.34%. Additionally, even if the standard scan time is reduced by 53.34% (7 seconds per angle), the standard injected activity could still be reduced by 16.67%, without impairing the image quality or the diagnostic reliability. Conclusion: The described method and respective results provide a basis for the development of a clinical trial of patients under an optimized protocol. PMID:25741088

  3. Co-registration of Laser Altimeter Tracks with Digital Terrain Models and Applications in Planetary Science

    NASA Technical Reports Server (NTRS)

    Glaeser, P.; Haase, I.; Oberst, J.; Neumann, G. A.

    2013-01-01

    We have derived algorithms and techniques to precisely co-register laser altimeter profiles with gridded Digital Terrain Models (DTMs), typically derived from stereo images. The algorithm consists of an initial grid search followed by a least-squares matching and yields the translation parameters at sub-pixel level needed to align the DTM and the laser profiles in 3D space. This software tool was primarily developed and tested for co-registration of laser profiles from the Lunar Orbiter Laser Altimeter (LOLA) with DTMs derived from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) stereo images. Data sets can be co-registered with positional accuracy between 0.13 m and several meters depending on the pixel resolution and amount of laser shots, where rough surfaces typically result in more accurate co-registrations. Residual heights of the data sets are as small as 0.18 m. The software can be used to identify instrument misalignment, orbit errors, pointing jitter, or problems associated with reference frames being used. Also, assessments of DTM effective resolutions can be obtained. From the correct position between the two data sets, comparisons of surface morphology and roughness can be made at laser footprint- or DTM pixel-level. The precise co-registration allows us to carry out joint analysis of the data sets and ultimately to derive merged high-quality data products. Examples of matching other planetary data sets, like LOLA with LRO Wide Angle Camera (WAC) DTMs or Mars Orbiter Laser Altimeter (MOLA) with stereo models from the High Resolution Stereo Camera (HRSC) as well as Mercury Laser Altimeter (MLA) with Mercury Dual Imaging System (MDIS) are shown to demonstrate the broad science applications of the software tool.
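
    The initial grid-search step lends itself to a compact illustration. The sketch below is a simplification of the published tool (which continues with least-squares matching); it assumes the DTM is a 2-D height grid and the laser shots are already expressed in DTM pixel coordinates, and all names are illustrative.

        # Grid search for the horizontal offset minimizing the RMS of the
        # height residuals between laser shots and a bilinearly sampled DTM.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def coregister(dtm, px, py, pz, search=5.0, step=0.5):
            """dtm: 2-D height grid; (px, py, pz): laser points, pixel coords."""
            best = (np.inf, 0.0, 0.0)
            for dx in np.arange(-search, search + step, step):
                for dy in np.arange(-search, search + step, step):
                    # Bilinear interpolation of the DTM at shifted footprints.
                    h = map_coordinates(dtm, [py + dy, px + dx], order=1)
                    rms = np.sqrt(np.mean((pz - h) ** 2))
                    if rms < best[0]:
                        best = (rms, dx, dy)
            return best   # (residual height RMS, x offset, y offset)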

  4. Memoris, A Wide Angle Camera For Bepicolombo

    NASA Astrophysics Data System (ADS)

    Cremonese, G.; Memoris Team

    In response to the ESA Announcement of Opportunity for the BepiColombo payload, we are working on a wide angle camera concept named MEMORIS (MErcury MOderate Resolution Imaging System). MEMORIS will perform stereoscopic imaging of the whole Mercury surface using two different channels at +/- 20 degrees from the nadir point. It will achieve a spatial resolution of 50 m per pixel at 400 km from the surface (peri-Herm), corresponding to a vertical resolution of about 75 m with the stereo performance. The scientific objectives to be addressed by MEMORIS may be identified as follows: estimation of surface age based on crater counting; crater morphology and degradation; stratigraphic sequence of geological units; identification of volcanic features and related deposits; origin of plain units from morphological observations; distribution and type of tectonic structures; determination of relative ages among the structures based on cross-cutting relationships; 3D tectonics; global mineralogical mapping of the main geological units; and identification of weathering products. The last two items will draw on the multispectral capabilities of the camera, which utilizes 8 to 12 (TBD) broad-band filters. MEMORIS will be equipped with a further channel devoted to observations of the tenuous exosphere. It will look at the limb on a given arc of the BepiColombo orbit; in so doing, it will observe the exosphere above a surface latitude range of 25-75 degrees in the northern hemisphere. The exosphere images will be obtained above the surface just observed by the other two channels, in an attempt to find possible relationships, as ground-based observations suggest. The exospheric channel will have four narrow-band filters centered on the sodium and potassium emissions and the adjacent continua.

  5. Direct Geolocation of Satellite Images with the EO-CFI Libraries

    NASA Astrophysics Data System (ADS)

    de Miguel, Eduardo; Prado, Elena; Estebanez, Monica; Martin, Ana I.; Gonzalez, Malena

    2016-08-01

    The INTA Remote Sensing Laboratory has implemented a tool for the direct geolocation of satellite images. The core of the tool is a C code based on the "Earth Observation Mission CFI SW" from ESA. The tool accepts different types of inputs for satellite attitude (Euler angles, quaternions, default attitude models). Satellite position can be provided in either ECEF or ECI coordinates. The line of sight (LOS) of each individual detector is imported from an external file or is generated by the tool from camera parameters. The global DEM ACE2 is used to define the ground intersection of the LOS. The tool has already been tailored for georeferencing images from the forthcoming Spanish Earth Observation mission SEOSat/Ingenio and for the APIS camera onboard the INTA cubesat OPTOS. The next step is to configure it for the geolocation of Sentinel 2 L1b images. The tool has been internally validated by different means. This validation shows that the tool is suitable for georeferencing images from high spatial resolution missions. As part of the validation efforts, a code for simulating orbital information for LEO missions using EO-CFI has been produced.
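
    The core geolocation step, intersecting each detector's LOS with the terrain, can be illustrated with a simple ray-marching sketch. The flat-geometry setup and the dem_height lookup below are stand-ins for the EO-CFI and ACE2 machinery used by the actual tool, so this is a conceptual sketch only.

        # March along the line of sight until it passes below the terrain.
        import numpy as np

        def ground_intersection(pos, los, dem_height, step=100.0, max_range=2e6):
            """pos: satellite position (m); los: unit line-of-sight vector;
            dem_height(x, y): terrain height lookup under the ray point."""
            s = 0.0
            while s < max_range:
                p = pos + s * los
                if p[2] <= dem_height(p[0], p[1]):
                    return p          # first point at or below the terrain
                s += step
            return None               # line of sight never reaches the ground

        # Example: flat 200 m terrain, slightly off-nadir line of sight.
        hit = ground_intersection(np.array([0.0, 0.0, 500e3]),
                                  np.array([0.05, 0.0, -0.99875]),
                                  lambda x, y: 200.0)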

  6. MOC View of Mars98 Landing Zone - 12/24/97

    NASA Technical Reports Server (NTRS)

    1998-01-01

    On 12/24/1997, shortly after 08:17 UTC SCET, the Mars Global Surveyor Mars Orbiter Camera (MOC) took this high resolution image of a small portion of the potential Mars Surveyor '98 landing zone. For the purposes of planning MOC observations, this zone was defined as 75 +/- 2 degrees S latitude, 215 +/- 15 degrees W longitude. The image ran along the western perimeter of the Mars98 landing zone (e.g., near 245 degrees W longitude). At that longitude, the layered deposits are farther south than at the prime landing longitude. The images were shifted in latitude to fall onto the layered deposits. The location of the image was selected to try to cover a range of possible surface morphologies, reliefs, and albedos.

    This image is approximately 81.5 km long by 31 km wide. It covers an area of about 2640 sq. km. The center of the image is at 80.46 degrees S, 243.12 degrees W. The viewing conditions are: emission angle 56.30 degrees, incidence angle 58.88 degrees, phase angle 30.31 degrees, and 15.15 meters/pixel resolution. North is to the top of the image.

    The effects of ground fog, which obscures the surface features (left), have been minimized by filtering (right).

    Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  7. MOC View of Mars98 Landing Zone - 12/24/97

    NASA Technical Reports Server (NTRS)

    1998-01-01

    On 12/24/1997, shortly after 08:17 UTC SCET, the Mars Global Surveyor Mars Orbiter Camera (MOC) took this high resolution image of a small portion of the potential Mars Surveyor '98 landing zone. For the purposes of planning MOC observations, this zone was defined as 75 +/- 2 degrees S latitude, 215 +/- 15 degrees W longitude. The image ran along the western perimeter of the Mars98 landing zone (e.g., near 245 degrees W longitude). At that longitude, the layered deposits are farther south than at the prime landing longitude. The images were shifted in latitude to fall onto the layered deposits. The location of the image was selected to try to cover a range of possible surface morphologies, reliefs, and albedos.

    This image is approximately 83.3 km long by 31.7 km wide. It covers an area of about 2750 sq. km. The center of the image is at 81.97 degrees S, 246.74 degrees W. The viewing conditions are: emission angle 58.23 degrees, incidence angle 60.23 degrees, phase angle 30.34 degrees, and 15.49 meters/pixel resolution. North is to the top of the image.

    The effects of ground fog, which obscures the surface features (left), have been minimized by filtering (right).

    Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  8. Mars Orbiter Camera Acquires High Resolution Stereoscopic Images of the Viking One Landing Site

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Two MOC images of the vicinity of the Viking Lander 1 (MOC 23503 and 25403), acquired separately on 12 April 1998 at 08:32 PDT and 21 April 1998 at 13:54 PDT (respectively), are combined here in a stereoscopic anaglyph. The more recent, slightly better quality image is in the red channel, while the earlier image is shown in the blue and green channels. Only the overlap portion of the images is included in the composite.

    Image 23503 was taken at a viewing angle of 31.6 degrees from vertical; 25403 was taken at an angle of 22.4 degrees, for a difference of 9.4 degrees. Although this is not as large a difference as is typically used in stereo mapping, it is sufficient to provide some indication of relief, at least in locations of high relief.

    The image shows the raised rims and deep interiors of the larger impact craters in the area (the largest crater is about 650 m/2100 feet across). It shows that the relief on the ridges is very subtle, and that, in general, the Viking landing site is very flat. This result is, of course, expected: the VL-1 site was chosen specifically because it was likely to have low to very low slopes that represented potential hazards to the spacecraft.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  9. Reconstruction of noisy and blurred images using blur kernel

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Chopra, Vishal

    2017-11-01

    Blur is common in digital images. It can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images that uses sparse representation to identify the blur kernel. By analyzing image coordinates at coarse and fine scales, we estimate the kernel and, from it, the motion angle of the blurred image. The length of the motion kernel is then calculated using the Radon transform and Fourier analysis, and the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) method, is applied to obtain a cleaner, less noisy output image. All of these operations are performed in the MATLAB IDE.
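
    The final non-blind step can be sketched with standard library calls. The example below, which assumes the motion angle and length have already been estimated, builds a linear-motion blur kernel and applies Richardson-Lucy deconvolution via scikit-image; it illustrates the technique rather than reproducing the authors' MATLAB code.

        # Linear-motion PSF plus Richardson-Lucy (non-blind) deconvolution.
        import numpy as np
        from scipy.ndimage import rotate
        from skimage import restoration

        def motion_psf(length, angle_deg):
            """Linear motion blur kernel of given length (pixels) and angle."""
            psf = np.zeros((length, length))
            psf[length // 2, :] = 1.0                 # horizontal motion line
            psf = np.clip(rotate(psf, angle_deg, reshape=False), 0, None)
            return psf / psf.sum()

        blurred = np.random.rand(128, 128)            # stand-in for the input
        deblurred = restoration.richardson_lucy(blurred, motion_psf(9, 30.0),
                                                num_iter=30)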

  10. The Atlases of Vesta derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, T.; Kersten, E.; Matz, K.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2013-12-01

    During its two HAMO (High Altitude Mapping Orbit) phases in 2011 and 2012, the Dawn Framing Camera acquired about 6,000 clear-filter images with a resolution of about 60 m/pixel. We combined these images into a global ortho-rectified mosaic of Vesta (60 m/pixel resolution). Only very small areas near the northern pole were still in darkness and are missing from the mosaic. The Dawn Framing Camera also acquired about 10,000 high-resolution clear-filter images (about 20 m/pixel) of Vesta during its Low Altitude Mapping Orbit (LAMO). Unfortunately, the northern part of Vesta was still in darkness during this phase; good illumination (incidence angle < 70°) was available for only 66.8% of the surface [1]. We used the LAMO images to calculate another global mosaic of Vesta, this time with 20 m/pixel resolution. Both global mosaics were used to produce atlases of Vesta: a HAMO atlas with 15 tiles at a scale of 1:500,000 and a LAMO atlas with 30 tiles at scales between 1:200,000 and 1:225,180. The nomenclature used in these atlases is based on names and places historically associated with the Roman goddess Vesta and is compliant with the rules of the IAU. 65 names for geological features have already been approved by the IAU; 39 additional names are currently under review. Selected examples of both atlases will be shown in this presentation. Reference: [1] Roatsch, Th., et al., High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images. Planetary and Space Science (2013), http://dx.doi.org/10.1016/j.pss.2013.06.024i

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jozsef, G

    Purpose: To build a test device for HDR afterloaders capable of checking source positions and dwell times and of estimating the activity of the source. Methods: A catheter is taped to a plastic scintillation sheet. When a source travels through the catheter, the scintillator sheet lights up around the source. The sheet is monitored with a video camera, which records the movement of the light spot. The center of the spot in each image of the video provides the source location, and the time stamps of the images provide the dwell time the source spends at each location. Finally, the brightness of the light spot is related to the activity of the source. A code was developed to remove noise, calibrate the image scale to centimeters, eliminate the distortion caused by the oblique viewing angle, identify the boundaries of the light spot, transform the image into binary form, and detect and calculate the source motion, positions, and times. The images are much less noisy if the camera is shielded, which requires that the light spot be monitored in a mirror rather than directly. The whole assembly is covered from external light and measures approximately 17 × 35 × 25 cm (H × L × W). Results: A cheap camera in black-and-white mode proved to be sufficient with a plastic scintillator sheet. The best images were obtained with a 3 mm thick sheet with a ZnS:Ag surface coating. Shielding the camera decreased the noise but could not eliminate it. A test run, even in noisy conditions, showed differences of approximately 1 mm and 1 s from the planned positions and dwell times. Activity tests are in progress. Conclusion: The proposed method is feasible. It might simplify the monthly QA process of HDR brachytherapy units.
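
    The per-frame analysis can be sketched as follows: binarize each frame, take the centroid of the light spot as the source position, and accumulate dwell time from the frame timestamps. Frame grabbing, scale calibration, and the oblique-view correction are omitted; the names and the grouping tolerance are illustrative.

        # Light-spot centroid per frame and dwell-time accumulation.
        import numpy as np

        def spot_centroid(frame, threshold):
            """Return (row, col) centroid of pixels brighter than threshold."""
            mask = frame > threshold
            if not mask.any():
                return None
            rows, cols = np.nonzero(mask)
            return rows.mean(), cols.mean()

        def dwell_times(centroids, timestamps, tol=2.0):
            """Group consecutive centroids closer than tol pixels into dwells."""
            dwells = []                                # [centroid, dt, last_t]
            for c, t in zip(centroids, timestamps):
                if dwells and np.hypot(c[0] - dwells[-1][0][0],
                                       c[1] - dwells[-1][0][1]) < tol:
                    dwells[-1][1] += t - dwells[-1][2]  # extend current dwell
                    dwells[-1][2] = t
                else:
                    dwells.append([c, 0.0, t])          # start a new dwell
            return [(pos, dt) for pos, dt, _ in dwells]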

  12. Use of cameras for monitoring visibility impairment

    NASA Astrophysics Data System (ADS)

    Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie

    2018-02-01

    Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
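
    A minimal sketch of a contrast-type index with temporal averaging, in the spirit of the approach described above, is given below; the choice of RMS contrast, the region of interest, and the averaging scheme are our assumptions, not the study's exact definitions.

        # Resolution-tolerant contrast index with temporal averaging.
        import numpy as np

        def rms_contrast(roi):
            """RMS contrast of an image region: std of radiance over its mean.
            Unlike gradient operators, this is insensitive to pixel density."""
            roi = roi.astype(float)
            return roi.std() / roi.mean()

        def averaged_index(images):
            """Average the index over same-time-of-day images to suppress
            variability from clouds, weather, and changing illumination."""
            return float(np.mean([rms_contrast(img) for img in images]))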

  13. Quantifying seasonal variation of leaf area index using near-infrared digital camera in a rice paddy

    NASA Astrophysics Data System (ADS)

    Hwang, Y.; Ryu, Y.; Kim, J.

    2017-12-01

    Digital cameras have been widely used to quantify leaf area index (LAI). Numerous simple and automatic methods have been proposed to improve digital-camera-based LAI estimates. However, most studies in rice paddies relied on arbitrary thresholds or complex radiative transfer models to make binary images. Moreover, only a few studies reported continuous, automatic observation of LAI over the season in a rice paddy. The objective of this study is to quantify seasonal variations of LAI using raw near-infrared (NIR) images coupled with a histogram shape-based algorithm in a rice paddy. Because vegetation strongly reflects NIR light, we installed a NIR digital camera 1.8 m above the ground surface and acquired unsaturated raw-format images at one-hour intervals at solar zenith angles between 15° and 80° over the entire growing season in 2016 (from May to September). We applied a sub-pixel classification combined with a light scattering correction method. Finally, to confirm the accuracy of the quantified LAI, we also conducted direct (destructive sampling) and indirect (LAI-2200) manual observations of LAI once every ten days on average. Preliminary results show that NIR-derived LAI agreed well with in-situ observations, but divergence tended to appear once the rice canopy was fully developed. The continuous monitoring of LAI in rice paddies will help to better understand carbon and water fluxes and to evaluate satellite-based LAI products.
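
    A minimal sketch of the binarization-to-LAI chain is shown below, assuming Otsu's method as the histogram shape-based threshold and a Beer-Lambert inversion with an assumed extinction coefficient; the study's sub-pixel classification and light scattering correction are not reproduced.

        # NIR image -> binary canopy mask -> gap fraction -> LAI.
        import numpy as np
        from skimage.filters import threshold_otsu

        def lai_from_nir(nir, k=0.5):
            """nir: 2-D NIR image; k: assumed canopy extinction coefficient."""
            canopy = nir > threshold_otsu(nir)    # bright NIR pixels = vegetation
            gap = max(1.0 - canopy.mean(), 1e-6)  # fraction of non-canopy pixels
            return -np.log(gap) / k               # Beer-Lambert inversion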

  14. Real-time endoscopic image orientation correction system using an accelerometer and gyrosensor.

    PubMed

    Lee, Hyung-Chul; Jung, Chul-Woo; Kim, Hee Chan

    2017-01-01

    The discrepancy between spatial orientations of an endoscopic image and a physician's working environment can make it difficult to interpret endoscopic images. In this study, we developed and evaluated a device that corrects the endoscopic image orientation using an accelerometer and gyrosensor. The acceleration of gravity and angular velocity were retrieved from the accelerometer and gyrosensor attached to the handle of the endoscope. The rotational angle of the endoscope handle was calculated using a Kalman filter with transmission delay compensation. Technical evaluation of the orientation correction system was performed using a camera by comparing the optical rotational angle from the captured image with the rotational angle calculated from the sensor outputs. For the clinical utility test, fifteen anesthesiology residents performed a video endoscopic examination of an airway model with and without using the orientation correction system. The participants reported numbers written on papers placed at the left main, right main, and right upper bronchi of the airway model. The correctness and the total time it took participants to report the numbers were recorded. During the technical evaluation, errors in the calculated rotational angle were less than 5 degrees. In the clinical utility test, there was a significant time reduction when using the orientation correction system compared with not using the system (median, 52 vs. 76 seconds; P = .012). In this study, we developed a real-time endoscopic image orientation correction system, which significantly improved physician performance during a video endoscopic exam.
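
    The sensor-fusion idea can be illustrated with a complementary filter, a simpler stand-in for the Kalman filter with transmission delay compensation used by the published system: blend the gyro's integrated rate (fast but drifting) with the accelerometer's gravity direction (noisy but drift-free).

        # Complementary-filter roll estimate from gyro and accelerometer.
        import math

        def update_roll(roll, gyro_rate, ax, ay, dt, alpha=0.98):
            """roll: current estimate (rad); gyro_rate: angular rate (rad/s);
            ax, ay: gravity components from the accelerometer; dt: period."""
            roll_gyro = roll + gyro_rate * dt      # fast response, drifts
            roll_accel = math.atan2(ay, ax)        # drift-free, but noisy
            return alpha * roll_gyro + (1 - alpha) * roll_accel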

  15. New three-dimensional visualization system based on angular image differentiation

    NASA Astrophysics Data System (ADS)

    Montes, Juan D.; Campoy, Pascual

    1995-03-01

    This paper presents a new auto-stereoscopic system capable of reproducing static or moving 3D images by projection, with horizontal parallax or with horizontal and vertical parallaxes. The working principle is based on the angular differentiation of the images, which are projected onto the back side of the new patented screen. The most important features of this new system are: (1) Images can be seen by the naked eye, without the use of glasses or any other aid. (2) The 3D view angle is not restricted by the angle of the optics making up the screen. (3) Fine tuning is not necessary, independently of the parallax and of the size of the 3D view angle. (4) Coherent light is necessary neither in capturing the image nor in reproducing it; standard cameras and projectors suffice. (5) Since the images are projected, the size and depth of the reproduced scene are unrestricted. (6) Manufacturing cost is not excessive, due to the use of optics of large focal length, to the lack of fine tuning, and to the use of the same screen for several reproduction systems. (7) This technology can be used with any projection system: slides, movies, TV projectors, and so on. A first prototype for static images has been developed and tested, with a 3D view angle of 90 degrees and photographic resolution over a planar screen 900 mm in diagonal length. Recent developments have achieved a dramatic reduction in the size and cost of the projection system. Work has been carried out in parallel on a prototype for 3D moving images.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serabyn, E.; Liewer, K.; Huby, E.

    An optical vortex coronagraph has been implemented within the NIRC2 camera on the Keck II telescope and used to carry out on-sky tests and observations. The development of this new L′-band observational mode is described, and an initial demonstration of the new capability is presented: a resolved image of the low-mass companion to HIP 79124, which had previously been detected by means of interferometry. With HIP 79124 B at a projected separation of 186.5 mas, both the small inner working angle of the vortex coronagraph and the related imaging improvements were crucial in imaging this close companion directly. Due to higher Strehl ratios and more relaxed contrasts in L′ band versus H band, this new coronagraphic capability will enable high-contrast, small-angle observations of nearby young exoplanets and disks on a par with those of shorter-wavelength extreme adaptive optics coronagraphs.

  17. Plateaus Up Close

    NASA Image and Video Library

    2017-04-10

    Saturn's C ring isn't uniformly bright. Instead, about a dozen regions of the ring stand out as noticeably brighter than the rest of the ring, while about half a dozen regions are devoid of ring material. Scientists call the bright regions "plateaus" and the devoid regions "gaps." Scientists have determined that the plateaus are relatively bright because they have higher particle density and reflect more light, but researchers haven't solved the trickier puzzle of how the plateaus are created and maintained. This view looks toward the sunlit side of the rings from about 62 degrees above the ring plane. The image was taken Jan. 9, 2017 in green light with the Cassini spacecraft's narrow-angle camera. Cassini obtained the image while approximately 194,000 miles (312,000 kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 67 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA20529

  18. Flow visualization and characterization of evaporating liquid drops

    NASA Technical Reports Server (NTRS)

    Chao, David F. (Inventor); Zhang, Nengli (Inventor)

    2004-01-01

    An optical system consisting of drop-reflection imaging, reflection-refracted shadowgraphy, and top-view photography is used to measure the spreading and instantaneous dynamic contact angle of a volatile-liquid drop on a non-transparent substrate. The drop-reflection image and the shadowgraph are formed by projecting onto a screen the light of a collimated laser beam partially reflected by the drop and partially passing through it, while the top view is photographed separately using a video camera, recorder, and monitor. For a transparent liquid on a reflective solid surface, thermocapillary convection in the drop, induced by evaporation, can be viewed nonintrusively, and the drop's real-time profile data are synchronously recorded by video recording systems. Experimental results obtained with this technique clearly reveal that evaporation and thermocapillary convection greatly affect the spreading process and the characteristics of the dynamic contact angle of the drop.

  19. The application of support vector machines to analysis of global satellite data sets from MISR

    NASA Technical Reports Server (NTRS)

    Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Diner, David J.

    2005-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is one of a suite of five instruments onboard NASA's Terra EOS satellite, launched in December 1999. Typical satellite imagers view the Earth from a single direction, but MISR's cameras image the Earth simultaneously from nine different directions in four spectral bands. In this way, MISR provides unique multiangle information about solar radiation scattered from clouds, aerosols, and terrestrial surfaces. One of the primary goals of the MISR mission is to improve our understanding of how clouds and aerosols affect the Earth's global energy balance.

  20. Propeller Belts of Saturn

    NASA Image and Video Library

    2017-05-10

    This view from NASA's Cassini spacecraft is the sharpest ever taken of belts of the features called propellers in the middle part of Saturn's A ring. The propellers are the small, bright features that look like double dashes, visible on both sides of the wave pattern that crosses the image diagonally from top to bottom. The original discovery of propellers in this region in Saturn's rings was made using several images taken from very close to the rings during Cassini's 2004 arrival at Saturn. Those discovery images were of low resolution and were difficult to interpret, and there were few clues as to how the small propellers seen in those images were related to the larger propellers Cassini observed later in the mission. This image, for the first time, shows swarms of propellers of a wide range of sizes, putting the ones Cassini observed in its Saturn arrival images in context. Scientists will use this information to derive a "particle size distribution" for propeller moons, which is an important clue to their origins. The image was taken using the Cassini spacecraft's narrow-angle camera on April 19. The view has an image scale of 0.24 mile (385 meters) per pixel and was taken at a sun-ring-spacecraft angle, or phase angle, of 108 degrees. The view looks toward a point approximately 80,000 miles (129,000 kilometers) from Saturn's center. https://photojournal.jpl.nasa.gov/catalog/PIA21448

  1. A Dark Bend

    NASA Image and Video Library

    2016-09-05

    Saturn's rings appear to bend as they pass behind the planet's darkened limb due to refraction by Saturn's upper atmosphere. The effect is the same as that seen in an earlier Cassini view (see PIA20491), except this view looks toward the unlit face of the rings, while the earlier image viewed the rings' sunlit side. The difference in illumination brings out some noticeable differences. The A ring is much darker here, on the rings' unlit face, since its larger particles primarily reflect light back toward the sun (and away from Cassini's cameras in this view). The narrow F ring (at bottom), which was faint in the earlier image, appears brighter than all of the other rings here, thanks to the microscopic dust that is prevalent within that ring. Small dust tends to scatter light forward (meaning close to its original direction of travel), making it appear bright when backlit. (A similar effect has plagued many a driver with a dusty windshield when driving toward the sun.) This view looks toward the unilluminated side of the rings from about 19 degrees below the ring plane. The image was taken in red light with the Cassini spacecraft narrow-angle camera on July 24, 2016. The view was acquired at a distance of approximately 527,000 miles (848,000 kilometers) from Saturn and at a sun-Saturn-spacecraft, or phase, angle of 169 degrees. Image scale is 3 miles (5 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20497

  2. ARC-1986-AC86-7014

    NASA Image and Video Library

    1986-01-22

    Range: 2.7 million kilometers (1.7 million miles) P-29497C This Voyager 2 false-color composite of Uranus demonstrates the usefulness of special filters in the Voyager cameras for revealing the presence of high-altitude hazes in Uranus' atmosphere. The picture is a composite of images obtained through the single orange and two methane filters of Voyager's wide angle camera. Orange, short-wavelength methane, and long-wavelength methane images are displayed, respectively, as blue, green, and orange. The pink area centered on the pole is due to the presence of hazes high in the atmosphere that reflect the light before it has traversed a long enough path through the atmosphere to suffer absorption by methane gas. The bluest regions, at mid-latitudes, represent the most haze-free regions on Uranus; thus, deeper cloud levels can be detected in these areas.

  3. Microwave transient analyzer

    DOEpatents

    Gallegos, Cenobio H.; Ogle, James W.; Stokes, John L.

    1992-01-01

    A method and apparatus for capturing and recording indications of frequency content of electromagnetic signals and radiation is disclosed including a laser light source (12) and a Bragg cell (14) for deflecting a light beam (22) at a plurality of deflection angles (36) dependent upon frequency content of the signal. A streak camera (26) and a microchannel plate intensifier (28) are used to project Bragg cell (14) output onto either a photographic film (32) or a charge coupled device (CCD) imager (366). Timing markers are provided by a comb generator (50) and a one shot generator (52), the outputs of which are also routed through the streak camera (26) onto the film (32) or the CCD imager (366). Using the inventive method, the full range of the output of the Bragg cell (14) can be recorded as a function of time.

  4. Development of variable-magnification X-ray Bragg optics.

    PubMed

    Hirano, Keiichi; Yamashita, Yoshiki; Takahashi, Yumiko; Sugiyama, Hiroshi

    2015-07-01

    A novel X-ray Bragg optics is proposed for variable magnification of an X-ray beam. This X-ray Bragg optics is composed of two magnifiers in a crossed arrangement, and the magnification factor, M, is controlled through the azimuth angle of each magnifier. The basic properties of the X-ray optics, such as the magnification factor, image transformation matrix, and intrinsic acceptance angle, are described based on the dynamical theory of X-ray diffraction. The feasibility of the variable-magnification X-ray Bragg optics was verified at the vertical-wiggler beamline BL-14B of the Photon Factory. For the X-ray Bragg magnifiers, Si(220) crystals with an asymmetric angle of 14° were used. The magnification factor was calculated to be tunable between 0.1 and 10.0 at a wavelength of 0.112 nm. At various magnification factors (M ≥ 1.0), X-ray images of a nylon mesh were observed with an air-cooled X-ray CCD camera. Image deformation caused by the optics could be corrected by using a 2 × 2 transformation matrix and a bilinear interpolation method. Not only absorption contrast but also edge contrast due to Fresnel diffraction was observed in the magnified images.
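
    The deformation correction described above maps naturally onto a standard resampling call. The sketch below applies a 2 × 2 matrix with bilinear interpolation (spline order 1) via SciPy; the matrix values are placeholders, not those of the Bragg optics.

        # 2 x 2 matrix resampling with bilinear interpolation.
        import numpy as np
        from scipy.ndimage import affine_transform

        M = np.array([[1.0, 0.15],     # hypothetical shear/magnification matrix
                      [0.0, 0.80]])

        def correct(image, matrix):
            """Resample with bilinear interpolation (order=1). Note that
            affine_transform interprets `matrix` as mapping output pixel
            coordinates to input pixel coordinates."""
            return affine_transform(image, matrix, order=1)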

  5. Method of radiometric quality assessment of NIR images acquired with a custom sensor mounted on an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Wierzbicki, Damian; Fryskowska, Anna; Kedzierski, Michal; Wojtkowska, Michalina; Delis, Paulina

    2018-01-01

    Unmanned aerial vehicles are suited to various photogrammetry and remote sensing missions. Such platforms are equipped with optoelectronic sensors imaging in the visible and infrared spectral ranges, as well as thermal sensors. Nowadays, near-infrared (NIR) images acquired from low altitudes are often used for producing orthophoto maps for precision agriculture, among other things. One major problem results from the use of low-cost, compact custom NIR cameras with wide-angle lenses, which introduce vignetting. In numerous cases, such cameras acquire images of low radiometric quality, depending on the lighting conditions. The paper presents a method of radiometric quality assessment of low-altitude NIR imagery data from a custom sensor. The method utilizes statistical analysis of the NIR images. The data used for the analyses were acquired from various altitudes in various weather and lighting conditions. An objective NIR imagery quality index was determined as a result of the research. The results obtained using this index enabled the classification of images into three categories: good, medium, and low radiometric quality. The classification makes it possible to determine the a priori error of the acquired images and to assess whether a rerun of the photogrammetric flight is necessary.

  6. Cloud fractions estimated from shipboard whole-sky camera and ceilometer observations between East Asia and Antarctica

    NASA Astrophysics Data System (ADS)

    Kuji, M.; Hagiwara, M.; Hori, M.; Shiobara, M.

    2017-12-01

    Shipboard observations of cloud fraction were carried out during the round-trip research cruise between East Asia and Antarctica from November 2015 to April 2016, using a whole-sky camera and a ceilometer onboard the Research Vessel (R/V) Shirase. We retrieved cloud fraction from the whole-sky camera based on the brightness and color of the images, while we estimated cloud fraction from the ceilometer as a frequency of cloud occurrence. As a result, the average cloud fractions over the outward open ocean, the sea ice region, and the returning open ocean were approximately 56% (60%), 44% (64%), and 67% (72%), respectively, from the whole-sky camera (ceilometer). Comparing the daily-averaged cloud fractions from the whole-sky camera and the ceilometer, the correlation coefficient was 0.73 for the 129 match-up datasets between East Asia and Antarctica, including the sea ice region as well as the open ocean. The results are qualitatively consistent between the two observations as a whole, but there is some underestimation by the whole-sky camera compared with the ceilometer. One possible reason is that the imager is apt to miss optically thinner clouds that can be detected by the ceilometer. The difference in view angle between the imager and the ceilometer may also affect the estimates. Therefore, it is necessary to elucidate the cloud properties with detailed match-up analyses in the future. Another future task is to compare the cloud fractions with satellite observations such as the MODIS cloud products. Shipboard observations are in themselves very valuable for the validation of satellite products, because there are few validation sites over the Southern Ocean and the sea ice region in particular.

  7. Tethys the Spy

    NASA Image and Video Library

    2014-12-15

    Tethys appears to be peeking out from behind Rhea, watching the watcher. Scientists believe that Tethys' surprisingly high albedo is due to the water ice jets emerging from its neighbor, Enceladus. The fresh water ice becomes the E ring and can eventually arrive at Tethys, giving it a fresh surface layer of clean ice. Lit terrain seen here is on the anti-Saturn side of Rhea. North on Rhea is up. The image was taken in red light with the Cassini spacecraft narrow-angle camera on April 20, 2012. The view was obtained at a distance of approximately 1.1 million miles (1.8 million kilometers) from Rhea and at a Sun-Rhea-spacecraft, or phase, angle of 59 degrees. Image scale is 7 miles (11 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18293

  8. Computer screen photo-excited surface plasmon resonance imaging.

    PubMed

    Filippini, Daniel; Winquist, Fredrik; Lundström, Ingemar

    2008-09-12

    Angle- and spectrum-resolved surface plasmon resonance (SPR) images of gold and silver thin films with protein deposits are demonstrated using a regular computer screen as light source and a web camera as detector. The screen provides multiple-angle illumination, p-polarized light, and controlled spectral radiances to excite surface plasmons in a Kretschmann configuration. A model of the SPR reflectances incorporating the particularities of the source and detector explains the observed signals, and the generation of distinctive SPR landscapes is demonstrated. The sensitivity and resolution of the method, determined in air and solution, are 0.145 nm pixel^-1, 0.523 nm, 5.13 × 10^-3 RIU degree^-1, and 6.014 × 10^-4 RIU, respectively; these are encouraging results at this proof-of-concept stage, considering the ubiquity of the instrumentation.

  9. Polarimetric Observations of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Kim, S.

    2017-12-01

    Polarimetric images contain valuable information on the lunar surface, such as the grain size and porosity of the regolith, from which one can estimate the space weathering environment on the lunar surface. Surprisingly, polarimetric observation has never before been conducted from lunar orbit. A Wide-Angle Polarimetric Camera (PolCam) has recently been selected as one of three Korean science instruments onboard the Korea Pathfinder Lunar Orbiter (KPLO), which is planned for launch in 2019/2020 as the first Korean lunar mission. PolCam will obtain 80 m-resolution polarimetric images of the whole lunar surface between -70º and +70º latitude at the 320, 430, and 750 nm bands for phase angles up to 115º. I will also discuss previous polarimetric studies of the lunar surface based on our ground-based observations.

  10. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  11. Flooding in the Aftermath of Hurricane Katrina

    NASA Technical Reports Server (NTRS)

    2005-01-01

    These views of the Louisiana and Mississippi regions were acquired before and one day after Katrina made landfall along the Gulf of Mexico coast, and highlight many of the changes to the rivers and vegetation that occurred between the two views. The images were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) on August 14 and August 30, 2005. These multiangular, multispectral false-color composites were created using red band data from MISR's 46° backward- and forward-viewing cameras, and near-infrared data from MISR's nadir camera. Such a display causes water bodies and inundated soil to appear in blue and purple hues, and highly vegetated areas to appear bright green. The scene differentiation is a result of both spectral effects (living vegetation is highly reflective at near-infrared wavelengths whereas water is absorbing) and of angular effects (wet surfaces preferentially forward scatter sunlight). The two images were processed identically and extend from the regions of Greenville, Mississippi (upper left) to Mobile Bay, Alabama (lower right).

    There are numerous rivers along the Mississippi coast that were not apparent in the pre-Katrina image; the most dramatic of these is a new inlet in the Pascagoula River that was not apparent before Katrina. The post-Katrina flooding along the edges of Lake Pontchartrain and the city of New Orleans is also apparent. In addition, the agricultural lands along the Mississippi floodplain in the upper left exhibit stronger near-infrared brightness before Katrina. After Katrina, many of these agricultural areas exhibit a stronger signal to MISR's oblique cameras, indicating the presence of inundated soil throughout the floodplain. Note that clouds appear in a different spot for each view angle due to a parallax effect resulting from their height above the surface.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously, viewing the entire globe between 82° north and 82° south latitude every nine days. Each image covers an area of about 380 kilometers by 410 kilometers. The data products were generated from a portion of the imagery acquired during Terra orbits 30091 and 30324 and utilize data from blocks 64-67 within World Reference System-2 path 22.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is managed for NASA by the California Institute of Technology.

  12. High-speed fuel tracer fluorescence and OH radical chemiluminescence imaging in a spark-ignition direct-injection engine

    NASA Astrophysics Data System (ADS)

    Smith, James D.; Sick, Volker

    2005-11-01

    An innovative technique has been demonstrated to achieve crank-angle-resolved planar laser-induced fluorescence (PLIF) of fuel followed by OH* chemiluminescence imaging in a firing direct-injected spark-ignition engine. This study used two standard KrF excimer lasers to excite toluene for tracking fuel distribution. The intensified camera system was operated at single crank-angle resolution at 2000 revolutions per minute (RPM) for 500 consecutive cycles. Through this work, it has been demonstrated that toluene and OH* can be imaged through the same optical setup while similar signal levels are obtained from both species, even at these high rates. The technique is useful for studying correlations between fuel distribution and subsequent ignition and flame propagation without the limitations of phase-averaging imaging approaches. This technique is illustrated for the effect of exhaust gas recirculation on combustion and will be useful for studies of misfire causes. Finally, a few general observations are presented as to the effect of preignition fuel distribution on subsequent combustion.

  13. Detail on Dione False color

    NASA Image and Video Library

    2006-01-27

    The leading hemisphere of Dione displays subtle variations in color across its surface in this false color view. To create this view, ultraviolet, green and infrared images were combined into a single black and white picture that isolates and maps regional color differences. This "color map" was then superposed over a clear-filter image. The origin of the color differences is not yet understood, but may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil. Terrain visible here is on the moon's leading hemisphere. North on Dione (1,126 kilometers, or 700 miles across) is up and rotated 17 degrees to the right. All images were acquired with the Cassini spacecraft narrow-angle camera on Dec. 24, 2005 at a distance of approximately 597,000 kilometers (371,000 miles) from Dione and at a Sun-Dione-spacecraft, or phase, angle of 21 degrees. Image scale is 4 kilometers (2 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA07688

  14. High-speed fuel tracer fluorescence and OH radical chemiluminescence imaging in a spark-ignition direct-injection engine.

    PubMed

    Smith, James D; Sick, Volker

    2005-11-01

    An innovative technique has been demonstrated to achieve crank-angle-resolved planar laser-induced fluorescence (PLIF) of fuel followed by OH* chemiluminescence imaging in a firing direct-injected spark-ignition engine. This study used two standard KrF excimer lasers to excite toluene for tracking fuel distribution. The intensified camera system was operated at single crank-angle resolution at 2000 revolutions per minute (RPM) for 500 consecutive cycles. Through this work, it has been demonstrated that toluene and OH* can be imaged through the same optical setup while similar signal levels are obtained from both species, even at these high rates. The technique is useful for studying correlations between fuel distribution and subsequent ignition and flame propagation without the limitations of phase-averaging imaging approaches. This technique is illustrated for the effect of exhaust gas recirculation on combustion and will be useful for studies of misfire causes. Finally, a few general observations are presented as to the effect of preignition fuel distribution on subsequent combustion.

  15. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  16. San Diego, California (with sunglint) as seen by Expedition Two crew

    NASA Image and Video Library

    2001-04-16

    ISS002-E-5661 (16 April 2001) --- As the International Space Station (ISS) recently passed over the Pacific Ocean, one of the Expedition Two crew members, using an 800mm lens on a digital still camera, photographed this high oblique image of the coastal metropolitan area of San Diego, California. The angle of the view allows one to see quite a distance inland.

  17. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. At this date about 2% of the lunar surface is imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and geologic processes that occur across all of the terrestrial planets.
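
    Baseline-dependent slope statistics such as those quoted above can be computed directly from a DTM. The sketch below differences the height grid at a lag equal to the chosen baseline along one axis; it is a simplified, single-direction version of a full slope analysis, and the names are illustrative.

        # Slope statistics from a DTM at a chosen baseline.
        import numpy as np

        def slope_stats(dtm, pixel_scale, baseline):
            """dtm: 2-D heights (m); pixel_scale: m/pixel; baseline: m."""
            lag = max(1, int(round(baseline / pixel_scale)))
            dz = dtm[:, lag:] - dtm[:, :-lag]      # height change per baseline
            slopes = np.degrees(np.arctan(np.abs(dz) / (lag * pixel_scale)))
            return np.median(slopes), slopes.mean(), slopes.std()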

  18. Polarization sensitive camera for the in vitro diagnostic and monitoring of dental erosion

    NASA Astrophysics Data System (ADS)

    Bossen, Anke; Rakhmatullina, Ekaterina; Lussi, Adrian; Meier, Christoph

    Due to the frequent consumption of acidic food and beverages, the prevalence of dental erosion is increasing worldwide. In the initial erosion stage, the hard dental tissue is softened by acidic demineralization. As erosion progresses, gradual tissue wear occurs, resulting in thinning of the enamel. Complete loss of the enamel tissue can be observed in severe clinical cases. It is therefore essential to provide a diagnostic tool for the accurate detection and monitoring of dental erosion already at early stages. In this manuscript, we present the development of a polarization sensitive imaging camera for the visualization and quantification of dental erosion. The system consists of two CMOS cameras mounted on two sides of a polarizing beamsplitter. A horizontally linearly polarized light source is positioned orthogonal to the camera to ensure illumination and detection angles of 45°. The specular reflection from the enamel surface is collected with an objective lens mounted on the beamsplitter and divided into horizontal (H) and vertical (V) components on the associated cameras. Images of non-eroded and eroded enamel surfaces at different degrees of erosion were recorded and assessed with diagnostic software. The software was designed to generate and display two types of images: the distribution of the reflection intensity (V) and a polarization ratio (H-V)/(H+V) throughout the analyzed tissue area. The measurement and visualization of these two optical parameters, i.e., the specular reflection intensity and the polarization ratio, allowed the detection and quantification of enamel erosion at early stages in vitro.
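
    Once the H and V images are co-registered, the two displayed quantities are simple per-pixel operations. A minimal sketch, assuming both images are floating-point arrays of equal size (the V image itself serves as the reflection-intensity display):

        # Per-pixel polarization ratio from the two camera images.
        import numpy as np

        def polarization_ratio(H, V, eps=1e-9):
            """(H-V)/(H+V) per pixel; eps guards against division by zero."""
            return (H - V) / (H + V + eps)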

  19. Steepness of Slopes at the Luna-Glob Landing Sites: Estimating by the Shaded Area Percentage in the LROC NAC Images

    NASA Astrophysics Data System (ADS)

    Krasilnikov, S. S.; Basilevsky, A. T.; Ivanov, M. A.; Abdrakhimov, A. M.; Kokhanov, A. A.

    2018-03-01

    The paper presents estimates of the occurrence probability of slopes whose steep surfaces could be dangerous for the landing of the Luna-Glob descent probe (Luna-25), given the baseline of the span between the landing pads (3.5 m), for five potential landing ellipses. As a rule, digital terrain models built from stereo pairs of high-resolution images (here, the images taken by the Narrow Angle Camera onboard the Lunar Reconnaissance Orbiter (LROC NAC)) are used in such cases. However, the planned landing sites are at high latitudes (67°-74° S), which makes it impossible to build digital terrain models, since the difference in the observation angle of the overlapping images is insufficient at these latitudes. Because of this, to estimate the steepness of slopes, we considered the relationship between the shaded area percentage in an image and the Sun angle above the horizon at the moment of imaging. For the five proposed landing ellipses, LROC NAC images (175 images in total) with resolutions from 0.4 to 1.2 m/pixel were analyzed. From the results of the measurements in each of the ellipses, the dependence of the shaded area percentage on the solar angle was built and converted into the occurrence probability of slopes. For this, data on the Apollo 16 landing region were used, which is covered by both LROC NAC images and a high-resolution digital terrain model. As a result, the occurrence probability of slopes of different steepness has been estimated on a baseline of 3.5 m for the five landing ellipses, according to the steepness categories of <7°, 7°-10°, 10°-15°, 15°-20°, and >20°.

  20. Wide-Field Optic for Autonomous Acquisition of Laser Link

    NASA Technical Reports Server (NTRS)

    Page, Norman A.; Charles, Jeffrey R.; Biswas, Abhijit

    2011-01-01

    An innovation reported in "Two-Camera Acquisition and Tracking of a Flying Target," NASA Tech Briefs, Vol. 32, No. 8 (August 2008), p. 20, used a commercial fish-eye lens and an electronic imaging camera for initially locating objects, with subsequent handover to an actuated narrow-field camera. But this operated against a dark-sky background. An improved solution involves an optical design based on custom optical components for the wide-field optical system that directly addresses the key limitations in acquiring a laser signal from a moving source such as an aircraft or a spacecraft. The first challenge was to increase the light-collection entrance aperture diameter, which was approximately 1 mm in the first prototype. The new design presented here increases this entrance aperture diameter to 4.2 mm, equivalent to a more than 16 times larger collection area. One of the trades made in realizing this improvement was to restrict the field of view to +80° elevation and 360° azimuth. This trade stems from practical considerations: laser beam propagation through the excessively high air mass in the line of sight (LOS) at low elevation angles results in vulnerability to severe atmospheric turbulence and attenuation. An additional benefit of the new design is that the large entrance aperture is maintained even at large off-axis angles when the optic is pointed at zenith. The second critical limitation, implementing spectral filtering in the design, was tackled by collimating the light prior to focusing it onto the focal plane. This allows the placement of the narrow spectral filter in the collimated portion of the beam. For the narrow-band spectral filter to function properly, it is necessary to adequately control the range of incident angles at which received light intercepts the filter. When this angle is restricted via collimation, narrower spectral filtering can be implemented. The collimated beam (and the filter) must be relatively large to reduce the incident angle down to only a few degrees. In the presented embodiment, the filter diameter is more than ten times larger than the entrance aperture; specifically, the filter has a clear aperture of about 51 mm. The optical design is refractive and comprises nine custom refractive elements and an interference filter. The restricted maximum angle through the narrow-band filter ensures the efficient use of a 2-nm noise-equivalent-bandwidth spectral filter at low elevation angles (where the range is longest), at the expense of lower efficiency at high elevations, which can be tolerated because the range at high elevation angles is shorter. The image circle is 12 mm in diameter, mapped to 80° x 360° of sky, centered on the zenith.
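
    The more-than-tenfold beam expansion can be sanity-checked with the optical invariant (beam diameter times divergence is conserved in an ideal system); treating the angles as small for a rough estimate,

        \[ D_{ap}\,\theta_{in} \approx D_{filter}\,\theta_{filter} \quad\Rightarrow\quad \theta_{filter} \approx \frac{4.2\ \mathrm{mm}}{51\ \mathrm{mm}}\,\theta_{in} \approx \theta_{in}/12, \]

    consistent with the stated need to reduce the incident angle at the filter to only a few degrees.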

  1. Polar Cap Retreat

    NASA Technical Reports Server (NTRS)

    2004-01-01

    13 August 2004 This red wide angle Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a view of the retreating seasonal south polar cap in the most recent spring in late 2003. Bright areas are covered with frost; dark areas are those from which the solid carbon dioxide has sublimed away. The center of this image is located near 76.5°S, 28.2°W. The scene is large; it covers an area about 250 km (155 mi) across. The scene is illuminated by sunlight from the upper left.

  2. Waves on Saturn

    NASA Technical Reports Server (NTRS)

    2005-01-01

    An up-close look at Saturn's atmosphere shows wavelike structures in the planet's constantly changing clouds.

    Feathery striations in the lower right appear to be small-scale waves propagating at a higher altitude than the other cloud features.

    The image was taken with the Cassini spacecraft wide-angle camera on April 14, 2005, through a filter sensitive to wavelengths of infrared light centered at 727 nanometers and at a distance of approximately 386,000 kilometers (240,000 miles) from Saturn. The image scale is 19 kilometers (12 miles) per pixel.

  3. PIA07600

    NASA Image and Video Library

    2005-10-04

    During its time in orbit, Cassini has spotted many beautiful cat's eye-shaped patterns like the ones visible here. These patterns occur in places where the winds and the atmospheric density at one latitude are different from those at another latitude. The opposing east-west flowing cloud bands are the dominant patterns seen here and elsewhere in Saturn's atmosphere. Contrast in the image was enhanced to aid the visibility of atmospheric features. The image was taken with the Cassini spacecraft wide-angle camera on Aug. 20, 2005. http://photojournal.jpl.nasa.gov/catalog/PIA07600

  4. The Moon's North Pole

    NASA Image and Video Library

    2017-12-08

    NASA image release September 7, 2011. The Earth's moon has been an endless source of fascination for humanity for thousands of years. When at last Apollo 11 landed on the moon's surface in 1969, the crew found a desolate, lifeless orb, but one which still fascinates scientists and non-scientists alike. This image of the moon's north polar region was taken by the Lunar Reconnaissance Orbiter Camera, or LROC. One of the primary scientific objectives of LROC is to identify regions of permanent shadow and near-permanent illumination. Since the start of the mission, LROC has acquired thousands of Wide Angle Camera images approaching the north pole. From these images, scientists produced this mosaic, which is composed of 983 images taken over a one-month period during northern summer. The mosaic shows the pole when it is best illuminated; regions that remain in shadow are candidates for permanent shadow. Image Credit: NASA/GSFC/Arizona State University

  5. Looking Up to the Giant

    NASA Image and Video Library

    2015-08-03

    Thanks to the illumination angle, Mimas (right) and Dione (left) appear to be staring up at a giant Saturn looming in the background. Although certainly large enough to be noticeable, moons like Mimas (246 miles or 396 kilometers across) and Dione (698 miles or 1,123 kilometers across) are tiny compared to Saturn (75,400 miles or 120,700 kilometers across). Even the enormous moon Titan (3,200 miles or 5,150 kilometers across) is dwarfed by the giant planet. This view looks toward the unilluminated side of the rings from about one degree off the ring plane. The image was taken with the Cassini spacecraft wide-angle camera on May 27, 2015, using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 728 nanometers. The view was obtained at a distance of approximately 634,000 miles (one million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 85 degrees. Image scale is 38 miles (61 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18331

  6. Solutions on a high-speed wide-angle zoom lens with aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Yamanashi, Takanori

    2012-10-01

    Recent developments in CMOS and digital camera technology have accelerated the business and market share of digital cinematography. In terms of optical design, this technology has increased the need to carefully consider the pixel pitch and characteristics of the imager. When the field angle at the wide end, the zoom ratio, and the F-number are specified, choosing an appropriate zoom lens type is crucial. In addition, appropriate power distributions and lens configurations are required. Near the wide end of a zoom lens, an aspheric surface is known to be an effective means to correct off-axis aberrations. On the other hand, optical designers have to focus on the manufacturability of aspheric surfaces and perform the required analysis of the surface shape. Centration errors aside, it is also important to know the sensitivity to aspheric shape errors and their effect on image quality. In this paper, wide-angle cine zoom lens design examples are introduced and their main characteristics described. Moreover, technical challenges are pointed out and solutions are proposed.
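
    The paper's surface prescriptions are not reproduced in this abstract; for reference, the even-asphere sag that such designs and their shape-error sensitivity analyses typically work with is the standard form

        \[ z(r) = \frac{c\,r^{2}}{1 + \sqrt{1-(1+k)\,c^{2}r^{2}}} + \sum_{i} A_{2i}\,r^{2i}, \]

    where c is the vertex curvature, k the conic constant, and A_{2i} the aspheric coefficients; sensitivity to manufacturing errors is commonly probed by perturbing these terms and re-evaluating image quality.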

  7. Epimetheus Above the Rings

    NASA Image and Video Library

    2015-11-09

    Although Epimetheus appears to be lurking above the rings here, it's actually just an illusion resulting from the viewing angle. In reality, Epimetheus and the rings both orbit in Saturn's equatorial plane. Inner moons and rings orbit very near the equatorial plane of each of the four giant planets in our solar system, but more distant moons can have orbits wildly out of the equatorial plane. It has been theorized that the highly inclined orbits of the outer, distant moons are remnants of the random directions from which they approached the planets they orbit. This view looks toward the unilluminated side of the rings from about 0.3 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 26, 2015. The view was obtained at a distance of approximately 500,000 miles (800,000 kilometers) from Epimetheus and at a Sun-Epimetheus-spacecraft, or phase, angle of 62 degrees. Image scale is 3 miles (5 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18342

  8. Experimental investigation on underwater trajectory deviation of high-speed projectile with different nose shapes

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Huang, Wei; Gao, Yubo; Qi, Yafei; Hypervelocity Impact Research Center Team

    2015-06-01

    Laboratory-scale oblique water entry experiments on trajectory stability in a water column have been performed with four differently nosed projectiles at velocities from 20 m/s to 250 m/s. The slender projectiles are designed with flat, ogival, hemispherical, and truncated-ogival noses to compare trajectory deviation when launched at vertical and oblique impact angles (0°-25°). Two high-speed cameras, positioned orthogonal to each other and normal to the column, are employed to capture the entire penetration process. From the experimental results, sequential images in the two planes are presented to compare the trajectory deviation of the different impact tests, and 3D trajectory models are extracted from the locations recorded by the cameras. Considering the influence of impact velocity and nose shape, it can be concluded that trajectory deviation is affected most by the impact angle and least by the impact velocity. Additionally, ogival projectiles tend to be more sensitive to the oblique angle and experience the largest attitude change. National Natural Science Foundation of China (No. 11372088).
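
    With two synchronized, metrically calibrated cameras mounted orthogonally, the 3D track can be assembled from the two image-plane tracks; the sketch below assumes both views share the vertical axis and a common frame clock (the names and conventions are illustrative, not from the paper):

        import numpy as np

        def merge_orthogonal_tracks(xz_track, yz_track):
            # xz_track: (N, 2) array of (x, z) positions from one camera.
            # yz_track: (N, 2) array of (y, z) positions from the other camera.
            # The shared vertical axis z is reconciled by averaging the two estimates.
            xz = np.asarray(xz_track, dtype=float)
            yz = np.asarray(yz_track, dtype=float)
            z = 0.5 * (xz[:, 1] + yz[:, 1])
            return np.column_stack([xz[:, 0], yz[:, 0], z])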

  9. Toward high-resolution global topography of Mercury from MESSENGER orbital stereo imaging: A prototype model for the H6 (Kuiper) quadrangle

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Stark, Alexander; Oberst, Jürgen; Matz, Klaus-Dieter; Gwinner, Klaus; Roatsch, Thomas; Watters, Thomas R.

    2017-08-01

    We selected approximately 10,500 narrow-angle camera (NAC) and wide-angle camera (WAC) images of Mercury acquired from orbit by MESSENGER's Mercury Dual Imaging System (MDIS) with an average resolution of 150 m/pixel to compute a digital terrain model (DTM) for the H6 (Kuiper) quadrangle, which extends from 22.5°S to 22.5°N and from 288.0°E to 360.0°E. From the images, we identified about 21,100 stereo image combinations consisting of at least three images each. We applied sparse multi-image matching to derive approximately 250,000 tie-points representing 50,000 ground points. We used the tie-points to carry out a photogrammetric block adjustment, which improves the image pointing and the accuracy of the ground point positions in three dimensions from about 850 m to approximately 55 m. We then applied high-density (pixel-by-pixel) multi-image matching to derive about 45 billion tie-points. Benefitting from improved image pointing data achieved through photogrammetric block adjustment, we computed about 6.3 billion surface points. By interpolation, we generated a DTM with a lateral spacing of 221.7 m/pixel (192 pixels per degree) and a vertical accuracy of about 30 m. The comparison of the DTM with Mercury Laser Altimeter (MLA) profiles obtained over four years of MESSENGER orbital operations reveals that the DTM is geometrically very rigid. It may be used as a reference to identify MLA outliers (e.g., when MLA operated at its ranging limit) or to map offsets of laser altimeter tracks, presumably caused by residual spacecraft orbit and attitude errors. After the relevant outlier removals and corrections, MLA profiles show excellent agreement with topographic profiles from H6, with a root mean square height difference of only 88 m.
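
    The final gridding step can be pictured as averaging the matched surface points onto a fixed longitude/latitude raster; a minimal sketch is given below (all names are illustrative, and a production pipeline would additionally interpolate gaps and tile the grid to bound memory):

        import numpy as np

        def grid_dtm(lon, lat, height, ppd=192, lon0=288.0, lat0=-22.5,
                     span_lon=72.0, span_lat=45.0):
            # lon, lat, height: 1-D arrays of surface points (degrees, meters).
            # ppd: grid density in pixels per degree; 192 ppd corresponds to
            # the 221.7 m/pixel spacing of the H6 DTM.
            n_lon, n_lat = int(span_lon * ppd), int(span_lat * ppd)
            j = np.clip(((lon - lon0) * ppd).astype(int), 0, n_lon - 1)
            i = np.clip(((lat - lat0) * ppd).astype(int), 0, n_lat - 1)
            acc = np.zeros((n_lat, n_lon))
            cnt = np.zeros((n_lat, n_lon))
            np.add.at(acc, (i, j), height)
            np.add.at(cnt, (i, j), 1)
            return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)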

  10. Design, simulation and experimental analysis of an anti-stray-light illumination system of fundus camera

    NASA Astrophysics Data System (ADS)

    Ma, Chen; Cheng, Dewen; Xu, Chen; Wang, Yongtian

    2014-11-01

    The fundus camera is a complex optical system for retinal photography, involving illumination and imaging of the retina. Stray light is one of the most significant problems of a fundus camera, because the retina is so minimally reflective that back reflections from the cornea and any other optical surface are likely to be significantly greater than the light reflected from the retina. To provide maximum illumination to the retina while eliminating back reflections, a novel design for the illumination system of a portable fundus camera is proposed. Internal illumination, in which the eyepiece is shared by both the illumination system and the imaging system but the condenser and the objective are separated by a beam splitter, is adopted for its high efficiency. To eliminate the strong stray light caused by the corneal center and to make full use of the light energy, the annular stop in conventional illumination systems is replaced by a fiber-coupled, ring-shaped light source that forms an annular beam. Parameters including the size and divergence angle of the light source are specially designed. To further weaken the stray light, a polarized light source is used, and an analyzer plate is placed after the beam splitter in the imaging system. Simulation results show that the illumination uniformity at the fundus exceeds 90%, and the stray light is within 1%. Finally, a proof-of-concept prototype is developed and retinal photos of an ophthalmophantom are captured. The experimental results show that ghost images and stray light have been greatly reduced, to a level at which professional diagnosis will not be interfered with.
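
    The polarization scheme exploits the fact that corneal and lens back-reflections largely preserve the source polarization, while light scattered by the retina is depolarized. An analyzer at angle \theta to the source polarization transmits the polarization-preserving stray light according to Malus's law,

        \[ I = I_{0}\cos^{2}\theta, \]

    so an analyzer crossed with the source (\theta = 90°, presumably the orientation used here) extinguishes the specular reflections while still passing roughly half of the depolarized retinal signal.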

  11. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

    The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 by 1024 pixel frame-transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area within reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site, panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site, and solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens), and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3, and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.

  12. Investigating at the Moon With new Eyes: The Lunar Reconnaissance Orbiter Mission Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Hiesinger, H.; Robinson, M. S.; McEwen, A. S.; Turtle, E. P.; Eliason, E. M.; Jolliff, B. L.; Malin, M. C.; Thomas, P. C.

    The Lunar Reconnaissance Orbiter (LRO) mission is scheduled for launch in October 2008 as a first step to return humans to the Moon by 2018. The main goals of the Lunar Reconnaissance Orbiter Camera (LROC) are to: 1) assess meter and smaller-scale features for safety analyses of potential lunar landing sites near polar resources, and elsewhere on the Moon; and 2) acquire multi-temporal images of the poles to characterize the polar illumination environment (100 m scale), identifying regions of permanent shadow and permanent or near-permanent illumination over a full lunar year. In addition, LROC will return six high-value datasets: 1) meter-scale maps of regions of permanent or near-permanent illumination of polar massifs; 2) high-resolution topography through stereogrammetric and photometric stereo analyses for potential landing sites; 3) a global multispectral map in 7 wavelengths (300-680 nm) to characterize lunar resources, in particular ilmenite; 4) a global 100-m/pixel basemap with incidence angles (60-80 degrees) favorable for morphologic interpretations; 5) images of a variety of geologic units at sub-meter resolution to investigate physical properties and regolith variability; and 6) meter-scale coverage overlapping with Apollo Panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972 and to estimate hazards for future surface operations. LROC consists of two narrow-angle cameras (NACs), which will provide 0.5-m scale panchromatic images over a 5-km swath; a wide-angle camera (WAC) to acquire images at about 100 m/pixel in seven color bands over a 100-km swath; and a common Sequence and Compressor System (SCS). Each NAC has a 700-mm-focal-length optic that images onto a 5000-pixel CCD line array, providing a cross-track field of view (FOV) of 2.86 degrees. The NAC readout noise is better than 100 e-, and the data are sampled at 12 bits. Its internal buffer holds 256 MB of uncompressed data, enough for a full-swath image 25 km long or a 2x2-binned image 100 km long. The WAC has two 6-mm-focal-length lenses imaging onto the same 1000 x 1000 pixel, electronically shuttered CCD area array, one imaging in the visible/near IR and the other in the UV. Each has a cross-track FOV of 90 degrees. From the nominal 50-km orbit, the WAC will have a resolution of 100 m/pixel in the visible and a swath width of ~100 km. The seven-band color capability of the WAC is achieved by color filters mounted directly over the detector, providing different sections of the CCD with different filters [1]. The readout noise is less than 40 e-, and, as with the NAC, pixel values are digitized to 12 bits and may subsequently be converted to 8-bit values. The total mass of the LROC system is about 12 kg; the total LROC power consumption averages 22 W (30 W peak). Assuming a downlink with lossless compression, LRO will produce a total of 20 terabytes (TB) of raw data. Production of higher-level data products will result in a total of 70 TB for Planetary Data System (PDS) archiving, 100 times larger than that of any previous mission. [1] Malin et al., JGR, 106, 17651-17672, 2001.
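
    The quoted pixel scales follow from simple projective geometry. The sketch below reproduces them from the stated focal lengths and the 50-km orbit; the detector pixel pitches are inferred here to make the numbers work and are not given in the text:

        def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
            # Ground distance spanned by one detector pixel at nadir.
            return pixel_pitch_m * altitude_m / focal_length_m

        # Assumed pitches: 7 micrometers (NAC), 12 micrometers (WAC).
        nac = ground_sample_distance(7e-6, 0.700, 50_000)   # -> 0.5 m/pixel
        wac = ground_sample_distance(12e-6, 0.006, 50_000)  # -> 100 m/pixel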

  13. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices, which in turn requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors relative to the nadir one; all camera images enter the aerotriangulation (AT) process as single pre-oriented data. This enables a better post-calibration, in order to detect variations in the single-camera calibrations and other mechanical effects. The sensor shown here (Oblique Imager) is based on five Phase One cameras: the nadir camera has 80 MPix and a 50 mm lens, while the oblique cameras capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing that protects them against physical and thermal deformation. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS/IMU data. The camera calibration procedure was performed on the basis of a special calibration flight with 351 exposures of all five cameras and registered GPS/IMU data. This mission was designed at two different altitudes, with additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, a satisfactory number for camera calibration. In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and a radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads; after that, the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others; the latter must be performed over a complete mission anyway to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angles. With all these steps prepared, one obtains a highly accurate sensor that enables fully automated data extraction and rapid updates of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.
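
    The lever-arm step described above amounts to transferring the GNSS antenna position to each camera's projection centre by rotating the body-frame lever arm with the IMU attitude; a minimal sketch under assumed frame conventions (none of the names come from the paper):

        import numpy as np

        def camera_position(antenna_ecef, lever_arm_body, R_body_to_ecef):
            # antenna_ecef: (3,) antenna position in an Earth-fixed frame.
            # lever_arm_body: (3,) offset from antenna to camera nodal point,
            # measured in the IMU/body frame during calibration.
            # R_body_to_ecef: (3, 3) rotation built from IMU roll/pitch/yaw.
            return np.asarray(antenna_ecef) + np.asarray(R_body_to_ecef) @ np.asarray(lever_arm_body)

    Because the gyro-stabilized mount floats, the lever arm varies in time, which is why it must be registered together with the raw GNSS/IMU data, as noted above.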

  14. Investigation of the use of thermography for research and clinical applications in pregnant women

    NASA Astrophysics Data System (ADS)

    Topalidou, Anastasia; Downe, Soo

    2016-03-01

    Background: Thermal imaging, as a non-invasive method, may provide advanced imaging capabilities in medicine. Objective: To conduct a preliminary study in healthy non-pregnant females in order to investigate the imaging capability of thermography and its implementation, and to determine hot and cold areas in order to create a "map" of the temperature distribution of the abdomen and torso. Methods: Participants were non-pregnant women aged 18-45 years (n = 10), who were measured at four different distances. Two thermal imaging cameras and their corresponding software were used to measure the abdomen, lower back, and the left and right sides of the torso. Results: There were no statistically significant differences in the mean exported temperatures according to the distance and the angle between the camera and the subject. The inferior part of the rectus abdominis muscle was recorded as the coldest zone, and the umbilicus appeared as the most prominent hot spot. Conclusions: Thermography shows potential as a non-invasive technique offering new options in the evaluation of pregnant and laboring women.

  15. Lakes Through the Haze

    NASA Image and Video Library

    2013-12-23

    Using a special spectral filter, the high-resolution camera aboard NASA's Cassini spacecraft was able to peer through the hazy atmosphere of Saturn's moon Titan. It captured this image, which features the largest seas and some of the many hydrocarbon lakes that are present on Titan's surface. Titan is the only place in the solar system, other than Earth, that has stable liquids on its surface. In this case, the liquid consists of ethane and methane rather than water. This view looks towards the side of Titan (3,200 miles or 5,150 kilometers across) that leads in its orbit around Saturn. North on Titan is up and rotated 36 degrees to the left. Images taken using red, green and blue spectral filters were combined to create this natural-color view. The images were taken with the Cassini spacecraft narrow-angle camera on Oct. 7, 2013. The view was acquired at a distance of approximately 809,000 miles (1.303 million kilometers) from Titan. Image scale is 5 miles (8 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA17179

  16. Line-based logo recognition through a web-camera

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolu; Wang, Yangsheng; Feng, Xuetao

    2007-11-01

    Logo recognition has seen much development in the document retrieval and shape analysis domains. As human-computer interaction becomes more and more popular, logo recognition through a web-camera is a promising technology for applications. For practical application, however, logo recognition in real scenes is much more difficult than in clean scenes. To cope with this, we make some improvements on the conventional method. First, moment information is used to calculate the test image's orientation angle, which is used to normalize the test image. Second, the main structure of the test image, represented by line patterns, is acquired, and a modified Hausdorff distance is employed to match the image against each of the existing templates. The proposed method, which is invariant to scale and rotation, gives good results and works in real time. The main contribution of this paper is that several improvements are introduced into the existing recognition framework, which then performs much better than the original one. Besides, we have built a highly successful logo recognition system using our improved method.
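
    The orientation normalization described above is commonly done with second-order central image moments; a minimal sketch of that standard technique (the paper's exact formulation is not given in the abstract):

        import numpy as np

        def orientation_from_moments(img):
            # Returns the principal-axis angle (radians) of a grayscale or
            # binary logo image, from second-order central moments.
            img = np.asarray(img, dtype=float)
            y, x = np.mgrid[:img.shape[0], :img.shape[1]]
            m = img.sum()                       # assumes a non-empty image
            cx, cy = (x * img).sum() / m, (y * img).sum() / m
            mu20 = ((x - cx) ** 2 * img).sum()
            mu02 = ((y - cy) ** 2 * img).sum()
            mu11 = ((x - cx) * (y - cy) * img).sum()
            return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

    Rotating the test image by the negative of this angle normalizes its orientation before the line-pattern extraction and modified-Hausdorff matching.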

  17. Extreme-UV lithography condenser

    DOEpatents

    Sweatt, William C.; Sweeney, Donald W.; Shafer, David; McGuire, James

    2001-01-01

    Condenser system for use with a ringfield camera in projection lithography. The condenser includes a series of segments of a parent aspheric mirror having one focus at a quasi-point source of radiation and the other focus at the radius of a ringfield; all but one of the beams (or all of them) are translated and rotated by sets of mirrors such that all of the beams pass through the real entrance pupil of the ringfield camera about one of the beams and fall onto the ringfield radius as a coincident image forming an arc of the ringfield. The condenser also has sets of correcting mirrors; the correcting mirror of each set from which the radiation emanates, or a mirror that is common to said sets, is a concave mirror positioned to shape a beam segment having a chord angle of about 25 to 85 degrees into a second beam segment having a chord angle of about 0 to 60 degrees.

  18. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  19. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  20. Return to Rhea

    NASA Image and Video Library

    2015-03-30

    After a couple of years in high-inclination orbits that limited its ability to encounter Saturn's moons, NASA's Cassini spacecraft returned to Saturn's equatorial plane in March 2015. As a prelude to its return to the realm of the icy satellites, the spacecraft had its first relatively close flyby of an icy moon (apart from Titan) in almost two years on Feb. 9. During this encounter Cassini's cameras captured images of the icy moon Rhea, as shown in these two image mosaics. The views were taken about an hour and a half apart as Cassini drew closer to Rhea. Images taken using clear, green, infrared and ultraviolet spectral filters were combined to create these enhanced color views, which offer an expanded range of the colors visible to human eyes in order to highlight subtle color differences across Rhea's surface. The moon's surface is fairly uniform in natural color. The image at right represents one of the highest resolution color views of Rhea released to date. A larger, monochrome mosaic is available in PIA07763. Both views are orthographic projections facing toward terrain on the trailing hemisphere of Rhea. An orthographic view is most like the view seen by a distant observer looking through a telescope. The views have been rotated so that north on Rhea is up. The smaller view at left is centered at 21 degrees north latitude, 229 degrees west longitude. Resolution in this mosaic is 450 meters (1,476 feet) per pixel. The images were acquired at a distance that ranged from about 51,200 to 46,600 miles (82,100 to 74,600 kilometers) from Rhea. The larger view at right is centered at 9 degrees north latitude, 254 degrees west longitude. Resolution in this mosaic is 300 meters (984 feet) per pixel. The images were acquired at a distance that ranged from about 36,000 to 32,100 miles (57,900 to 51,700 kilometers) from Rhea. The mosaics each consist of multiple narrow-angle camera (NAC) images with data from the wide-angle camera used to fill in areas where NAC data was not available. The image was produced by Heike Rosenberg and Tilmann Denk at Freie Universität in Berlin, Germany. http://photojournal.jpl.nasa.gov/catalog/PIA19057
