Sample records for multiband imaging camera

  1. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.
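
    The ΔE figure quoted above is a CIE color difference computed in CIELAB space. As a minimal sketch of that metric only (not the authors' optimization pipeline), the following converts XYZ tristimulus values to CIELAB under an assumed D65 white point and evaluates ΔE*ab; the sample patch values are hypothetical.

    ```python
    import numpy as np

    # Reference white for D65, 2-degree observer (standard tristimulus values)
    WHITE_D65 = np.array([95.047, 100.0, 108.883])

    def xyz_to_lab(xyz, white=WHITE_D65):
        """Convert CIE XYZ tristimulus values to CIELAB."""
        t = xyz / white
        eps, kappa = 216 / 24389, 24389 / 27        # CIE constants
        f = np.where(t > eps, np.cbrt(t), (kappa * t + 16) / 116)
        return np.array([116 * f[1] - 16,           # L*
                         500 * (f[0] - f[1]),       # a*
                         200 * (f[1] - f[2])])      # b*

    def delta_e_ab(xyz1, xyz2):
        """CIE76 color difference: Euclidean distance in CIELAB."""
        return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2))

    # Hypothetical measured vs. reference tristimulus values for one patch
    measured = np.array([41.24, 21.26, 1.93])
    reference = np.array([41.25, 21.27, 1.93])
    print(f"dE*ab = {delta_e_ab(measured, reference):.4f}")
    ```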

  2. Stellar Snowflake Cluster

    NASA Image and Video Library

    2005-12-22

    Newborn stars, hidden behind thick dust, are revealed in this image of a section of the Christmas Tree cluster from NASA's Spitzer Space Telescope, created in a joint effort between Spitzer's infrared array camera and multiband imaging photometer instruments.

  3. Global Temperature Measurement of Supercooled Water under Icing Conditions using Two-Color Luminescent Images and Multi-Band Filter

    NASA Astrophysics Data System (ADS)

    Tanaka, Mio; Morita, Katsuaki; Kimura, Shigeo; Sakaue, Hirotaka

    2012-11-01

    Icing occurs when a supercooled water droplet collides with a surface, and it can occur in any cold region. Particular attention is paid to aircraft icing. To understand the icing process on an aircraft, it is necessary to obtain temperature information for the supercooled water. A conventional technique, such as a thermocouple, is not suitable, because the probe itself becomes a collision surface that accumulates ice. We introduce dual-luminescent imaging to capture the global temperature distribution of supercooled water under icing conditions. It consists of two-color luminescent probes and a multi-band filter. One of the probes is sensitive to temperature and the other is independent of it. The latter is used to cancel the temperature-independent luminescence variations in the temperature-dependent image caused by uneven illumination and camera placement. The multi-band filter selects only the luminescent peaks of the probes to enhance the temperature sensitivity of the imaging system. By applying the system, time-resolved temperature information of a supercooled-water droplet is captured.
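
    The core of such a two-color system is a per-pixel ratio of the temperature-sensitive image to the temperature-independent reference image, which cancels the shared illumination term. A minimal NumPy sketch of that ratio step, using synthetic images and a hypothetical linear calibration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 64x64 scene: both probes see the same uneven illumination
    illumination = 0.5 + 0.5 * rng.random((64, 64))
    true_temp_c = np.full((64, 64), -5.0)       # supercooled water, deg C

    # Hypothetical linear probe response: 1% signal change per deg C
    sensitivity = 0.01
    temp_image = illumination * (1.0 + sensitivity * true_temp_c)
    ref_image = illumination                    # temperature-independent probe

    # The per-pixel ratio cancels the shared illumination term
    ratio = temp_image / ref_image

    # Invert the assumed calibration to recover the temperature field
    recovered_temp_c = (ratio - 1.0) / sensitivity
    print(recovered_temp_c.mean())              # ~ -5.0
    ```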

  4. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    PubMed

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    Due to its novel fluid optics, the fluidic lens camera system presents unique image processing challenges. Developed for surgical applications, the fluid lens offers advantages such as zooming with no moving parts and better miniaturization than traditional glass optics. Despite these abilities, the nonuniform response of the liquid lens to different wavelengths produces some sharp color planes and some blurred ones, causing severe axial color aberrations. To deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. This multiband deblurring method uses information from sharp color planes to improve blurred color planes. A previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts compared to traditional Lucy-Richardson and Wiener deconvolution algorithms. The proposed contourlet-based system uses directional filtering to adapt to the contours of the image; it produces an image with a level of sharpness similar to the previous wavelet-based method but with fewer ghosting artifacts. Conditions under which this algorithm reduces the mean squared error are analyzed. While the primary focus of this paper is improving the blue color plane using information from the green color plane, the methods could be adjusted to improve the red color plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work. This information-sharing algorithm benefits any image set with high edge correlation and can improve deblurring, noise reduction, and resolution enhancement.
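
    The contourlet filter bank itself is beyond a short example, but the underlying information-sharing idea, transplanting high-frequency detail from a sharp color plane into a blurred, highly correlated one, can be sketched with an ordinary wavelet decomposition (PyWavelets). This is a simplified wavelet analogue, not the authors' method:

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def share_detail(blurred_plane, sharp_plane, wavelet="bior2.2", level=2):
        """Rebuild a blurred color plane with the detail subbands of a sharp,
        highly correlated plane (a wavelet stand-in for contourlet sharing)."""
        blurred = pywt.wavedec2(blurred_plane, wavelet, level=level)
        sharp = pywt.wavedec2(sharp_plane, wavelet, level=level)
        # Keep the blurred plane's coarse approximation (its color content),
        # borrow the sharp plane's horizontal/vertical/diagonal details.
        fused = [blurred[0]] + list(sharp[1:])
        return pywt.waverec2(fused, wavelet)

    # Example: restore the blue plane using edge detail from the green plane
    rng = np.random.default_rng(1)
    green = rng.random((128, 128))                      # sharp plane
    blue = uniform_filter(0.9 * green + 0.1, size=5)    # correlated, then blurred
    blue_restored = share_detail(blue, green)
    ```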

  5. Advanced imaging research and development at DARPA

    NASA Astrophysics Data System (ADS)

    Dhar, Nibir K.; Dat, Ravi

    2012-06-01

    Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics, and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology; CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slower due to market volume, technological barriers in detector materials and optics, and fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA that advance imaging technology in both regimes. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, and broadband and multiband detectors and focal plane arrays.

  6. BOMBOLO: a Multi-Band, Wide-field, Near UV/Optical Imager for the SOAR 4m Telescope

    NASA Astrophysics Data System (ADS)

    Angeloni, R.; Guzmán, D.; Puzia, T. H.; Infante, L.

    2014-10-01

    BOMBOLO is a new multi-passband visitor instrument for the SOAR observatory. The first fully Chilean instrument of its kind, it is a three-arm imager covering near-UV and optical wavelengths. The three arms work simultaneously and independently, providing synchronized imaging capability for rapid astronomical events. BOMBOLO will be able to address largely unexplored phenomena on minute-to-second timescales, with the following leading science cases: 1) simultaneous multiband flickering studies of accretion phenomena; 2) near-UV/optical diagnostics of stellar evolutionary phases; 3) exoplanetary transits; and 4) microlensing follow-up. BOMBOLO's optical design consists of a wide-field collimator feeding two dichroics at 390 and 550 nm. Each arm comprises a camera, a filter wheel, and a CCD230-42 science detector, imaging a 7 x 7 arcmin field of view onto a 2k x 2k array. The three CCDs will have different coatings to optimize the efficiency of each camera. The detector controller for the three cameras will be Torrent (the NOAO open-source system), and a PanView application will run the instrument and produce the data cubes. The instrument is at the Conceptual Design stage, having been approved by the SOAR Board of Directors as a visitor instrument in 2012 and granted full funding from CONICYT, the Chilean state research agency, in 2013. The design phase is starting now and will be completed in late 2014, followed by a construction phase in 2015 and 2016A, with commissioning expected in 2016B and 2017A.

  7. FIR filters for hardware-based real-time multi-band image blending

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Leblebici, Yusuf

    2015-02-01

    Creating panoramic images has become a popular feature in modern smartphones, tablets, and digital cameras. A user can create a 360-degree field-of-view photograph from only a few images. The quality of the result depends on the number of source images, their brightness, and the algorithm used for stitching and blending. One algorithm that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposing the image into multiple frequency bands using a dyadic filter bank; hence, the results are also highly dependent on the chosen filter bank. In this paper we analyze the performance of FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments: a Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented filter comparison is based on several no-reference image quality metrics. We conclude that the 5/3 biorthogonal wavelet produces the best results on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm using a 2D non-separable systolic filtering scheme. Its pipelined architecture requires no hardware multipliers and achieves very high operating frequencies. The implemented system processes 91 fps at 1080p (1920×1080) resolution.
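
    For background, multi-band blending decomposes each source image into frequency bands with the chosen low-pass filter, blends each band under a correspondingly smoothed mask, and sums the results. A compact NumPy/SciPy sketch using a 5-tap binomial filter and an undecimated band stack (illustrative only; the paper's dyadic FPGA pipeline is not reproduced here):

    ```python
    import numpy as np
    from scipy.ndimage import convolve1d

    # 5-tap binomial kernel, a common stand-in for the Gaussian pyramid filter
    KERNEL = np.array([1, 4, 6, 4, 1], dtype=float) / 16

    def smooth(img):
        """Separable 2D low-pass with the 5-tap kernel."""
        return convolve1d(convolve1d(img, KERNEL, axis=0), KERNEL, axis=1)

    def blend_multiband(img_a, img_b, mask, levels=4):
        """Blend two grayscale images one frequency band at a time,
        using an undecimated Laplacian stack for brevity."""
        out = np.zeros_like(img_a)
        low_a, low_b, low_m = img_a, img_b, mask.astype(float)
        for _ in range(levels):
            next_a, next_b = smooth(low_a), smooth(low_b)
            # Band-pass (Laplacian) layers, blended with the smoothed mask
            out += low_m * (low_a - next_a) + (1 - low_m) * (low_b - next_b)
            low_a, low_b, low_m = next_a, next_b, smooth(low_m)
        # The residual low-pass band is blended last
        return out + low_m * low_a + (1 - low_m) * low_b

    rng = np.random.default_rng(2)
    left, right = rng.random((128, 128)), rng.random((128, 128))
    mask = np.zeros((128, 128))
    mask[:, :64] = 1.0                  # take the left half from `left`
    panorama = blend_multiband(left, right, mask)
    ```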

  8. Stellar Snowflake Cluster

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [Figures removed for brevity, see original site] Figure 1: Stellar Snowflake Cluster combined image. Figure 2: infrared array camera. Figure 3: multiband imaging photometer.

    Newborn stars, hidden behind thick dust, are revealed in this image of a section of the Christmas Tree cluster from NASA's Spitzer Space Telescope, created in joint effort between Spitzer's infrared array camera and multiband imaging photometer instruments.

    The newly revealed infant stars appear as pink and red specks toward the center of the combined image (fig. 1). The stars appear to have formed in regularly spaced intervals along linear structures in a configuration that resembles the spokes of a wheel or the pattern of a snowflake. Hence, astronomers have nicknamed this the 'Snowflake' cluster.

    Star-forming clouds like this one are dynamic and evolving structures. Since the stars trace the straight line pattern of spokes of a wheel, scientists believe that these are newborn stars, or 'protostars.' At a mere 100,000 years old, these infant structures have yet to 'crawl' away from their location of birth. Over time, the natural drifting motions of each star will break this order, and the snowflake design will be no more.

    While most of the visible-light stars that give the Christmas Tree cluster its name and triangular shape do not shine brightly in Spitzer's infrared eyes, all of the stars forming from this dusty cloud are considered part of the cluster.

    Like a dusty cosmic finger pointing up to the newborn clusters, Spitzer also illuminates the optically dark and dense Cone nebula, the tip of which can be seen towards the bottom left corner of each image.

    This combined image shows the presence of organic molecules mixed with dust as wisps of green, which have been illuminated by nearby star formation. The larger yellowish dots neighboring the baby red stars in the Snowflake Cluster are massive stellar infants forming from the same cloud. The blue dots sprinkled across the image represent older Milky Way stars at various distances along this line of sight. This image is a five-channel, false-color composite, showing emission from wavelengths of 3.6 and 4.5 microns (blue), 5.8 microns (cyan), 8 microns (green), and 24 microns (red).

    The top right image (fig. 2), from the infrared array camera, shows that the nebula is still actively forming stars. The wisps of red (represented as green in the combined image) are organic molecules mixed with dust, illuminated by nearby star formation. The infrared array camera picture is a four-channel, false-color composite, showing emission from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8.0 microns (red).

    The bottom right image (fig. 3) from the multiband imaging photometer shows the colder dust of the nebula and unwraps the youngest stellar babies from their dusty covering. This is a false-color image showing emission at 24 microns (red).
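
    The channel assignments described above (e.g., 24-micron emission rendered red, 8-micron green) amount to normalizing each band and stacking the results into RGB planes. A generic sketch of such false-color compositing, using synthetic stand-ins for the band maps:

    ```python
    import numpy as np

    def stretch(band, lo_pct=1, hi_pct=99):
        """Percentile-stretch one band into the 0..1 display range."""
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

    def false_color(red_band, green_band, blue_band):
        """Stack three normalized bands into an RGB false-color composite."""
        return np.dstack([stretch(red_band), stretch(green_band), stretch(blue_band)])

    # Synthetic stand-ins for the 24, 8, and 3.6 micron maps
    rng = np.random.default_rng(3)
    mips_24um, irac_8um, irac_3p6um = (rng.random((256, 256)) for _ in range(3))
    rgb = false_color(mips_24um, irac_8um, irac_3p6um)   # shape (256, 256, 3)
    ```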

  9. REMOTE SENSING IN OCEANOGRAPHY.

    DTIC Science & Technology

    ...remote sensing from satellites. Sensing of oceanographic variables from aircraft began with the photographing of waves and ice. Since then, remote measurement of sea surface temperatures and wave heights has become routine. Sensors tested for oceanographic applications include multi-band color cameras, radar scatterometers, infrared spectrometers and scanners, passive microwave radiometers, and radar imagers. Remote sensing has found its greatest application in providing rapid coverage of large oceanographic areas for synoptic analysis...

  10. Star Observations by Asteroid Multiband Imaging Camera (AMICA) on Hayabusa (MUSES-C) Cruising Phase

    NASA Astrophysics Data System (ADS)

    Saito, J.; Hashimoto, T.; Kubota, T.; Hayabusa AMICA Team

    Muses-C is the first Japanese asteroid mission and also a technology demonstration mission to the S-type asteroid 25143 Itokawa (1998 SF36). It was launched on May 9, 2003, and renamed Hayabusa after the spacecraft was confirmed to be on its interplanetary orbit. The spacecraft performed an Earth swingby for gravity assist on its way to Itokawa in May 2004. Arrival at Itokawa is scheduled for summer 2005. During the visit to Itokawa, remote-sensing observations with AMICA, NIRS (Near Infrared Spectrometer), XRS (X-ray Fluorescence Spectrometer), and LIDAR will be performed, and the spacecraft will descend and collect surface samples at touchdown. The captured asteroid sample will be returned to Earth in mid-2007. The telescopic optical navigation camera (ONC-T), with seven bandpass filters (plus one wide-band filter) and polarizers, is called AMICA (Asteroid Multiband Imaging CAmera) when used for scientific observations. AMICA's seven bandpass filters are nearly equivalent to the seven filters of the ECAS (Eight Color Asteroid Survey) system, so the spectroscopic data obtained will be compared with previous ECAS observations. AMICA also has four polarizers, located on one edge of the CCD chip (covering 1.1 x 1.1 degrees each), with which we can obtain polarimetric information on the target asteroid's surface. Since last November, we have planned test observations of stars and planets with AMICA and have successfully obtained these images. Here, we briefly report these observations and their calibration against ground-based observational data. In addition, we present the current status of AMICA.

  11. The design and application of a multi-band IR imager

    NASA Astrophysics Data System (ADS)

    Li, Lijuan

    2018-02-01

    Multi-band IR imaging systems have many applications in security, national defense, the petroleum and gas industry, etc., so the relevant technologies have attracted increasing attention in recent years. When used in missile warning and missile seeker systems, multi-band IR imaging offers high target recognition capability and a low false alarm rate if suitable spectral bands are selected. Compared with a traditional single-band IR imager, a multi-band IR imager can exploit spectral features in addition to spatial and temporal features to discriminate targets from background clutter and decoys. A key task is therefore to select spectral bands in which the feature difference between targets and false targets is evident and can be well utilized. A multi-band IR imager is a useful instrument for collecting multi-band IR images of targets, backgrounds, and decoys for spectral band selection studies, at low cost and with adjustable parameters compared with a commercial imaging spectrometer. In this paper, a multi-band IR imaging system is developed that collects images in 4 spectral bands of various scenes in a single pass and can be extended to other short-wave and mid-wave IR band combinations by changing filter groups. The system consists of a broadband optical system, a cryogenic InSb large-array detector, a spinning filter wheel, and an electronic processing system. Its performance is tested in real data collection experiments.

  12. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    The emergence of pixel-level spectral color filters has enabled the development of hyperspectral Full Motion Video (FMV) sensors operating at visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens broad possibilities for military and industrial use. Such cameras can classify materials and detect and track spectral signatures continuously in real time, while simultaneously providing the operator with enhanced-discrimination color video. Supporting these capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator, offering the best display of the content for the task at hand, e.g., spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or can use each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with our 9-band SWIR camera, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results with multi-frame super-resolution and with combined multi-frame and multi-band processing.
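
    For context on the demosaicking problem: in a mosaic-array camera each band is sampled on a sparse, shifted grid (a 3x3 tile for nine bands), and the unsampled pixels must be interpolated, which is where the spatial/spectral trade-off arises. A naive per-band bilinear sketch follows; the multi-band super-resolution approach described above is considerably more involved:

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def demosaic_band(mosaic, row_off, col_off, tile=3):
        """Bilinearly interpolate one band of a tile-pattern mosaic whose
        samples sit at (row_off::tile, col_off::tile)."""
        rows = np.arange(row_off, mosaic.shape[0], tile)
        cols = np.arange(col_off, mosaic.shape[1], tile)
        samples = mosaic[np.ix_(rows, cols)]
        interp = RegularGridInterpolator((rows, cols), samples,
                                         bounds_error=False, fill_value=None)
        rr, cc = np.mgrid[0:mosaic.shape[0], 0:mosaic.shape[1]]
        grid = np.stack([rr.ravel(), cc.ravel()], axis=1)
        return interp(grid).reshape(mosaic.shape)

    # Synthetic 9-band mosaic: band index = 3 * (row % 3) + (col % 3)
    rng = np.random.default_rng(4)
    mosaic = rng.random((129, 129))
    bands = [demosaic_band(mosaic, k // 3, k % 3) for k in range(9)]
    ```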

  13. Programmable Spectral Source and Design Tool for 3D Imaging Using Complementary Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam (Inventor); Korniski, Ronald J. (Inventor); Ream, Allen (Inventor); Shearn, Michael J. (Inventor); Shahinian, Hrayr Karnig (Inventor); Fritz, Eric W. (Inventor)

    2017-01-01

    An endoscopic illumination system for illuminating a subject for stereoscopic image capture includes a light source which outputs light; a first complementary multiband bandpass filter (CMBF) and a second CMBF, the first and second CMBFs being situated in first and second light paths, respectively, where the first CMBF and the second CMBF filter the light incident thereupon to output filtered light; and a camera which captures video images of the subject and generates corresponding video information, the camera receiving light reflected from the subject and passing through a pupil CMBF pair and a detection lens. The pupil CMBF includes a first pupil CMBF and a second pupil CMBF, the first pupil CMBF being identical to the first CMBF and the second pupil CMBF being identical to the second CMBF, and the detection lens includes one unpartitioned section that covers both the first pupil CMBF and the second pupil CMBF.

  14. AMICA: The First camera for Near- and Mid-Infrared Astronomical Imaging at Dome C

    NASA Astrophysics Data System (ADS)

    Straniero, O.; Dolci, M.; Valentini, A.; Valentini, G.; di Rico, G.; Ragni, M.; Giuliani, C.; di Cianno, A.; di Varano, I.; Corcione, L.; Bortoletto, F.; D'Alessandro, M.; Magrin, D.; Bonoli, C.; Giro, E.; Fantinel, D.; Zerbi, F. M.; Riva, A.; de Caprio, V.; Molinari, E.; Conconi, P.; Busso, M.; Tosti, G.; Abia, C. A.

    AMICA (Antarctic Multiband Infrared CAmera) is an instrument designed to perform astronomical imaging in the near- (1-5 μm) and mid- (5-27 μm) infrared wavelength regions. Equipped with two detectors, a 256x256 InSb array and a 128x128 Si:As IBC array, cooled to 35 K and 7 K respectively, it will be the first instrument to investigate the potential of the Italian-French base Concordia for IR astronomy. The main technical challenge is represented by the extreme conditions of Dome C (T ≈ -90 °C, p ≈ 640 mbar). An environmental control system ensures the correct start-up, shut-down, and housekeeping of the various components of the camera. AMICA will be mounted on the IRAIT telescope and will perform survey-mode observations of the Southern sky. Its first task is to provide important site-quality data. Substantial contributions to fundamental astrophysical questions, such as those related to the late phases of stellar evolution and to star formation processes, are also expected.

  15. Spitzer Finds Clarity in the Inner Milky Way

    NASA Technical Reports Server (NTRS)

    2008-01-01

    More than 800,000 frames from NASA's Spitzer Space Telescope were stitched together to create this infrared portrait of dust and stars radiating in the inner Milky Way.

    As inhabitants of a flat galactic disk, Earth and its solar system have an edge-on view of their host galaxy, like looking at a glass dish from its edge. From our perspective, most of the galaxy is condensed into a blurry narrow band of light that stretches completely around the sky, also known as the galactic plane.

    In this mosaic the galactic plane is broken up into five components: the far-left side of the plane (top image); the area just left of the galactic center (second to top); galactic center (middle); the area to the right of galactic center (second to bottom); and the far-right side of the plane (bottom). From Earth, the top two panels are visible to the northern hemisphere, and the bottom two images to the southern hemisphere. Together, these panels represent more than 50 percent of our entire Milky Way galaxy.

    The swaths of green represent organic molecules, called polycyclic aromatic hydrocarbons, which are illuminated by light from nearby star formation, while the thermal emission, or heat, from warm dust is rendered in red. Star-forming regions appear as swirls of red and yellow, where the warm dust overlaps with the glowing organic molecules. The blue specks sprinkled throughout the photograph are Milky Way stars. The bluish-white haze that hovers heavily in the middle panel is starlight from the older stellar population towards the center of the galaxy.

    This is a three-color composite that shows infrared observations from two Spitzer instruments. Blue represents 3.6-micron light and green shows light of 8 microns, both captured by Spitzer's infrared array camera. Red is 24-micron light detected by Spitzer's multiband imaging photometer.

    The Galactic Legacy Infrared Mid-Plane Survey Extraordinaire team (GLIMPSE) used the telescope's infrared array camera to see light from newborn stars, old stars and polycyclic aromatic hydrocarbons. A second group, the Multiband Imaging Photometer for Spitzer Galactic Plane Survey team (MIPSGAL), imaged dust in the inner galaxy with Spitzer's multiband imaging photometer.

  16. Observation of Possible Lava Tube Skylights by SELENE cameras

    NASA Astrophysics Data System (ADS)

    Haruyama, Junichi; Hiesinger, Harald; van der Bogert, Carolyn

    We have discovered three deep hole structures on the Moon in images from the Terrain Camera and Multi-band Imager on SELENE. These holes have large depth-to-diameter ratios: the Marius Hills Hole (MHH) is 65 m in diameter and 88-90 m deep, the Mare Tranquillitatis Hole (MTH) is 120 x 110 m in diameter and 180 m deep, and the Mare Ingenii Hole (MIH) is 140 x 110 m in diameter and more than 90 m deep. Neither volcanic material erupted from the holes nor dike-related pit craters are seen around them. They are possible lava tube skylights. These holes, and the tubes possibly connected to them, are of great scientific interest and have high potential as lunar base sites.

  17. AMICA (Antarctic Multiband Infrared CAmera) project

    NASA Astrophysics Data System (ADS)

    Dolci, Mauro; Straniero, Oscar; Valentini, Gaetano; Di Rico, Gianluca; Ragni, Maurizio; Pelusi, Danilo; Di Varano, Igor; Giuliani, Croce; Di Cianno, Amico; Valentini, Angelo; Corcione, Leonardo; Bortoletto, Favio; D'Alessandro, Maurizio; Bonoli, Carlotta; Giro, Enrico; Fantinel, Daniela; Magrin, Demetrio; Zerbi, Filippo M.; Riva, Alberto; Molinari, Emilio; Conconi, Paolo; De Caprio, Vincenzo; Busso, Maurizio; Tosti, Gino; Nucciarelli, Giuliano; Roncella, Fabio; Abia, Carlos

    2006-06-01

    The Antarctic Plateau offers unique opportunities for ground-based infrared astronomy. AMICA (Antarctic Multiband Infrared CAmera) is an instrument designed to perform astronomical imaging from Dome C in the near- (1-5 μm) and mid- (5-27 μm) infrared wavelength regions. The camera consists of two channels, equipped with a Raytheon InSb 256 array detector and a DRS MF-128 Si:As IBC array detector, cryocooled to 35 and 7 K respectively. Cryogenic mechanisms move a filter wheel and a sliding mirror used to feed the two detectors alternately. Fast control and readout, synchronized with the chopping secondary mirror of the telescope, will be required because of the large background expected at these wavelengths, especially beyond 10 μm. An environmental control system is needed to ensure the correct start-up, shut-down, and housekeeping of the camera. The main technical challenge is represented by the extreme environmental conditions of Dome C (T ≈ -90 °C, p ≈ 640 mbar) and the need for complete automation of the overall system. AMICA will be mounted at the Nasmyth focus of the 80 cm IRAIT telescope and will perform survey-mode automatic observations of selected regions of the Southern sky. The first goal will be a direct estimate of the observational quality of this new, highly promising site for infrared astronomy. In addition, IRAIT, equipped with AMICA, is expected to significantly improve our knowledge of fundamental astrophysical processes, such as the late stages of stellar evolution (especially AGB and post-AGB stars) and star formation.

  18. Archeological treasures protection based on early forest wildfire multi-band imaging detection system

    NASA Astrophysics Data System (ADS)

    Gouverneur, B.; Verstockt, S.; Pauwels, E.; Han, J.; de Zeeuw, P. M.; Vermeiren, J.

    2012-10-01

    Various visible and infrared cameras have been tested for the early detection of wildfires to protect archeological treasures. This analysis was made possible by the EU Firesense project (FP7-244088). Although visible cameras are low cost and give good results for smoke detection during daytime, they fall short under poor visibility conditions. To improve the fire detection probability and reduce false alarms, several infrared bands were tested, ranging from the NIR to the LWIR. The SWIR and LWIR bands are helpful for locating fire through smoke when there is a direct line of sight. Emphasis is also placed on physical and electro-optical system modeling for forest fire detection at short and long ranges. Fusion of the three bands (visible, SWIR, LWIR) is discussed at the pixel level for image enhancement and fire detection.
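
    Once the three bands are registered, pixel-level fusion can be as simple as a weighted per-pixel combination for display enhancement, or a per-pixel maximum that preserves hot spots from any band. A minimal NumPy sketch of both (illustrative only; the project's detection logic is not reproduced here):

    ```python
    import numpy as np

    def fuse_weighted(vis, swir, lwir, w=(0.5, 0.25, 0.25)):
        """Weighted average of three co-registered, 0..1-normalized bands."""
        return w[0] * vis + w[1] * swir + w[2] * lwir

    def fuse_max(vis, swir, lwir):
        """Per-pixel maximum; keeps bright fire signatures from any band."""
        return np.maximum.reduce([vis, swir, lwir])

    # Synthetic co-registered frames standing in for the three sensors
    rng = np.random.default_rng(5)
    vis, swir, lwir = (rng.random((240, 320)) for _ in range(3))
    display = fuse_weighted(vis, swir, lwir)     # enhanced image for operators
    hotspots = fuse_max(vis, swir, lwir) > 0.99  # crude hot-spot mask
    ```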

  19. VizieR Online Data Catalog: Spitzer observations of Taurus members (Luhman+, 2010)

    NASA Astrophysics Data System (ADS)

    Luhman, K. L.; Allen, P. R.; Espaillat, C.; Hartmann, L.; Calvet, N.

    2016-03-01

    For our census of the disk population in Taurus, we use images at 3.6, 4.5, 5.8, and 8.0um obtained with Spitzer's Infrared Array Camera (IRAC) and images at 24um obtained with the Multiband Imaging Photometer for Spitzer (MIPS). The cameras produced images with FWHM=1.6"-1.9" from 3.6 to 8.0um and FWHM=5.9" at 24um. The available data were obtained through Guaranteed Time Observations for PID = 6, 36, 37 (G. Fazio), 53 (G. Rieke), 94 (C. Lawrence), 30540 (G. Fazio, J. Houck), and 40302 (J. Houck), Director's Discretionary Time for PID = 462 (L. Rebull), Legacy programs for PID = 139, 173 (N. Evans), and 30816 (D. Padgett), and General Observer programs for PID = 3584 (D. Padgett), 20302 (P. Andre), 20386 (P. Myers), 20762 (J. Swift), 30384 (T. Bourke), 40844 (C. McCabe), and 50584 (D. Padgett). The IRAC and MIPS observations were performed through 180 and 137 Astronomical Observation Requests (AORs), respectively. The characteristics of the resulting images are summarized in Tables 1 and 2. (6 data files).

  20. Lunar and Planetary Science XXXV: Future Missions to the Moon

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This document contained the following topics: A Miniature Mass Spectrometer Module; SELENE Gamma Ray Spectrometer Using Ge Detector Cooled by Stirling Cryocooler; Lunar Elemental Composition and Investigations with D-CIXS X-Ray Mapping Spectrometer on SMART-1; X-Ray Fluorescence Spectrometer Onboard the SELENE Lunar Orbiter: Its Science and Instrument; Detectability of Degradation of Lunar Impact Craters by SELENE Terrain Camera; Study of the Apollo 16 Landing Site: As a Standard Site for the SELENE Multiband Imager; Selection of Targets for the SMART-1 Infrared Spectrometer (SIR); Development of a Telescopic Imaging Spectrometer for the Moon; The Lunar Seismic Network: Mission Update.

  1. Detection and mapping of volcanic rock assemblages and associated hydrothermal alteration with Thermal Infrared Multiband Scanner (TIMS) data Comstock Lode Mining District, Virginia City, Nevada

    NASA Technical Reports Server (NTRS)

    Taranik, James V.; Hutsinpiller, Amy; Borengasser, Marcus

    1986-01-01

    Thermal Infrared Multispectral Scanner (TIMS) data were acquired over the Virginia City area on September 12, 1984, at approximately 1130 hours local time (1723 IRIG). The TIMS data were analyzed using both photointerpretation and digital processing techniques. Karhunen-Loève transformations were used to display variations in radiant spectral emittance. The TIMS image data were compared with color infrared metric camera photography and LANDSAT Thematic Mapper (TM) data, and key areas were photographed in the field.

  2. VizieR Online Data Catalog: Candidate stellar bowshock nebulae from MIR (Kobulnicky+, 2016)

    NASA Astrophysics Data System (ADS)

    Kobulnicky, H. A.; Chick, W. T.; Schurhammer, D. P.; Andrews, J. E.; Povich, M. S.; Munari, S. A.; Olivier, G. M.; Sorber, R. L.; Wernke, H. N.; Dale, D. A.; Dixon, D. M.

    2017-01-01

    Our team conducted a visual examination of mid-infrared images from the SST and WISE to locate bowshock nebula candidates. The SST data included several wide-area surveys conducted using the Infrared Array Camera (IRAC) in its 3.6, 4.5, 5.8, and 8.0um bandpasses, along with 24um data from the Multiband Imaging Photometer for Spitzer (MIPS). The SST beam size at these bands is 1.66, 1.72, 1.88, 1.98, and 6" FWHM, respectively. The WISE data include images at the 3.4, 4.6, 12, and 22um bandpasses, which have beam sizes of 6.1, 6.4, 6.5, and 12" FWHM, respectively. (1 data file).

  3. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. As in color image processing, individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types, including linear, area, silicon, and InGaAs. The dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.

  4. Ge:Ga array development

    NASA Technical Reports Server (NTRS)

    Young, Erick T.; Rieke, G. H.; Low, Frank J.; Haller, E. E.; Beeman, J. W.

    1989-01-01

    Work at the University of Arizona and at Lawrence Berkeley Laboratory on the development of a far-infrared array camera for the Multiband Imaging Photometer on the Space Infrared Telescope Facility (SIRTF) is discussed. The camera design uses stacked linear arrays of Ge:Ga photoconductors to form a full two-dimensional array. Initial results from a 1 x 16 array using a thermally isolated J-FET readout are presented: dark currents below 300 electrons s^-1 and readout noise of 60 electrons were attained. Operation of these detectors in an ionizing radiation environment is discussed, and results of radiation testing using both low-energy gamma rays and protons are given. Work on advanced CMOS cascode readouts that promise lower-temperature operation and higher performance than the current J-FET based devices is described.

  5. High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations

    NASA Astrophysics Data System (ADS)

    Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas

    2007-10-01

    A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral systems, two multiband cameras (a 4-band IIS and a 9-band Itek), and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. The aerial films (24 cm x 24 cm format) tested were Agfa color negative and extended-red (visible and near-infrared) panchromatic, and Kodak color infrared and B&W (visible and near-infrared) infrared. All films were negative-processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density, providing the best film-density-to-digital conversion while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques, as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and the remotely sensed signal persisted. These errors must be considered endemic and may not be removable through even the most elaborate sampling setup. Results indicate that multispectral photogrammetric systems offer improved feature mapping capability.

  6. The Kaguya Mission: Science Achievements and Data Release

    NASA Astrophysics Data System (ADS)

    Kato, Manabu; Sasaki, Susumu; Takizawa, Yoshisada

    2010-05-01

    The lunar orbiter Kaguya (SELENE) impacted the Moon on July 10, 2009. The Kaguya mission observed the whole Moon for a total of twenty months: a three-month checkout phase, a ten-month nominal phase, and a seven-month extension. In the extended mission before the impact, measurements of the magnetic field and gamma rays from lower orbits were performed successfully, in addition to low-altitude observations by the Terrain Camera, Multiband Imager, and HDTV camera. New data on intense magnetic anomalies, and GRS data with higher spatial resolution, were acquired to study the elemental distribution and magnetism of the Moon. New information and insights have been brought to lunar science in topography, gravimetry, geology, mineralogy, lithology, and plasma physics. On November 1, 2009, the Kaguya team released the science data to the public, as internationally promised. The archived data can be accessed through the Kaguya homepage at JAXA, which also hosts an image gallery and a 3D GIS system.

  7. Computer processing of Mars Odyssey THEMIS IR imaging, MGS MOLA altimetry and Mars Express stereo imaging to locate Airy-0, the Mars prime meridian reference

    NASA Astrophysics Data System (ADS)

    Duxbury, Thomas; Neukum, Gerhard; Smith, David E.; Christensen, Philip; Neumann, Gregory; Albee, Arden; Caplinger, Michael; Seregina, N. V.; Kirk, Randolph L.

    The small crater Airy-0 was selected from Mariner 9 images to be the reference for the Mars prime meridian. Initial analyses were made in 2000 to tie Viking Orbiter and Mars Orbiter Camera images of Airy-0 to the evolving Mars Orbiter Laser Altimeter global digital terrain model to improve the location accuracy of Airy-0. Based upon this tie and radiometric tracking of landers and rovers from Earth, new expressions for the Mars spin axis direction, spin rate, and prime meridian epoch value were produced to define the orientation of the Martian surface in inertial space over time. Now that the Mars Global Surveyor mission and the Mars Orbiter Laser Altimeter global digital terrain model are complete, a more exhaustive study has been performed to determine the location of Airy-0 relative to the global terrain grid. THEMIS IR image cubes of the Airy and Gale crater regions were tied to the grid using precision stereo photogrammetric image processing techniques. The Airy-0 location was determined to be within 50 meters of the currently defined IAU prime meridian, an offset at the limiting absolute accuracy of the global terrain grid. Additional outputs of this study were a controlled multi-band photomosaic of Airy, precision alignment and geometric models of the ten THEMIS IR bands, and a controlled multi-band photomosaic of Gale crater used to validate the Mars Science Laboratory operational map products supporting their successful landing on Mars.

  8. MIPS - The Multiband Imaging Photometer for SIRTF

    NASA Technical Reports Server (NTRS)

    Rieke, G. H.; Lada, C.; Lebofsky, M.; Low, F.; Strittmatter, P.; Young, E.; Arens, J.; Beichman, C.; Gautier, T. N.; Werner, M.

    1986-01-01

    The Multiband Imaging Photometer for SIRTF (MIPS) is to be designed to reach as closely as possible the fundamental sensitivity and angular resolution limits of SIRTF over the 3 to 700 micron spectral region. It will use high-performance photoconductive detectors from 3 to 200 microns with integrating JFET amplifiers. From 200 to 700 microns, MIPS will use a bolometer cooled by an adiabatic demagnetization refrigerator. Over much of its operating range, MIPS will make possible observations at and beyond the conventional Rayleigh diffraction limit of angular resolution.

  9. Three dimensional modelling for the target asteroid of HAYABUSA

    NASA Astrophysics Data System (ADS)

    Demura, H.; Kobayashi, S.; Asada, N.; Hashimoto, T.; Saito, J.

    The Hayabusa program is the first Japanese sample return mission. It was launched on May 9, 2003, and will arrive at the target asteroid 25143 Itokawa in June 2005. The spacecraft has three optical navigation cameras: two wide-angle cameras and one telescopic camera. The telescope, with a filter wheel, is named AMICA (Asteroid Multiband Imaging CAmera). We are going to model the shape of the target asteroid with this telescope (expected resolution: 1 m/pixel at 10 km distance; field of view: 5.7 square degrees; MPP-type CCD with 1024 x 1000 pixels). Because Hayabusa is about 1 x 1 x 1 m in size, our goal is shape modeling with about 1 m precision, based on a camera system that scans via the asteroid's rotation. This image-based modeling requires sequential images from AMICA and a history of the distance between the asteroid and Hayabusa provided by a laser range finder. We established a system of hierarchically recursive search with sub-pixel matching of ground control points, which are selected with the SUSAN operator. The matched dataset is reconstructed under an epipolar geometry constraint, and the resulting set of three-dimensional points is converted to a polygon model by Delaunay triangulation. The current status of our shape modeling development is presented.
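
    The final step, converting matched ground control points into a polygon model with Delaunay triangulation, can be sketched with SciPy by triangulating in the image plane and lifting the vertices to 3D. The point data here are synthetic, and the single-view lifting is a simplification of the full pipeline:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(6)

    # Hypothetical matched ground control points: image (u, v) plus a range
    # value per point (in reality derived from the laser range finder)
    uv = rng.random((200, 2))
    range_km = 10.0 + rng.random(200)
    points_3d = np.column_stack([uv, range_km])

    # Triangulate connectivity in the 2D image plane, then lift to 3D
    tri = Delaunay(uv)
    faces = tri.simplices                  # (n_faces, 3) vertex indices
    vertices = points_3d
    print(f"{len(vertices)} vertices, {len(faces)} triangles")
    ```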

  10. Application of EREP, LANDSAT, and aircraft image data to environmental problems related to coal mining

    NASA Technical Reports Server (NTRS)

    Amato, R. V.; Russell, O. R.; Martin, K. R.; Wier, C. E.

    1975-01-01

    Remote sensing techniques were used to study coal mining sites within the Eastern Interior Coal Basin (Indiana, Illinois, and western Kentucky), the Appalachian Coal Basin (Ohio, West Virginia, and Pennsylvania), and the anthracite coal basins of northeastern Pennsylvania. The remote sensor data evaluated during these studies were acquired by LANDSAT, Skylab, and both high- and low-altitude aircraft. Airborne sensors included multispectral scanners, multiband cameras, and standard mapping cameras loaded with panchromatic, color, and color infrared films. The research conducted in these areas is a useful prerequisite to the development of an operational monitoring system that can be periodically employed to supply state and federal regulatory agencies with supportive data. Further research, however, must be undertaken to systematically examine those mining processes and features that can be monitored cost-effectively using remote sensors, and to determine what combination of sensors and ground sampling processes provides the optimum configuration for an operational system.

  11. New Views of a Familiar Beauty

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [Figures 1-5 removed for brevity, see original site]

    This image composite compares the well-known visible-light picture of the glowing Trifid Nebula (left panel) with infrared views from NASA's Spitzer Space Telescope (remaining three panels). The Trifid Nebula is a giant star-forming cloud of gas and dust located 5,400 light-years away in the constellation Sagittarius.

    The false-color Spitzer images reveal a different side of the Trifid Nebula. Where dark lanes of dust are visible trisecting the nebula in the visible-light picture, bright regions of star-forming activity are seen in the Spitzer pictures. All together, Spitzer uncovered 30 massive embryonic stars and 120 smaller newborn stars throughout the Trifid Nebula, in both its dark lanes and luminous clouds. These stars are visible in all the Spitzer images, mainly as yellow or red spots. Embryonic stars are developing stars about to burst into existence. Ten of the 30 massive embryos discovered by Spitzer were found in four dark cores, or stellar 'incubators,' where stars are born. Astronomers using data from the Institute of Radioastronomy millimeter telescope in Spain had previously identified these cores but thought they were not quite ripe for stars. Spitzer's highly sensitive infrared eyes were able to penetrate all four cores to reveal rapidly growing embryos.

    Astronomers can actually count the individual embryos tucked inside the cores by looking closely at the Spitzer image taken by its infrared array camera (figure 4). This instrument has the highest spatial resolution of Spitzer's imaging cameras. The Spitzer image from the multiband imaging photometer (figure 5), on the other hand, specializes in detecting cooler materials. Its view highlights the relatively cool core material falling onto the Trifid's growing embryos. The middle panel is a combination of Spitzer data from both of these instruments.

    The embryos are thought to have been triggered by a massive 'type O' star, which can be seen as a white spot at the center of the nebula in all four images. Type O stars are the most massive stars, ending their brief lives in explosive supernovas. The small newborn stars probably arose at the same time as the O star, and from the same original cloud of gas and dust.

    The Spitzer infrared array camera image is a three-color composite of invisible light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 and 8.0 microns (red). The Spitzer multiband imaging photometer image (figure 3) shows 24-micron emissions. The Spitzer mosaic image combines data from these pictures, showing light of 4.5 microns (blue), 8.0 microns (green) and 24 microns (red). The visible-light image (figure 2) is from the National Optical Astronomy Observatory, Tucson, Ariz.

  12. Space Infrared Telescope Facility (SIRTF) science instruments

    NASA Technical Reports Server (NTRS)

    Ramos, R.; Hing, S. M.; Leidich, C. A.; Fazio, G.; Houck, J. R.

    1989-01-01

    Concepts of scientific instruments designed to perform infrared astronomical tasks such as imaging, photometry, and spectroscopy are discussed as part of the Space Infrared Telescope Facility (SIRTF) project under definition study at NASA/Ames Research Center. The instruments are the multiband imaging photometer, the infrared array camera, and the infrared spectrograph. SIRTF, a cryogenically cooled infrared telescope with an aperture in the 1-meter range, operating at wavelengths as short as 2.5 microns and carrying multiple instruments with high sensitivity and low background, provides the capability to carry out basic astronomical investigations such as the deep search for very distant protogalaxies, quasi-stellar objects, and missing mass; infrared emission from galaxies; star formation and the interstellar medium; and the composition and structure of the atmospheres of the outer planets in the solar system.

  13. Interleaved diffusion-weighted EPI improved by adaptive partial-Fourier and multi-band multiplexed sensitivity-encoding reconstruction

    PubMed Central

    Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei

    2014-01-01

    Purpose: We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion-weighted imaging (DWI). Methods: First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive Homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either a) motion-induced k-space energy peak displacement, or b) susceptibility field gradient induced fast phase changes. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results: The new adaptive Homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion: The integration of the multi-band and adaptive Homodyne-MUSE algorithms significantly improves the spatial resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000

  14. High Resolution Airborne Digital Imagery for Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley R.

    1998-01-01

    The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000-acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management, but few have examined the usefulness of high-resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 x 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, the pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).
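
    The 10 cm pixel figure follows from the usual ground sample distance relation GSD = H x p / f. The sketch below reproduces it with a hypothetical 9-micron pixel pitch and 28 mm focal length (assumed values, not published specifications of the DCS450 system):

    ```python
    def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
        """Nadir ground footprint of one pixel: GSD = H * p / f."""
        return altitude_m * pixel_pitch_m / focal_length_m

    altitude = 1000 * 0.3048     # 1,000 ft in meters
    pixel_pitch = 9e-6           # 9 micron pixels (assumed)
    focal_length = 28e-3         # 28 mm lens (assumed)
    gsd = ground_sample_distance(altitude, pixel_pitch, focal_length)
    print(f"GSD = {gsd:.3f} m")  # ~0.098 m, roughly the 10 cm cited above
    ```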

  15. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    NASA Technical Reports Server (NTRS)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single-objective-lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other, hence yielding a different color image relative to the other. This color mismatch between the two viewpoints could lead to color rivalry, where the human visual system fails to fuse two different colors. The mismatch shrinks as the number of passbands in a CMBF increases, although the number of passbands is constrained by cost and fabrication technique. In this paper, a simulation predicting the color mismatch is reported.

  16. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods

    PubMed Central

    Toet, Alexander; Hogervorst, Maarten A.; Pinkus, Alan R.

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7μm), near-infrared (NIR, 0.7–1.0μm) and long-wave infrared (LWIR, 8–14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance. PMID:28036328

  17. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    PubMed

    Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.
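
    The color remapping mentioned at the end of both records, using the daytime photographs' color statistics to give the fused imagery a natural appearance, is often implemented as a global statistics match in the spirit of Reinhard-style color transfer. A minimal per-channel mean/variance matching sketch, not the TRICLOBS processing chain itself:

    ```python
    import numpy as np

    def match_color_statistics(fused, reference):
        """Shift and scale each channel of a false-color image so its mean
        and standard deviation match those of a daytime reference photo."""
        out = np.empty_like(fused, dtype=float)
        for c in range(3):
            src, ref = fused[..., c], reference[..., c]
            out[..., c] = (src - src.mean()) / (src.std() + 1e-12) \
                          * ref.std() + ref.mean()
        return np.clip(out, 0.0, 1.0)

    rng = np.random.default_rng(7)
    triband = rng.random((480, 640, 3))   # Vis/NIR/LWIR mapped to R/G/B
    daytime = rng.random((480, 640, 3))   # reference color photo of the scene
    natural = match_color_statistics(triband, daytime)
    ```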

  18. Multiband super-resolution imaging of graded-index photonic crystal flat lens

    NASA Astrophysics Data System (ADS)

    Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun

    2018-05-01

    Multiband super-resolution imaging of a point source is achieved with a graded-index photonic crystal flat lens. Calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices show that super-resolution imaging of a point source can be realized through different physical mechanisms in three different bands. In the first band, the imaging is based on the far-field condition of the spherical wave; in the second band, it is based on the negative effective refractive index and exhibits higher imaging quality than the CPC; in the fifth band, it arises mainly from negative refraction on anisotropic equi-frequency surfaces. Employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is of considerable interest for the imaging field.

  19. Multi-Wavelength Views of Messier 81

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figures removed for brevity, see original site]

    The magnificent spiral arms of the nearby galaxy Messier 81 are highlighted in this image from NASA's Spitzer Space Telescope. Located in the northern constellation of Ursa Major (which also includes the Big Dipper), this galaxy is easily visible through binoculars or a small telescope. M81 is located at a distance of 12 million light-years.

    The main image is a composite mosaic obtained with the multiband imaging photometer for Spitzer and the infrared array camera. Thermal infrared emission at 24 microns detected by the photometer (red, bottom left inset) is combined with camera data at 8.0 microns (green, bottom center inset) and 3.6 microns (blue, bottom right inset).

    A visible-light image of Messier 81, obtained at Kitt Peak National Observatory, a ground-based telescope, is shown in the upper right inset. Both the visible-light picture and the 3.6-micron near-infrared image trace the distribution of stars, although the Spitzer image is virtually unaffected by obscuring dust. Both images reveal a very smooth stellar mass distribution, with the spiral arms relatively subdued.

    As one moves to longer wavelengths, the spiral arms become the dominant feature of the galaxy. The 8-micron emission is dominated by infrared light radiated by hot dust that has been heated by nearby luminous stars. Dust in the galaxy is bathed by ultraviolet and visible light from nearby stars. Upon absorbing an ultraviolet or visible-light photon, a dust grain is heated and re-emits the energy at longer infrared wavelengths. The dust particles are composed of silicates (chemically similar to beach sand), carbonaceous grains and polycyclic aromatic hydrocarbons and trace the gas distribution in the galaxy. The well-mixed gas (which is best detected at radio wavelengths) and dust provide a reservoir of raw materials for future star formation.

    The 24-micron multiband imaging photometer image shows emission from warm dust heated by the most luminous young stars. The infrared-bright clumpy knots within the spiral arms show where massive stars are being born in giant H II (ionized hydrogen) regions. Studying the locations of these star forming regions with respect to the overall mass distribution and other constituents of the galaxy (e.g., gas) will help identify the conditions and processes needed for star formation.

  20. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main image processing task in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter-array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which do not hold for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.
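
    As a rough illustration of the demosaicking task (a nearest-sample baseline under assumed inputs, not the edge-guided or super-resolution algorithms of the paper), the following sketch reconstructs nine full-resolution bands from a raw frame sampled with a repeating 3x3 spectral filter array.

      import numpy as np
      from scipy.ndimage import convolve

      def demosaic_9band(raw, pattern):
          # raw: (H, W) frame from the FPA; pattern: 3x3 array giving the
          # band index (0-8) at each position of the repeating filter tile.
          # With H, W multiples of 3 and periodic boundaries, every 3x3
          # window holds exactly one sample of each band, so each pixel is
          # filled from its nearest sample of that band.
          h, w = raw.shape
          yy, xx = np.mgrid[0:h, 0:w]
          kernel = np.ones((3, 3))
          bands = np.empty((9, h, w))
          for b in range(9):
              mask = (pattern[yy % 3, xx % 3] == b).astype(float)
              num = convolve(raw * mask, kernel, mode='wrap')
              den = convolve(mask, kernel, mode='wrap')
              bands[b] = num / np.maximum(den, 1e-12)
          return bands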

  1. High-order multiband encoding in the heart.

    PubMed

    Cunningham, Charles H; Wright, Graham A; Wood, Michael L

    2002-10-01

    Spatial encoding with multiband selective excitation (e.g., Hadamard encoding) has been restricted to a small number of slices because the RF pulse becomes unacceptably long when more than about eight slices are encoded. In this work, techniques to shorten multiband RF pulses, and thus allow larger numbers of slices, are investigated. A method for applying the techniques while retaining the capability of adaptive slice thickness is outlined. A tradeoff between slice thickness and pulse duration is shown. Simulations and experiments with the shortened pulses confirmed that motion-induced excitation profile blurring and phase accrual were reduced. The connection between gradient hardware limitations, slice thickness, and flow sensitivity is shown. Excitation profiles for encoding 32 contiguous slices of 1-mm thickness were measured experimentally, and the artifact resulting from errors in the timing of the RF pulse relative to the gradient was investigated. A multiband technique for imaging 32 contiguous 2-mm slices, with adaptive slice thickness, was developed and demonstrated for coronary artery imaging in healthy subjects. With the ability to image high numbers of contiguous slices using relatively short (1-2 ms) RF pulses, multiband encoding has been advanced further toward practical application. Copyright 2002 Wiley-Liss, Inc.
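
    The linear-algebra core of Hadamard multiband encoding is compact enough to sketch. The toy fragment below (which says nothing about the RF pulse shortening that is the subject of the paper) encodes n slices with the rows of a +1/-1 Hadamard matrix and recovers them with the transpose:

      import numpy as np
      from scipy.linalg import hadamard

      n = 8                                # number of encoded slices
      H = hadamard(n)                      # +1/-1 entries, H @ H.T = n*I
      slices = np.random.rand(n, 64, 64)   # hypothetical slice images
      encoded = np.tensordot(H, slices, axes=1)         # n acquisitions
      decoded = np.tensordot(H.T, encoded, axes=1) / n  # unmixed slices
      assert np.allclose(decoded, slices)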

  2. VizieR Online Data Catalog: The YSO population of LDN 1340 in infrared (Kun+, 2016)

    NASA Astrophysics Data System (ADS)

    Kun, M.; Wolf-Chase, G.; Moor, A.; Apai, D.; Balog, Z.; O'Linger-Luscusk, J.; Moriarty-Schieven, G. H.

    2016-07-01

    L1340 was observed by the Spitzer Space Telescope using Spitzer's Infrared Array Camera (IRAC) on 2009 March 16 and the Multiband Imaging Photometer for Spitzer (MIPS) on 2008 November 26 (Prog. ID: 50691, PI: G. Fazio). The IRAC observations covered ~1deg2 in all four bands. Moreover, a small part of the cloud, centered on RNO 7, was observed in the four IRAC bands on 2006 September 24 (Prog. ID: 30734, PI: D. Figer). We selected candidate YSOs from the Spitzer Enhanced Imaging Products (SEIP) Source List, containing 19745 point sources in the target field. High angular resolution near-infrared images of two small regions of L1340 were obtained on 2002 October 24 in the JHK bands, using the near-infrared camera Omega-Cass, mounted on the 3.5m telescope at the Calar Alto Observatory, Spain. The results for IRAS 02224+7227 have been shown in Kun et al. (2014, J/ApJ/795/L26). Here we present the results for RNO 7. To classify the evolutionary status of the color-selected candidate YSOs and obtain as complete a picture of the SFR and its YSO population as possible, we supplemented the Spitzer data with photometric data available in public databases. See section 2.3 for further details. (13 data files).

  3. Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)

    2018-01-01

    A dual-objective endoscope for insertion into a body cavity to provide a stereoscopic image of a region of interest (ROI) inside the body, including an imaging device at the distal end for obtaining optical images of the ROI and processing them into video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA that convert the optical images into the video signals. The imaging device includes right and left pupils that receive right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands matched to the right and left conjugated multi-bandpass filters. A full-color image is collected after three or six sequential illuminations with red, green, and blue light.

  4. VizieR Online Data Catalog: From optical to infrared photometry of SN 2013dy (Pan+, 2015)

    NASA Astrophysics Data System (ADS)

    Pan, Y.-C.; Foley, R. J.; Kromer, M.; Fox, O. D.; Zheng, W.; Challis, P.; Clubb, K. I.; Filippenko, A. V.; Folatelli, G.; Graham, M. L.; Hillebrandt, W.; Kirshner, R. P.; Lee, W. H.; Pakmor, R.; Patat, F.; Phillips, M. M.; Pignata, G.; Ropke, F.; Seitenzahl, I.; Silverman, J. M.; Simon, J. D.; Sternberg, A.; Stritzinger, M. D.; Taubenberger, S.; Vinko, J.; Wheeler, J. C.

    2017-11-01

    We obtained broad-band BVRI photometry of SN 2013dy with the 0.76m Katzman Automatic Imaging Telescope (KAIT; Filippenko et al. 2001ASPC..246..121F). The multiband images were observed with the KAIT4 filter set from -16d to +337d relative to B-band maximum (MJD=56501.105). We also obtained riZYJH photometry of SN 2013dy with the multichannel Reionization And Transients InfraRed camera (RATIR; Butler et al. 2012SPIE.8446E..10B) mounted on the 1.5m Johnson telescope at the Mexican Observatorio Astronomico Nacional on Sierra San Pedro Martir in Baja California, Mexico (Watson et al. 2012SPIE.8444E..5LW). Typical observations include a series of 80s exposures in the ri bands and 60s exposures in the ZYJH bands, with dithering between exposures. (2 data files).

  5. New Infrared Emission Features and Spectral Variations in Ngc 7023

    NASA Technical Reports Server (NTRS)

    Werner, M. W.; Uchida, K. I.; Sellgren, K.; Marengo, M.; Gordon, K. D.; Morris, P. W.; Houck, J. R.; Stansberry, J. A.

    2004-01-01

    We observed the reflection nebula NGC 7023 with the Short-High module and the long-slit Short-Low and Long-Low modules of the Infrared Spectrograph on the Spitzer Space Telescope. We also present Infrared Array Camera (IRAC) and Multiband Imaging Photometer for Spitzer (MIPS) images of NGC 7023 at 3.6, 4.5, 8.0, and 24 μm. We observe the aromatic emission features (AEFs) at 6.2, 7.7, 8.6, 11.3, and 12.7 μm, plus a wealth of weaker features. We find new unidentified interstellar emission features at 6.7, 10.1, 15.8, 17.4, and 19.0 μm. Possible identifications include aromatic hydrocarbons or nanoparticles of unknown mineralogy. We see variations in relative feature strengths, central wavelengths, and feature widths, in the AEFs and weaker emission features, depending on both distance from the star and nebular position (southeast vs. northwest).

  6. Multi-Band Miniaturized Patch Antennas for a Compact, Shielded Microwave Breast Imaging Array.

    PubMed

    Aguilar, Suzette M; Al-Joumayly, Mudar A; Burfeindt, Matthew J; Behdad, Nader; Hagness, Susan C

    2013-12-18

    We present a comprehensive study of a class of multi-band miniaturized patch antennas designed for use in a 3D enclosed sensor array for microwave breast imaging. Miniaturization and multi-band operation are achieved by loading the antenna with non-radiating slots at strategic locations along the patch. This results in symmetric radiation patterns and similar radiation characteristics at all frequencies of operation. Prototypes were fabricated and tested in a biocompatible immersion medium. Excellent agreement was obtained between simulations and measurements. The trade-off between miniaturization and radiation efficiency within this class of patch antennas is explored via a numerical analysis of the effects of the location and number of slots, as well as the thickness and permittivity of the dielectric substrate, on the resonant frequencies and gain. Additionally, we compare 3D quantitative microwave breast imaging performance achieved with two different enclosed arrays of slot-loaded miniaturized patch antennas. Simulated array measurements were obtained for a 3D anatomically realistic numerical breast phantom. The reconstructed breast images generated from miniaturized patch array data suggest that, for the realistic noise power levels assumed in this study, the variations in gain observed across this class of multi-band patch antennas do not significantly impact the overall image quality. We conclude that these miniaturized antennas are promising candidates as compact array elements for shielded, multi-frequency microwave breast imaging systems.

  7. Improved colour matching technique for fused nighttime imagery with daytime colours

    NASA Astrophysics Data System (ADS)

    Hogervorst, Maarten A.; Toet, Alexander

    2016-10-01

    Previously, we presented a method for applying daytime colours to fused nighttime (e.g., intensified and LWIR) imagery (Toet and Hogervorst, Opt. Eng. 51(1), 2012). Our colour mapping not only imparts a natural daylight appearance to multiband nighttime images but also enhances the contrast and visibility of otherwise obscured details. As a result, this colourizing method leads to increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness (Toet et al., Opt. Eng. 53(4), 2014). A crucial step in this colouring process is the choice of a suitable colour mapping scheme. When daytime colour images and multiband sensor images of the same scene are available, the colour mapping can be derived from matching image samples (i.e., by relating colour values to sensor signal intensities). When no exactly matching reference images are available, the colour transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image (Toet, Info. Fus. 4(3), 2003). In the current study we investigated new colour fusion schemes that combine the advantages of both methods, using the correspondence between multiband sensor values and daytime colours (first method) in a smooth transformation (second method). We designed and evaluated three new fusion schemes that focus on: i) a closer match with the daytime luminances, ii) improved saliency of hot targets, and iii) improved discriminability of materials.
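
    The statistics-based mapping (the second method) amounts to matching first-order moments of the fused image to those of a daytime reference. A minimal sketch follows; published schemes typically operate in a decorrelated colour space, whereas plain per-channel RGB matching is used here for brevity:

      import numpy as np

      def match_first_order_statistics(fused, reference):
          # Shift and scale each channel of the false-colour fused image
          # so its mean and standard deviation match those of a daytime
          # reference photograph.
          out = np.empty(fused.shape, dtype=float)
          for ch in range(fused.shape[-1]):
              src = fused[..., ch].astype(float)
              ref = reference[..., ch].astype(float)
              out[..., ch] = (src - src.mean()) / (src.std() + 1e-9) \
                             * ref.std() + ref.mean()
          return np.clip(out, 0.0, 255.0)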

  8. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    NASA Astrophysics Data System (ADS)

    Swain, Pradyumna; Mark, David

    2004-09-01

    The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral, ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised around curved CCD applications, in conjunction with large format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associated wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat-CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.

  9. First Solar System Results of the Spitzer Space Telescope

    NASA Technical Reports Server (NTRS)

    VanCleve, J.; Cruikshank, D. P.; Stansberry, J. A.; Burgdorf, M. J.; Devost, D.; Emery, J. P.; Fazio, G.; Fernandez, Y. R.; Glaccum, W.; Grillmair, C.

    2004-01-01

    The Spitzer Space Telescope, formerly known as SIRTF, is now operational and delivers unprecedented sensitivity for the observation of Solar System targets. Spitzer's capabilities and first general results were presented at the January 2004 AAS meeting. In this poster, we focus on Spitzer's performance for moving targets, and the first Solar System results. Spitzer has three instruments: IRAC, IRS, and MIPS. IRAC (InfraRed Array Camera) provides simultaneous images at wavelengths of 3.6, 4.5, 5.8, and 8.0 microns. IRS (InfraRed Spectrograph) has 4 modules providing low-resolution (R=60-120) spectra from 5.3 to 40 microns, high-resolution (R=600) spectra from 10 to 37 microns, and an autonomous target acquisition system (PeakUp) which includes small-field imaging at 15 microns. MIPS (Multiband Imaging Photometer for SIRTF) does imaging photometry at 24, 70, and 160 microns and low-resolution (R=15-25) spectroscopy (SED) between 55 and 96 microns. Guaranteed Time Observer (GTO) programs include the moons of the outer Solar System, Pluto, Centaurs, Kuiper Belt Objects, and comets.

  10. Hubble Tarantula Treasury Project: Unraveling Tarantula's Web. I. Observational Overview and First Results

    NASA Technical Reports Server (NTRS)

    Sabbi, E.; Anderson, J.; Lennon, D. J.; van der Marel, R. P.; Aloisi, A.; Boyer, Martha L.; Cignoni, M.; De Marchi, G.; De Mink, S. E.; Evans, C. J.; et al.

    2013-01-01

    The Hubble Tarantula Treasury Project (HTTP) is an ongoing panchromatic imaging survey of stellar populations in the Tarantula Nebula in the Large Magellanic Cloud that reaches into the sub-solar mass regime (<0.5 solar masses). HTTP utilizes the capability of the Hubble Space Telescope to operate the Advanced Camera for Surveys and the Wide Field Camera 3 in parallel to study this remarkable region in the near-ultraviolet, optical, and near-infrared spectral regions, including narrow-band Hα images. The combination of all these bands provides a unique multi-band view. The resulting maps of the stellar content of the Tarantula Nebula within its main body provide the basis for investigations of star formation in an environment resembling the extreme conditions found in starburst galaxies and in the early universe. Access to detailed properties of individual stars allows us to begin to reconstruct the temporal and spatial evolution of the stellar skeleton of the Tarantula Nebula on a sub-parsec scale. In this first paper we describe the observing strategy, the photometric techniques, and the upcoming data products from this survey, and present preliminary results obtained from the analysis of the initial set of near-infrared observations.

  11. Focal plane arrays based on Type-II indium arsenide/gallium antimonide superlattices

    NASA Astrophysics Data System (ADS)

    Delaunay, Pierre-Yves

    The goal of this work is to demonstrate that Type-II InAs/GaSb superlattices can perform high-quality infrared imaging from the middle (MWIR) to the long (LWIR) wavelength infrared range. Theoretically, focal plane arrays (FPAs) based on this technology could be operated at higher temperatures, with lower dark currents, than the leading HgCdTe platform. This effort focuses on the fabrication of MWIR and LWIR FPAs with performance similar to existing infrared cameras. Some applications in the MWIR require fast, sensitive imagers able to sustain frame rates up to 100 Hz. Such speed can only be achieved with photon detectors; however, these cameras need to be operated below 170 K. Current research in this spectral band focuses on increasing the operating temperature of the FPA to a point where cooling could be performed with compact and reliable thermoelectric coolers. A Type-II superlattice was used to demonstrate a camera with performance similar to HgCdTe that could be operated up to room temperature. At 80 K, the camera could detect temperature differences as low as 10 mK for integration times shorter than 25 ms. In the LWIR, the electrical performance of Type-II photodiodes is mainly limited by surface leakage. Aggressive processing steps such as hybridization and underfill can increase the dark current of the devices by several orders of magnitude. New cleaning and passivation techniques were used to reduce the dark current of FPA diodes by two orders of magnitude. The absorbing GaSb substrate was also removed to increase the quantum efficiency of the devices up to 90%. At 80 K, an FPA with a 9.6 μm 50%-cutoff in responsivity was able to detect temperature differences as low as 19 mK, limited only by the performance of the testing system. The non-uniformity in responsivity reached 3.8% for a 98.2% operability. The third generation of infrared cameras is based on multi-band imaging to improve the recognition capabilities of the imager. Preliminary detectors based on back-to-back diodes presented performance similar to single-color devices; the quantum efficiency was measured higher than 40% for both bands. Preliminary imaging results were demonstrated in the LWIR.

  12. Quaternion-Based Texture Analysis of Multiband Satellite Images: Application to the Estimation of Aboveground Biomass in the East Region of Cameroon.

    PubMed

    Djiongo Kenfack, Cedrigue Boris; Monga, Olivier; Mpong, Serge Moto; Ndoundam, René

    2018-03-01

    Within the last decade, several approaches using quaternion numbers to handle and model multiband images in a holistic manner were introduced. The quaternion Fourier transform can be efficiently used to model texture in multidimensional data such as color images. For practical applications, multispectral satellite data are a primary source for measuring past trends and monitoring changes in forest carbon stocks. In this work, we propose a texture-color descriptor based on the quaternion Fourier transform to extract relevant information from multiband satellite images. We propose a new multiband image texture-model extraction method, called FOTO++, to address biomass estimation issues. The first stage consists in removing noise from the multispectral data while preserving the edges of canopies. Afterward, color texture descriptors are extracted using a discrete form of the quaternion Fourier transform, and finally the support vector regression method is used to deduce biomass estimates from texture indices. Our texture features are modeled as a vector composed of the radial spectrum coming from the amplitude of the quaternion Fourier transform. We conduct several experiments in order to study the sensitivity of our model to acquisition parameters. We also assess its performance both on synthetic images and on real multispectral images of Cameroonian forest. The results show that our model is more robust to acquisition parameters than the classical Fourier Texture Ordination model (FOTO). Our scheme is also more accurate for aboveground biomass estimation. We stress that a similar methodology could be implemented using quaternion wavelets. These results highlight the potential of the quaternion-based approach to study multispectral satellite images.
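
    The radial-spectrum texture feature can be sketched compactly. The fragment below averages a 2-D Fourier amplitude over annuli of spatial frequency; an ordinary FFT amplitude stands in for the quaternion Fourier transform of the paper:

      import numpy as np

      def radial_spectrum(amplitude, n_bins=32):
          # Mean Fourier amplitude in n_bins annuli of spatial frequency;
          # the resulting vector serves as a texture descriptor,
          # e.g. radial_spectrum(np.abs(np.fft.fft2(image_band))).
          h, w = amplitude.shape
          fy = np.fft.fftfreq(h)[:, None]
          fx = np.fft.fftfreq(w)[None, :]
          r = np.hypot(fx, fy).ravel()
          idx = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
          sums = np.bincount(idx, amplitude.ravel(), minlength=n_bins)
          counts = np.bincount(idx, minlength=n_bins)
          return sums / np.maximum(counts, 1)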

  13. Daylight coloring for monochrome infrared imagery

    NASA Astrophysics Data System (ADS)

    Gabura, James

    2015-05-01

    The effectiveness of infrared imagery in poor visibility situations is well established, and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However, there is a problem in that the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness, consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single-band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long-wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscured details. We anticipate that this colorizing method will lead to a better user experience, faster reaction times and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture-discerning feature by adding specialized filters for discriminating specific targets.

  14. AMIE SMART-1: review of results and legacy 10 years after launch

    NASA Astrophysics Data System (ADS)

    Josset, Jean-Luc; Souchon, Audrey; Josset, Marie; Foing, Bernard

    2014-05-01

    The Advanced Moon micro-Imager Experiment (AMIE) camera was launched in September 2003 onboard the ESA SMART-1 spacecraft. We review the technical characteristics, scientific objectives and results of the instrument, 10 years after its launch. The AMIE camera is an ultra-compact imaging system that includes a tele-objective with a 5.3° x 5.3° field of view and an imaging sensor of 1024 x 1024 pixels. It is dedicated to spectral imaging with three spectral filters (750, 915 and 960 nm), photometric measurements (filter-free CCD area), and the laser-link experiment (laser filter at 847 nm). The AMIE camera was designed to acquire high-resolution images of the lunar surface, in white light and in specific spectral bands, under a number of different viewing conditions and geometries. Specifically, its main scientific objectives included: (i) imaging of high-latitude regions in the southern hemisphere, in particular the South Pole-Aitken basin and the permanently shadowed regions close to the South Pole; (ii) determination of the photometric properties of the lunar surface from observations at different phase angles (physical properties of the regolith); (iii) multi-band imaging for constraining the chemical and mineral composition of the surface; (iv) detection and characterisation of lunar non-mare volcanic units; (v) study of lithological variations from impact craters and implications for crustal heterogeneity. The study of AMIE images enhanced the knowledge of the lunar surface, in particular regarding photometric modelling and surface physical properties of localized lunar areas and geological units. References: http://scholar.google.nl/scholar?q=smart-1+amie We acknowledge ESA, member states, industry and institutes for their contribution, and the members of the AMIE Team: J.-L. Josset, P. Plancke, Y. Langevin, P. Cerroni, M. C. De Sanctis, P. Pinet, S. Chevrel, S. Beauvivre, B.A. Hofmann, M. Josset, D. Koschny, M. Almeida, K. Muinonen, J. Piironen, M. A. Barucci, P. Ehrenfreund, Yu. Shkuratov, V. Shevchenko, Z. Sodnik, S. Mancuso, F. Ankersen, B.H. Foing, and other associated scientists, collaborators, students and colleagues.

  15. VizieR Online Data Catalog: YSO candidates in the Magellanic Bridge (Chen+, 2014)

    NASA Astrophysics Data System (ADS)

    Chen, C.-H. R.; Indebetouw, R.; Muller, E.; Kawamura, A.; Gordon, K. D.; Sewilo, M.; Whitney, B. A.; Fukui, Y.; Madden, S. C.; Meade, M. R.; Meixner, M.; Oliveira, J. M.; Robitaille, T. P.; Seale, J. P.; Shiao, B.; van Loon, J. Th.

    2017-06-01

    The Spitzer observations of the Bridge were obtained as part of the Legacy Program "Surveying the Agents of Galaxy Evolution in the Tidally-Stripped, Low-Metallicity Small Magellanic Cloud" (SAGE-SMC; Gordon et al. 2011AJ....142..102G). These observations included images taken at 3.6, 4.5, 5.8, and 8.0 um bands with the InfraRed Array Camera (IRAC) and at 24, 70, and 160 um bands with the Multiband Imaging Photometer for Spitzer (MIPS). The details of data processing are given in Gordon et al. (2011AJ....142..102G). To construct multi-wavelength SEDs for sources in the Spitzer catalog, we have expanded it by adding photometry from optical and NIR surveys covering the Bridge, i.e., BRI photometry from the Super COSMOS Sky Surveys (SSS; Hambly et al. 2001MNRAS.326.1279H) and JHKs photometry from the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006AJ....131.1163S, Cat. VII/233). (5 data files).

  16. Exoplanetary Photometry

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph

    2008-09-01

    The Spitzer Space Telescope measured the first photons from exoplanets (Charbonneau et al. 2005, Deming et al. 2005). These secondary eclipses (planet passing behind star) revealed the planet's emitted infrared flux, and under a blackbody assumption provide a brightness temperature in each measured bandpass. Since the initial direct detections, Spitzer has made numerous measurements in the four Infrared Array Camera bandpasses at 3.6, 4.5, 5.8, and 8.0 microns; the Infrared Spectrograph's Blue Peakup Array at 16 microns; and the Multiband Imaging Photometer for Spitzer's 24-micron array. Initial measurements of orbital variation and further photometric study (Harrington et al. 2006, 2007) revealed the extreme day-night variability of some exoplanets, but full orbital phase curves of different planets (Knutson et al. 2007, 2008) demonstrated that not all planets are so variable. This talk will review progress and prospects in exoplanetary photometry.
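
    Under the blackbody assumption, converting a measured eclipse depth into a brightness temperature is a direct inversion of the Planck function; the depth, radius ratio and stellar temperature below are hypothetical placeholders:

      import numpy as np

      h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

      def planck(lam, T):
          # Spectral radiance B_lambda(T) in W m^-3 sr^-1.
          return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

      def brightness_temperature(lam, B):
          # Invert the Planck function at wavelength lam.
          return h * c / (lam * k) / np.log1p(2 * h * c**2 / (lam**5 * B))

      # Eclipse depth d = (Rp/Rs)^2 * B_planet/B_star fixes the planet's
      # band-averaged radiance, and hence its brightness temperature.
      lam, depth, rp_rs, T_star = 8e-6, 0.002, 0.1, 5000.0
      B_planet = depth / rp_rs**2 * planck(lam, T_star)
      print(brightness_temperature(lam, B_planet))   # ~1560 K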

  17. Multiple Waveband Temperature Sensor (MWTS)

    NASA Technical Reports Server (NTRS)

    Bandara, Sumith V.; Gunapala, Sarath; Wilson, Daniel; Stirbl, Robert; Blea, Anthony; Harding, Gilbert

    2006-01-01

    This slide presentation reviews the development of the Multiple Waveband Temperature Sensor (MWTS). The MWTS project will produce a highly stable, monolithically integrated, high-resolution infrared detector array sensor that records registered thermal imagery in four infrared wavebands to infer dynamic temperature profiles on a laser-irradiated ground target. An accurate, non-intrusive surface temperature measurement of a target in extreme environments is required. The development challenges are to determine optimum wavebands (suited to the target temperatures, the nature of the targets, and the environments) for measuring accurate target surface temperature independent of the emissivity; to integrate simultaneously readable multiband Quantum Well Infrared Photodetectors (QWIPs) in a single monolithic focal plane array (FPA) sensor; and to integrate the hardware/software and system calibration for remote temperature measurements. The charge was therefore to develop and demonstrate a multiband infrared imaging camera with detectors simultaneously sensitive to multiple distinct color bands for front-surface temperature measurements. Among the requirements are that the measurement system must not affect target dynamics or the response to laser irradiation, and that the simplest criterion for spectral band selection is to choose practically feasible spectral bands that create the most contrast between the objects or scenes of interest under the expected environmental conditions. The presentation also reviews the modeling and simulation of multi-waveband infrared temperature measurement, the detector development, and QWIP capabilities.
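
    The emissivity-independent measurement can be illustrated with two-colour (ratio) pyrometry: for a gray body the emissivity cancels in the ratio of radiances in two wavebands, leaving temperature as the only unknown. A sketch under that gray-body assumption, not the MWTS band-selection procedure itself:

      import numpy as np
      from scipy.optimize import brentq

      h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

      def planck(lam, T):
          return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

      def two_color_temperature(lam1, lam2, ratio):
          # Solve B(lam1, T) / B(lam2, T) = ratio for T; a gray-body
          # emissivity divides out of the measured band ratio.
          f = lambda T: planck(lam1, T) / planck(lam2, T) - ratio
          return brentq(f, 200.0, 5000.0)

      # Round-trip check with a simulated 4 um / 5 um band ratio.
      r = planck(4e-6, 1200.0) / planck(5e-6, 1200.0)
      print(two_color_temperature(4e-6, 5e-6, r))   # -> ~1200.0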

  18. Multi-band morpho-Spectral Component Analysis Deblending Tool (MuSCADeT): Deblending colourful objects

    NASA Astrophysics Data System (ADS)

    Joseph, R.; Courbin, F.; Starck, J.-L.

    2016-05-01

    We introduce a new algorithm for colour separation and deblending of multi-band astronomical images, called MuSCADeT, which is based on Morpho-spectral Component Analysis of multi-band images. The MuSCADeT algorithm takes advantage of the sparsity of astronomical objects in morphological dictionaries such as wavelets and of their differences in spectral energy distribution (SED) across multi-band observations. This allows us to devise a model-independent and automated approach to separate objects with different colours. We show with simulations that we are able to separate highly blended objects and that our algorithm is robust against SED variations of objects across the field of view. To confront our algorithm with real data, we use HST images of the strong-lensing galaxy cluster MACS J1149+2223 and show that MuSCADeT performs better than traditional profile-fitting techniques in deblending the foreground lensing galaxies from background lensed galaxies. Although the main driver for our work is the deblending of strong gravitational lenses, our method is suited to any deblending task involving objects in astronomical images. An example of such an application is the separation of the red and blue stellar populations of a spiral galaxy in the galaxy cluster Abell 2744. We provide a python package along with all simulations and routines used in this paper to contribute to reproducible research efforts. Codes can be found at http://lastro.epfl.ch/page-126973.html

  19. Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.

    2009-01-01

    For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low Dynamic Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
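
    The core idea, a gain that is a saturating non-linear function of local detail energy, fits in a few lines. The sketch below assumes a Gaussian band-pass decomposition and made-up parameter values; it illustrates the principle rather than the authors' exact filter chain:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def local_contrast_enhance(img, sigmas=(1, 2, 4, 8),
                                 strength=1.5, knee=0.02):
          # img: float image in [0, 1]. Faint detail (low energy) receives
          # a gain near 1 + strength; strong edges (high energy) keep a
          # gain near 1, which limits ringing ("halo") artifacts.
          base = img.astype(float)
          details = np.zeros_like(base)
          for s in sigmas:
              blurred = gaussian_filter(base, s)
              layer = base - blurred                   # band-pass layer
              energy = gaussian_filter(layer**2, s)    # local detail energy
              gain = 1.0 + strength / (1.0 + energy / knee**2)
              details += gain * layer
              base = blurred
          return np.clip(base + details, 0.0, 1.0)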

  20. MIPS - The Multiband Imaging Photometer for SIRTF

    NASA Technical Reports Server (NTRS)

    Rieke, G. H.; Lada, C.; Lebofsky, M.; Low, F.; Strittmatter, P.; Young, E.; Beichman, C.; Gautier, T. N.; Mould, J.; Werner, M.

    1986-01-01

    The Multiband Imaging Photometer System (MIPS) for SIRTF is to be designed to reach as closely as possible the fundamental sensitivity and angular resolution limits for SIRTF over the 3 to 700 microns spectral region. It will use high performance photoconductive detectors from 3 to 200 microns with integrating JFET amplifiers. From 200 to 700 microns, the MIPS will use a bolometer cooled by an adiabatic demagnetization refrigerator. Over much of its operating range, the MIPS will make possible observations at and beyond the conventional Rayleigh diffraction limit of angular resolution.

  1. VizieR Online Data Catalog: Antennae galaxies (NGC 4038/4039) revisited (Whitmore+, 2010)

    NASA Astrophysics Data System (ADS)

    Whitmore, B. C.; Chandar, R.; Schweizer, F.; Rothberg, B.; Leitherer, C.; Rieke, M.; Rieke, G.; Blair, W. P.; Mengel, S.; Alonso-Herrero, A.

    2012-06-01

    Observations of the main bodies of NGC 4038/39 were made with the Hubble Space Telescope (HST), using the ACS, as part of Program GO-10188. Multi-band photometry was obtained in the following optical broadband filters: F435W (~B), F550M (~V), and F814W (~I). Archival F336W photometry of the Antennae (Program GO-5962) was used to supplement our optical ACS/WFC observations. Infrared observations were made using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) camera on HST as part of Program GO-10188. Observations were made using the NIC2 camera with the F160W, F187N, and F237M filters, and the NIC3 camera with the F110W, F160W, F164W, F187N, and F222M filters. (10 data files).

  2. Corn and sorghum phenotyping using a fixed-wing UAV-based remote sensing system

    NASA Astrophysics Data System (ADS)

    Shi, Yeyin; Murray, Seth C.; Rooney, William L.; Valasek, John; Olsenholler, Jeff; Pugh, N. Ace; Henrickson, James; Bowden, Ezekiel; Zhang, Dongyan; Thomasson, J. Alex

    2016-05-01

    Recent development of unmanned aerial systems has created opportunities for automating field-based high-throughput phenotyping by lowering flight operational cost and complexity and by allowing flexible revisit times and higher image resolution than satellite or manned airborne remote sensing. In this study, flights were conducted over corn and sorghum breeding trials in College Station, Texas, with a fixed-wing unmanned aerial vehicle (UAV) carrying two multispectral cameras and a high-resolution digital camera. The objectives were to establish the workflow and investigate the ability of UAV-based remote sensing to automate the collection of plant-trait data for developing genetic and physiological models. Most important among these traits were plant height and plant population, which are currently collected manually at high labor cost. Vegetation indices were calculated for each breeding cultivar from mosaicked and radiometrically calibrated multi-band imagery in order to be correlated with ground-measured plant heights, populations and yield across high genetic-diversity breeding cultivars. Growth curves were profiled from the aerially measured time-series height and vegetation index data. The next step of this study will be to investigate the correlations between aerial measurements and ground truth measured manually in the field and from lab tests.
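
    The paper does not name the specific vegetation indices used; a typical example is NDVI, computed per pixel from calibrated red and near-infrared bands and then averaged per breeding plot:

      import numpy as np

      def ndvi(nir, red):
          # Normalised Difference Vegetation Index from reflectance bands;
          # plot-level means can be regressed against ground-measured
          # height, stand count and yield.
          nir = nir.astype(float)
          red = red.astype(float)
          return (nir - red) / (nir + red + 1e-9)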

  3. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practical and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method and has produced good results in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). To reduce the influence of noise, this paper proposes a matrix pencil algorithm to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted from the relation between corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.

  4. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practical and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method and has produced good results in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). To reduce the influence of noise, this paper proposes a matrix pencil algorithm to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted from the relation between corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194
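
    The pole-estimation step can be sketched with the standard SVD-based matrix pencil method; the paper's subband ICP correction and iterative fusion are not reproduced here:

      import numpy as np

      def matrix_pencil_poles(x, M, L=None):
          # x: complex samples; M: model order; L: pencil parameter.
          N = len(x)
          L = L or N // 3
          Y = np.array([x[i:i + L + 1] for i in range(N - L)])  # Hankel
          _, _, Vh = np.linalg.svd(Y, full_matrices=False)
          V = Vh.conj().T[:, :M]         # dominant right singular vectors
          V1, V2 = V[:-1, :], V[1:, :]   # shifted subspace pencil
          return np.linalg.eigvals(np.linalg.pinv(V1) @ V2)

      # Toy check: the poles of two noiseless exponentials are recovered.
      n = np.arange(100)
      sig = np.exp(1j * 0.3 * n) + 0.5 * np.exp(1j * 1.1 * n)
      print(np.sort(np.angle(matrix_pencil_poles(sig, 2))))  # ~[0.3, 1.1]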

  5. Simultaneous Multislice Accelerated Free-Breathing Diffusion-Weighted Imaging of the Liver at 3T.

    PubMed

    Obele, Chika C; Glielmi, Christopher; Ream, Justin; Doshi, Ankur; Campbell, Naomi; Zhang, Hoi Cheung; Babb, James; Bhat, Himanshu; Chandarana, Hersh

    2015-10-01

    To compare image quality between an accelerated multiband diffusion acquisition (mb2-DWI) and a conventional diffusion acquisition (c-DWI) in patients undergoing clinically indicated liver MRI. In this prospective study, 22 consecutive patients undergoing clinically indicated liver MRI on a 3-T scanner equipped to perform multiband diffusion-weighted imaging (mb-DWI) were included. DWI was performed with a single-shot spin-echo echo-planar technique with fat suppression in free breathing, with matching parameters when possible, using c-DWI, mb-DWI, and multiband DWI with twofold acceleration (mb2-DWI). These diffusion sequences were compared with respect to various parameters of image quality, lesion detectability, and liver ADC measurements. Accelerated mb2-DWI was 40.9% faster than c-DWI (88 vs. 149 s). Various image quality parameter scores were similar or higher on mb2-DWI when compared to c-DWI. The overall image quality score (averaged over the three readers) was significantly higher for mb2-DWI compared to c-DWI for b = 0 s/mm(2) (3.48 ± 0.52 vs. 3.21 ± 0.54; p = 0.001) and for b = 800 s/mm(2) (3.24 ± 0.76 vs. 3.06 ± 0.86; p = 0.010). A total of 25 hepatic lesions were visible on mb2-DWI and c-DWI, with identical lesion detectability. There was no significant difference in liver ADC between mb2-DWI and c-DWI (p = 0.12). A Bland-Altman plot demonstrated lower mean liver ADC with mb2-DWI compared to c-DWI (by 0.043 × 10(-3) mm(2)/s, or 3.7% of the average ADC). The multiband technique can be used to nearly double acquisition speed for free-breathing DWI of the liver with similar or improved overall image quality and similar lesion detectability compared to conventional DWI.
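
    For reference, the mono-exponential ADC computation behind these comparisons is a one-liner; the ROI signal values below are hypothetical:

      import numpy as np

      # S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / Sb) / b
      b = 800.0                      # s/mm^2, matching the protocol above
      S0, Sb = 240.0, 95.0           # made-up mean liver ROI signals
      adc = np.log(S0 / Sb) / b
      print(f"ADC = {adc:.2e} mm^2/s")   # ~1.16e-3 mm^2/s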

  6. SIRTF Tools for DIRT

    NASA Astrophysics Data System (ADS)

    Pound, M. W.; Wolfire, M. G.; Amarnath, N. S.

    2003-12-01

    The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 5 years. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. We are adding new functionality to DIRT to support new missions like SIRTF and SOFIA. A new Instrument module allows for plotting of the model points convolved with the spatial and spectral responses of the selected instrument. This lets users better fit data from specific instruments. Currently, we have implemented modules for the Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) on SIRTF.

  7. VizieR Online Data Catalog: Young star forming region NGC 2264 Spitzer sources (Rapson+, 2014)

    NASA Astrophysics Data System (ADS)

    Rapson, V. A.; Pipher, J. L.; Gutermuth, R. A.; Megeath, S. T.; Allen, T. S.; Myers, P. C.; Allen, L. E.

    2017-05-01

    We utilize 3.6-8.0 um images of Mon OB1 East obtained with the Spitzer Space Telescope Infrared Array Camera (IRAC; Fazio et al. 2004ApJS..154...10F), 24 um images obtained with the Multi-Band Imaging Photometer (MIPS; Rieke et al. 2004ApJS..154...25R), along with 1-2.5 um NIR data from the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006AJ....131.1163S, Cat. VII/233) to classify YSOs. These YSOs in Mon OB1 East are classified as either protostars or stars with circumstellar disks by their infrared excess emission above photospheric emission. Spitzer data were gathered as part of two Guaranteed Time Observation programs and one additional program with the goal of studying clustered and distributed star formation throughout Mon OB1 East and comparing the results with those of other molecular clouds. Mon OB1 East was observed by Spitzer in 2004, 2007, and 2008 as part of the Guaranteed Time Observation programs 37 (IRAC data; PI: G. Fazio) and 58 (MIPS data; PI: G. Rieke), as well as program 40006 (IRAC+MIPS data; PI: G. Fazio). (1 data file).

  8. The Hyper Suprime-Cam SSP Survey: Overview and survey design

    NASA Astrophysics Data System (ADS)

    Aihara, Hiroaki; Arimoto, Nobuo; Armstrong, Robert; Arnouts, Stéphane; Bahcall, Neta A.; Bickerton, Steven; Bosch, James; Bundy, Kevin; Capak, Peter L.; Chan, James H. H.; Chiba, Masashi; Coupon, Jean; Egami, Eiichi; Enoki, Motohiro; Finet, Francois; Fujimori, Hiroki; Fujimoto, Seiji; Furusawa, Hisanori; Furusawa, Junko; Goto, Tomotsugu; Goulding, Andy; Greco, Johnny P.; Greene, Jenny E.; Gunn, James E.; Hamana, Takashi; Harikane, Yuichi; Hashimoto, Yasuhiro; Hattori, Takashi; Hayashi, Masao; Hayashi, Yusuke; Hełminiak, Krzysztof G.; Higuchi, Ryo; Hikage, Chiaki; Ho, Paul T. P.; Hsieh, Bau-Ching; Huang, Kuiyun; Huang, Song; Ikeda, Hiroyuki; Imanishi, Masatoshi; Inoue, Akio K.; Iwasawa, Kazushi; Iwata, Ikuru; Jaelani, Anton T.; Jian, Hung-Yu; Kamata, Yukiko; Karoji, Hiroshi; Kashikawa, Nobunari; Katayama, Nobuhiko; Kawanomoto, Satoshi; Kayo, Issha; Koda, Jin; Koike, Michitaro; Kojima, Takashi; Komiyama, Yutaka; Konno, Akira; Koshida, Shintaro; Koyama, Yusei; Kusakabe, Haruka; Leauthaud, Alexie; Lee, Chien-Hsiu; Lin, Lihwai; Lin, Yen-Ting; Lupton, Robert H.; Mandelbaum, Rachel; Matsuoka, Yoshiki; Medezinski, Elinor; Mineo, Sogo; Miyama, Shoken; Miyatake, Hironao; Miyazaki, Satoshi; Momose, Rieko; More, Anupreeta; More, Surhud; Moritani, Yuki; Moriya, Takashi J.; Morokuma, Tomoki; Mukae, Shiro; Murata, Ryoma; Murayama, Hitoshi; Nagao, Tohru; Nakata, Fumiaki; Niida, Mana; Niikura, Hiroko; Nishizawa, Atsushi J.; Obuchi, Yoshiyuki; Oguri, Masamune; Oishi, Yukie; Okabe, Nobuhiro; Okamoto, Sakurako; Okura, Yuki; Ono, Yoshiaki; Onodera, Masato; Onoue, Masafusa; Osato, Ken; Ouchi, Masami; Price, Paul A.; Pyo, Tae-Soo; Sako, Masao; Sawicki, Marcin; Shibuya, Takatoshi; Shimasaku, Kazuhiro; Shimono, Atsushi; Shirasaki, Masato; Silverman, John D.; Simet, Melanie; Speagle, Joshua; Spergel, David N.; Strauss, Michael A.; Sugahara, Yuma; Sugiyama, Naoshi; Suto, Yasushi; Suyu, Sherry H.; Suzuki, Nao; Tait, Philip J.; Takada, Masahiro; Takata, Tadafumi; Tamura, Naoyuki; Tanaka, Manobu M.; Tanaka, Masaomi; Tanaka, Masayuki; Tanaka, Yoko; Terai, Tsuyoshi; Terashima, Yuichi; Toba, Yoshiki; Tominaga, Nozomu; Toshikawa, Jun; Turner, Edwin L.; Uchida, Tomohisa; Uchiyama, Hisakazu; Umetsu, Keiichi; Uraguchi, Fumihiro; Urata, Yuji; Usuda, Tomonori; Utsumi, Yousuke; Wang, Shiang-Yu; Wang, Wei-Hao; Wong, Kenneth C.; Yabe, Kiyoto; Yamada, Yoshihiko; Yamanoi, Hitomi; Yasuda, Naoki; Yeh, Sherry; Yonehara, Atsunori; Yuma, Suraphong

    2018-01-01

    Hyper Suprime-Cam (HSC) is a wide-field imaging camera on the prime focus of the 8.2-m Subaru telescope on the summit of Mauna Kea in Hawaii. A team of scientists from Japan, Taiwan, and Princeton University is using HSC to carry out a 300-night multi-band imaging survey of the high-latitude sky. The survey includes three layers: the Wide layer will cover 1400 deg2 in five broad bands (grizy), with a 5 σ point-source depth of r ≈ 26. The Deep layer covers a total of 26 deg2 in four fields, going roughly a magnitude fainter, while the UltraDeep layer goes almost a magnitude fainter still in two pointings of HSC (a total of 3.5 deg2). Here we describe the instrument, the science goals of the survey, and the survey strategy and data processing. This paper serves as an introduction to a special issue of the Publications of the Astronomical Society of Japan, which includes a large number of technical and scientific papers describing results from the early phases of this survey.

  9. Young Stars Emerge from Orion Head

    NASA Image and Video Library

    2007-05-17

    This image from NASA's Spitzer Space Telescope shows infant stars "hatching" in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth. The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's "head," just north of the massive star Lambda Orionis. Wisps of green in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked. Tints of orange-red in the cloud are dust particles warmed by the newly forming stars. The reddish-pink dots at the top of the cloud are very young stars embedded in a cocoon of cosmic gas and dust. Blue spots throughout the image are background Milky Way along this line of sight. This composite includes data from Spitzer's infrared array camera instrument, and multiband imaging photometer instrument. Light at 4.5 microns is shown as blue, 8.0 microns is green, and 24 microns is red. http://photojournal.jpl.nasa.gov/catalog/PIA09411

  10. Young Stars Emerge from Orion's Head

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This image from NASA's Spitzer Space Telescope shows infant stars 'hatching' in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth.

    The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's 'head,' just north of the massive star Lambda Orionis.

    Wisps of green in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked.

    Tints of orange-red in the cloud are dust particles warmed by the newly forming stars. The reddish-pink dots at the top of the cloud are very young stars embedded in a cocoon of cosmic gas and dust. Blue spots throughout the image are background Milky Way along this line of sight.

    This composite includes data from Spitzer's infrared array camera and multiband imaging photometer instruments. Light at 4.5 microns is shown as blue, 8.0 microns is green, and 24 microns is red.

  11. SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Collioud, A.; Charlot, P.

    2018-02-01

    We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.
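
    In the linear case, estimating a component's proper motion reduces to a straight-line fit of position against epoch; a minimal sketch (the STRIP algorithm additionally detects non-linear tracks):

      import numpy as np

      def fit_proper_motion(t, x, y):
          # Least-squares linear fit of a jet component's (x, y) positions
          # over epochs; the slopes are the proper motions per coordinate.
          mu_x, x0 = np.polyfit(t, x, 1)
          mu_y, y0 = np.polyfit(t, y, 1)
          return (mu_x, mu_y), (x0, y0)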

  12. Noise performance of the multiwavelength sub/millimeter inductance camera (MUSIC) detectors

    NASA Astrophysics Data System (ADS)

    Siegel, S. R.

    2015-07-01

    MUSIC is a multi-band imaging camera that employs 2304 Microwave Kinetic Inductance Detectors (MKIDs) in 576 spatial pixels to cover a 14 arc-minute field of view, with each pixel simultaneously sensitive to 4 bands centered at 0.87, 1.04, 1.33, and 1.98 mm. In April 2012 the MUSIC instrument was commissioned at the Caltech Submillimeter Observatory with a subset of the full focal plane. We examine the noise present in the detector timestreams during observations taken in the first year of operation. We find that fluctuations in atmospheric emission dominate on long timescales (frequencies below 0.5 Hz), and fluctuations in the amplitude and phase of the probe signal due to readout electronics contribute significant 1/f-type noise on shorter timescales. We describe a method to remove the amplitude, phase, and atmospheric noise using the fact that they are correlated among carrier tones. After removal, the complex signal is decomposed, or projected, into dissipation and frequency components. White noise from the cryogenic HEMT amplifier dominates in the dissipation component. An excess noise is observed in the frequency component that is likely due to fluctuations in two-level system (TLS) defects in the device substrate. We compare the amplitude of the TLS noise with previous measurements.
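
    A minimal sketch of the correlated-noise removal idea, not the full dissipation/frequency projection: build a common-mode template from all tones and subtract the best-fit scaled copy from each timestream:

      import numpy as np

      def remove_common_mode(timestreams):
          # timestreams: (n_tones, n_samples) array. The mean over tones
          # serves as the common-mode (atmosphere/readout) template.
          template = timestreams.mean(axis=0)
          tc = template - template.mean()
          cleaned = np.empty_like(timestreams)
          for i, ts in enumerate(timestreams):
              coupling = np.dot(ts - ts.mean(), tc) / np.dot(tc, tc)
              cleaned[i] = ts - coupling * template
          return cleaned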

  13. Evaluation of nine-frame enhanced multiband photography San Andreas fault zone, Carrizo Plain, California

    NASA Technical Reports Server (NTRS)

    Wallace, R. E.

    1969-01-01

    Nine-frame multiband aerial photography of a sample area 4500 feet on a side was processed to enhance spectral contrasts. The area concerned is in the Carrizo Plain, 45 miles west of Bakersfield, California, in sec. 29, T 31 S., R. 21 E., as shown on the Panorama Hills quadrangle topographic map published by the U. S. Geological Survey. The accompanying illustrations include an index map showing the location of the Carrizo Plain area; a geologic map of the area based on field studies and examination of black and white aerial photographs; an enhanced multiband aerial photograph; an Aero Ektachrome photograph; black and white aerial photographs; and an infrared image in the 8-13 micron band.

  14. VizieR Online Data Catalog: KiDS-ESO-DR3 multi-band source catalog (de Jong+, 2017)

    NASA Astrophysics Data System (ADS)

    de Jong, J. T. A.; Verdoes Kleijn, G. A.; Erben, T.; Hildebrandt, H.; Kuijken, K.; Sikkema, G.; Brescia, M.; Bilicki, M.; Napolitano, N. R.; Amaro, V.; Begeman, K. G.; Boxhoorn, D. R.; Buddelmeijer, H.; Cavuoti, S.; Getman, F.; Grado, A.; Helmich, E.; Huang, Z.; Irisarri, N.; La Barbera, F.; Longo, G.; McFarland, J. P.; Nakajima, R.; Paolillo, M.; Puddu, E.; Radovich, M.; Rifatto, A.; Tortora, C; Valentijn, E. A.; Vellucci, C.; Vriend, W-J.; Amon, A.; Blake, C.; Choi, A.; Fenech, Conti I.; Herbonnet, R.; Heymans, C.; Hoekstra, H.; Klaes, D.; Merten, J.; Miller, L.; Schneider, P.; Viola, M.

    2017-04-01

    KiDS-ESO-DR3 contains a multi-band source catalogue encompassing all 440 publicly released survey tiles, as well as the coadded images, weight maps, masks and source lists of the 292 survey tiles newly released in KiDS-ESO-DR3, adding to the 148 tiles released previously (50 in KiDS-ESO-DR1 and 98 in KiDS-ESO-DR2). (1 data file).

  15. Kepler Supernova Remnant: A View from Spitzer Space Telescope

    NASA Image and Video Library

    2004-10-06

    This Spitzer false-color image is a composite of data from the 24 micron channel of Spitzer's multiband imaging photometer (red), and three channels of its infrared array camera: 8 micron (yellow), 5.6 micron (blue), and 4.8 micron (green). Stars are most prominent in the two shorter wavelengths, causing them to show up as turquoise. The supernova remnant is most prominent at 24 microns, arising from dust that has been heated by the supernova shock wave and re-radiated in the infrared. The 8 micron data show infrared emission from regions closely associated with the optically emitting regions. These are the densest regions being encountered by the shock wave, and probably arose from condensations in the surrounding material that was lost by the supernova star before it exploded. The composites referenced above (PIA06908, PIA06909, and PIA06910) represent views of Kepler's supernova remnant taken in X-rays, visible light, and infrared radiation. The top panel in each composite shows the entire remnant. Each color represents a different region of the electromagnetic spectrum, from X-rays to infrared light. The X-ray and infrared data cannot be seen with the human eye. Astronomers have color-coded those data so they can be seen in these images. http://photojournal.jpl.nasa.gov/catalog/PIA06910

  16. Automatic optical inspection of regular grid patterns with an inspection camera used below the Shannon-Nyquist criterion for optical resolution

    NASA Astrophysics Data System (ADS)

    Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.

    2017-02-01

    An Automatic Optical Inspection (AOI) system for imaging devices used in the automotive industry is described, in which the inspecting optics have lower spatial resolution than the device under inspection. The system is robust, has no moving parts, and has a short cycle time. Its main advantage is that it can detect and quantify defects in regular patterns while working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, which is an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially suited for regular patterns, such as those produced in image displays and Head Up Displays (HUDs). The regular patterns are used in the production line only, for inspection purposes. For image displays, functional defects are detected at the level of a sub-image display grid element unit. Functional defects are the ones impairing the function of the display, and are preferred in AOI to direct geometric imaging, since they are the ones directly related to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since it is what allows quantitative inspection below Shannon-Nyquist. For HUDs, the functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.
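
    The notch Fourier-filter idea can be sketched in a few lines: a strictly periodic pattern concentrates its energy at the grid's fundamental spatial frequency and its harmonics, so zeroing those bins leaves a residual image in which defects stand out. The toy example below assumes a purely vertical sinusoidal grid of known period, which is our simplification rather than the paper's setup.

```python
# Toy notch Fourier filter: remove a regular grid's harmonics so that a
# localized defect dominates the residual image.
import numpy as np

n = 256
x = np.tile(np.arange(n), (n, 1))
grid = 0.5 * (1.0 + np.sin(2 * np.pi * x / 8))   # regular pattern, period 8 px
grid[100:104, 120:124] = 0.0                     # small localized defect

spec = np.fft.fft2(grid)
fx = np.abs(np.fft.fftfreq(n))
# Notch the DC term and every harmonic of the grid frequency (1/8 cycles/px)
notch = np.isclose(fx % 0.125, 0.0, atol=1e-9)
spec[:, notch] = 0.0
residual = np.abs(np.fft.ifft2(spec))

# The defect survives the notch filter and dominates the residual image
print("defect peak near:", np.unravel_index(residual.argmax(), residual.shape))
```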

  17. Deep-learning derived features for lung nodule classification with limited datasets

    NASA Astrophysics Data System (ADS)

    Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.

    2018-02-01

    Only a few percent of indeterminate nodules found in lung CT images are cancer. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance for those benign nodules. We are evaluating a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach of feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to different size lesions without interpolation, and performed well with a relatively small amount of training data.
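
    The Triplet Network component trains an embedding with a triplet objective: an anchor nodule is pushed closer to a same-class "positive" than to a different-class "negative" by at least a margin. A minimal sketch of that loss (names, margin, and embedding size are illustrative, not taken from the paper) is:

```python
# Hinge-style triplet loss on embedding vectors, as used to learn
# discriminative features; the 64-d vectors stand in for CNN embeddings.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Penalize anchors that sit closer to the negative than the positive."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to same class
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to other class
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(1)
a, p, n = rng.normal(size=(3, 64))             # stand-in embeddings
print(triplet_loss(a, p, n))
```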

  18. Visual enhancement of unmixed multispectral imagery using adaptive smoothing

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2004-01-01

    Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process and results in a gray-scale image. This paper discusses modifications to the AS method for application to multi-band data, which result in a color segmented image. The process was used to visually enhance the three most distinct abundance-fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which is important for subsequent data-classification operations.
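
    The underlying anisotropic diffusion process can be sketched as follows: at each iteration every pixel moves toward its neighbors with a conductance that shrinks where the local gradient is large, so uniform regions are smoothed while contrast edges are retained. The minimal gray-scale (Perona-Malik-style) implementation below uses periodic borders and illustrative parameters; the AS method modifies this basic scheme for multi-band data.

```python
# Basic Perona-Malik anisotropic diffusion on a gray-scale image.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Smooth uniform regions while preserving contrast edges."""
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four neighbors (periodic borders for brevity)
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Edge-stopping conductance: small where gradients are large
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += step * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return img

# Step edge plus noise: the edge survives, the noise is smoothed away
noisy = np.where(np.arange(64)[None, :] < 32, 0.0, 1.0)
noisy = noisy + np.random.default_rng(2).normal(0, 0.05, (64, 64))
smoothed = anisotropic_diffusion(noisy)
```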

  19. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band realtime night vision systems. One system provides co-aligned visual and near-infrared bands from two image intensifiers; the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a realtime lookup-table transform. The resulting colorized video streams can be displayed in realtime on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
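
    The lookup-table transform is what makes the method real-time: colorization reduces to one table lookup per pixel. The sketch below builds a made-up smooth two-band-to-RGB table rather than one derived from a daytime reference image, so it only illustrates the mechanics, not the paper's color mapping.

```python
# Two-band lookup-table colorization: each (visual, LWIR) intensity pair
# indexes a precomputed RGB table, so the per-frame cost is a single lookup.
import numpy as np

levels = 64                                    # quantization per band
lut = np.zeros((levels, levels, 3), np.uint8)  # (visual, LWIR) -> RGB
v, t = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
lut[..., 0] = (255 * t / (levels - 1)).astype(np.uint8)        # warm -> red
lut[..., 1] = (255 * v / (levels - 1)).astype(np.uint8)        # bright -> green
lut[..., 2] = (255 * (1 - v / (levels - 1))).astype(np.uint8)  # dark -> blue

def colorize(visual, lwir):
    """Map co-aligned dual-band frames (floats in [0, 1]) to an RGB image."""
    iv = np.clip((visual * (levels - 1)).astype(int), 0, levels - 1)
    it = np.clip((lwir * (levels - 1)).astype(int), 0, levels - 1)
    return lut[iv, it]

rgb = colorize(np.random.rand(240, 320), np.random.rand(240, 320))
```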

  20. ARC-1988-AC88-0595

    NASA Image and Video Library

    1988-10-07

    Artist: Rick Guidice. SIRTF artwork update - cutaway view. The Space Infrared Telescope Facility (SIRTF) will orbit at 900 kilometers aboard a platform-type spacecraft providing power, pointing, and communications to Earth. The telescope and its infrared instruments will reside within a cylindrical cryogen tank. The hollow walls of the tank will contain the superfluid helium that cools the telescope to its operating temperature, a few degrees above absolute zero. SIRTF will carry three versatile instruments to analyze the radiation it collects: the Multiband Imaging Photometer, the Infrared Array Camera, and the Infrared Spectrograph. SIRTF's long lifetime - 5 years or more - will permit astronomers of all disciplines to use the facility to carry out a wide variety of astrophysical programs. It will provide ongoing coverage of variable objects, such as quasars, as well as the capability to study rare and transient events such as comets and supernovae. SIRTF's long lifetime will also allow it to distinguish nearby objects by detecting their gradual motions relative to the more distant background stars.

  1. A multi-sensor land mine detection system: hardware and architectural outline of the Australian RRAMNS CTD system

    NASA Astrophysics Data System (ADS)

    Abeynayake, Canicious; Chant, Ian; Kempinger, Siegfried; Rye, Alan

    2005-06-01

    The Rapid Route Area and Mine Neutralisation System (RRAMNS) Capability Technology Demonstrator (CTD) is a countermine detection project undertaken by DSTO and supported by the Australian Defence Force (ADF). The limited time and budget for this CTD resulted in some difficult strategic decisions with regard to hardware selection and system architecture. Although the delivered system has certain limitations arising from its experimental status, many lessons have been learned which illustrate a pragmatic path for future development. RRAMNS has a sensor suite similar to other systems, in that three complementary sensors are included: a Ground Probing Radar, a Metal Detector Array, and multi-band electro-optic sensors. However, RRAMNS uses a unique imaging system and a network-based real-time control and sensor fusion architecture. The relatively simple integration of each of these components could be the basis for a robust and cost-effective operational system. The RRAMNS imaging system consists of three cameras which cover the visible spectrum and the mid-wave and long-wave infrared regions. This subsystem can be used separately as a scouting sensor. This paper describes the system at its mid-2004 status, when full integration of all detection components was achieved.

  2. VizieR Online Data Catalog: Candidate eruptive young stars in Lynds 1340 (Kun+, 2014)

    NASA Astrophysics Data System (ADS)

    Kun, M.; Apai, D.; O'Linger-Luscusk, J.; Moor, A.; Stecklum, B.; Szegedi-Elek, E.; Wolf-Chase, G.

    2016-07-01

    Lynds 1340 was observed by the Spitzer Space Telescope using the Infrared Array Camera (IRAC) on 2009 March 16 and the Multiband Imaging Photometer for Spitzer (MIPS) on 2008 November 26 (Prog. ID: 50691, PI: G. Fazio). The observations covered ~1deg2 in each band. We obtained low-resolution optical spectra for the star coinciding with IRAS 02224+7227 on 2003 February 5 using CAFOS with the G-100 grism on the 2.2m Telescope of the Calar Alto Observatory, and on 2004 December 11 using FAST on the 1.5m FLWO Telescope. High angular resolution JHK images, centered on the same star, were obtained on 2002 October 24 using the near-infrared camera Omega-Cass, mounted on the 3.5m Telescope of the Calar Alto Observatory. We performed a new search for Hα emission stars in L1340 using the Wide Field Grism Spectrograph 2 installed on the University of Hawaii 2.2m Telescope. We observed 2MASS J02263797+7304575 on 2011 October 16 and detected Hα emission with EW(Hα) = -80Å in its spectrum. The Ks magnitude of 2MASS J02325605+7246055 was measured on images obtained on 2010 October 18, during the monitoring program of V1180 Cas (Kun et al. 2011, J/ApJ/733/L8), using the MAGIC camera on the 2.2m Telescope of the Calar Alto Observatory. Narrow-band images through [SII] and Hα filters, as well as broad R-band images containing the environment of 2MASS J02325605+7246055, were obtained with the Schmidt Telescope of the Thuringer Landessternwarte (TLS), Tautenburg in 2011 May, June, and September. Spectra of the nebula and the two brightest HH knots were obtained using the TLS medium-resolution Nasmyth spectrograph (R~700) in 2011 November. BVR_CI_C photometric observations of IRAS 02224+7227 were performed with the 1m Ritchey-Chretien-Coude (RCC) Telescope of the Konkoly Observatory at three epochs between 2001 and 2011. We measured the R_C and I_C magnitudes of IRAS 02224+7227 and 2MASS J02263797+7304575 at several epochs between 2011 January and 2014 June on images collected with the wide-field camera on the Schmidt Telescope of the Konkoly Observatory to monitor the light variations of V1180 Cas (Kun et al. 2011, J/ApJ/733/L8). L1340 is situated within Stripe 1260 of the SEGUE survey (Yanny et al. 2009, J/AJ/137/4377), thus its entire area was observed in the ugriz bands in 2005 November-December. Each target star has high-quality 3.4, 4.6, 12, and 22um fluxes in the AllWISE database. (1 data file).

  3. Primordial environment of supermassive black holes. II. Deep Y- and J-band images around the z ~ 6.3 quasar SDSS J1030+0524

    NASA Astrophysics Data System (ADS)

    Balmaverde, B.; Gilli, R.; Mignoli, M.; Bolzonella, M.; Brusa, M.; Cappelluti, N.; Comastri, A.; Sani, E.; Vanzella, E.; Vignali, C.; Vito, F.; Zamorani, G.

    2017-10-01

    Many cosmological studies predict that early supermassive black holes (SMBHs) can only form in the most massive dark matter halos embedded within large-scale structures marked by galaxy overdensities that may extend up to 10 physical Mpc. This scenario, however, has not been confirmed observationally, as the search for galaxy overdensities around high-z quasars has returned conflicting results. The field around the z = 6.31 quasar SDSSJ1030+0524 (J1030) is unique for its multi-band coverage and represents an excellent data legacy for studying the environment around a primordial SMBH. In this paper we present wide-area (~25' × 25') Y- and J-band imaging of the J1030 field obtained with the near-infrared camera WIRCam at the Canada-France-Hawaii Telescope (CFHT). We built source catalogs in the Y and J bands, and matched those with our photometric catalog in the r, z, and I bands presented in our previous paper, based on sources with zAB < 25.2 detected using z-band images from the Large Binocular Cameras (LBC) at the Large Binocular Telescope (LBT) over the same field of view. We used these new infrared data, together with H and K photometric measurements from the MUlti-wavelength Survey by Yale-Chile (MUSYC) and with the Spitzer Infrared Array Camera (IRAC) data, to refine our selection of Lyman break galaxies (LBGs), extending our selection criteria to galaxies fainter than zAB = 25.2; the resulting LBG overdensity in the field is significant at >4σ. The overdensity value and its significance are higher than those found in our previous paper and we interpret this as evidence of an improved LBG selection.

  4. Lunar and Planetary Science XXXVI, Part 14

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Contents include the following: Destruction of Presolar Silicates by Aqueous Alteration Observed in Murchison CM2 Chondrite. Generation of Chondrule Forming Shock Waves in Solar Nebula by X-Ray Flares. TEM and NanoSIMS Study of Hydrated/Anhydrous Phase Mixed IDPs: Cometary or Asteroidal Origin? Inflight Calibration of Asteroid Multiband Imaging Camera Onboard Hayabusa: Preliminary Results. Corundum and Corundum-Hibonite Grains Discovered by Cathodoluminescence in the Matrix of Acfer 094 Meteorite. Spatial Extent of a Deep Moonquake Nest A Preliminary Report of Reexamination. Modal Abundances of Carbon in Ureilites: Implications for the Petrogenesis of Ureilites. Trapped Noble Gas Components and Exposure History of the Enstatite Chondrite ALH84206. Deep-seated Crustal Material in Dhofar Lunar Meteorites: Evidence from Pyroxene Chemistry. Numerical Investigations of Kuiper Belt Binaries. Dust Devils on Mars: Effects of Surface Roughness on Particle Threshold. Hecates Tholus, Mars: Nighttime Aeolian Activity Suggested by Thermal Images and Mesoscale Atmospheric Model Simulations. Are the Apollo 14 High-Al Basalts Really Impact Melts? Garnet in the Lunar Mantle: Further Evidence from Volcanic Glass Beads. The Earth/Mars Dichotomy in Mg/Si and Al/Si Ratios: Is It Real? Dissecting the Polar Asymmetry in the Non-Condensable Gas Enhancement on Mars: A Numerical Modeling Study. Cassini VIMS Preliminary Exploration of Titan s Surface Hemispheric Albedo Dichotomy. An Improved Instrument for Investigating Planetary Regolith Microstructure. Isotopic Composition of Oxygen in Lunar Zircons Preliminary Design of Visualization Tool for Hayabusa Operation. Size and Shape Distributions of Chondrules and Metal Grains Revealed by X-Ray Computed Tomography Data. Properties of Permanently Shadowed Regolith. Landslides in Interior Layered Deposits, Valles Marineris, Mars: Effects of Water and Ground Shaking on Slope Stability. Mars: Recent and Episodic Volcanic, Hydrothermal, and Glacial Activity Revealed by Mars Express High Resolution Stereo Camera (HRSC). The Cratering Record of the Saturnian Satellites Phoebe, Tethys, Dione and Iapetus in Comparison: First Results from Analysis of the Cassini ISS Imaging Data. Joint Crossover Solutions of Altimetry and Image Data on 433 Eros. The Martian Soil as a Geochemical Sink for.

  5. Remote sensing of vigor loss in conifers due to dwarf mistletoe

    NASA Technical Reports Server (NTRS)

    Meyer, M. P.; French, D. W.; Latham, R. P.; Nelson, C. A.; Douglass, R. W.

    1971-01-01

    The initial operation of a multiband/multidate tower-tramway test site in northeastern Minnesota for the development of specifications for subsequent multiband aerial photography of more extensive study areas was completed. Multiband/multidate configurations suggested by the tower-tramway studies were and will be flown with local equipment over the Togo test site. This site was photographed by the NASA RB57F aircraft in August and September 1971. It appears that, of all the film/filter combinations attempted to date (including optical recombining of several spectral band images via photo enhancement techniques), Ektachrome infrared film with a Wratten 12 filter is the best for detecting dwarf mistletoe, and other tree diseases as well. Using this film/filter combination, infection centers are easily detectable even on the smallest photo scale (1:100,000) obtained on the Togo site.

  6. Hubble and Spitzer Space Telescope Observations of the Debris Disk around the nearby K Dwarf HD 92945

    NASA Astrophysics Data System (ADS)

    Golimowski, D. A.; Krist, J. E.; Stapelfeldt, K. R.; Chen, C. H.; Ardila, D. R.; Bryden, G.; Clampin, M.; Ford, H. C.; Illingworth, G. D.; Plavchan, P.; Rieke, G. H.; Su, K. Y. L.

    2011-07-01

    We present the first resolved images of the debris disk around the nearby K dwarf HD 92945, obtained with the Hubble Space Telescope's (HST's) Advanced Camera for Surveys. Our F606W (Broad V) and F814W (Broad I) coronagraphic images reveal an inclined, axisymmetric disk consisting of an inner ring about 2.0-3.0 arcsec (43-65 AU) from the star and an extended outer disk whose surface brightness declines slowly with increasing radius approximately 3.0-5.1 arcsec (65-110 AU) from the star. A precipitous drop in the surface brightness beyond 110 AU suggests that the outer disk is truncated at that distance. The radial surface-density profile is peaked at both the inner ring and the outer edge of the disk. The dust in the outer disk scatters neutrally but isotropically, and it has a low V-band albedo of 0.1. This combination of axisymmetry, ringed and extended morphology, and isotropic neutral scattering is unique among the 16 debris disks currently resolved in scattered light. We also present new infrared photometry and spectra of HD 92945 obtained with the Spitzer Space Telescope's Multiband Imaging Photometer and InfraRed Spectrograph. These data reveal no infrared excess from the disk shortward of 30 μm and constrain the width of the 70 μm source to ≲180 AU. Assuming that the dust comprises compact grains of astronomical silicate with a surface-density profile described by our scattered-light model of the disk, we successfully model the 24-350 μm emission with a minimum grain size of a_min = 4.5 μm and a size distribution proportional to a^-3.7 throughout the disk, but with maximum grain sizes of 900 μm in the inner ring and 50 μm in the outer disk. Together, our HST and Spitzer observations indicate a total dust mass of ~0.001 M⊕. However, our observations provide contradictory evidence of the dust's physical characteristics: its neutral V-I color and lack of 24 μm emission imply grains larger than a few microns, but its isotropic scattering and low albedo suggest a large population of submicron-sized grains. If grains smaller than a few microns are absent, then stellar radiation pressure may be the cause only if the dust is composed of highly absorptive materials like graphite. The dynamical causes of the sharply edged inner ring and outer disk are unclear, but recent models of dust creation and transport in the presence of migrating planets support the notion that the disk indicates an advanced state of planet formation around HD 92945. Based in part on guaranteed observing time awarded by the National Aeronautics and Space Administration (NASA) to the Advanced Camera for Surveys Investigation Definition Team and the Multiband Imaging Photometer for Spitzer Instrument Team.

  7. 3D Display Using Conjugated Multiband Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; White, Victor E.; Shcheglov, Kirill

    2012-01-01

    Stereoscopic display techniques are based on the principle of displaying two views, with a slightly different perspective, in such a way that the left view is seen only by the left eye and the right view only by the right eye. However, one of the major challenges in optical devices is crosstalk between the two channels. Crosstalk is due to the optical devices not completely blocking the wrong-side image, so the left eye sees a little bit of the right image and the right eye sees a little bit of the left image. This results in eyestrain and headaches. A pair of interference filters worn as an optical device can solve the problem. The device consists of a pair of multiband bandpass filters that are conjugated. The term "conjugated" describes the passband regions of one filter not overlapping with those of the other, but the regions are interdigitated. Along with the glasses, a 3D display produces colors composed of primary colors (the basis for producing colors) having spectral bands the same as the passbands of the filters. More specifically, the primary colors producing one viewpoint will be made up of the passbands of one filter, and those of the other viewpoint will be made up of the passbands of the conjugated filter. Thus, the primary colors of one filter would be seen by the eye that has the matching multiband filter. The inherent characteristic of the interference filter will allow little or no transmission of the wrong-side stereoscopic image.

  8. Multiband selection with linear array detectors

    NASA Technical Reports Server (NTRS)

    Richard, H. L.; Barnes, W. L.

    1985-01-01

    Several techniques that can be used in an earth-imaging system to separate the linear image formed after the collecting optics into the desired spectral band are examined. The advantages and disadvantages of the Multispectral Linear Array (MLA) multiple optics, the MLA adjacent arrays, the imaging spectrometer, and the MLA beam splitter are discussed. The beam-splitter design approach utilizes, in addition to relatively broad spectral region separation, a movable Multiband Selection Device (MSD), placed between the exit ports of the beam splitter and a linear array detector, permitting many bands to be selected. The successful development and test of the MSD is described. The device demonstrated the capacity to provide a wide field of view, visible-to-near IR/short-wave IR and thermal IR capability, and a multiplicity of spectral bands and polarization measuring means, as well as a reasonable size and weight at minimal cost and risk compared to a spectrometer design approach.

  9. Segmentation methodology for automated classification and differentiation of soft tissues in multiband images of high-resolution ultrasonic transmission tomography.

    PubMed

    Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z

    2006-08-01

    This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical kappa-means approach for unsupervised clustering (UC). To prevent the trapping of the current iterative minimization AC algorithm in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm that seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.

  10. Development of an automated data acquisition and processing pipeline using multiple telescopes for observing transient phenomena

    NASA Astrophysics Data System (ADS)

    Savant, Vaibhav; Smith, Niall

    2016-07-01

    We report on the current status in the development of a pilot automated data acquisition and reduction pipeline based around the operation of two nodes of remotely operated robotic telescopes based in California, USA and Cork, Ireland. The observatories are primarily used as a testbed for automation and instrumentation and as a tool to facilitate STEM (Science Technology Engineering Mathematics) promotion. The Ireland node is situated at Blackrock Castle Observatory (operated by Cork Institute of Technology) and consists of two optical telescopes - 6" and 16" OTAs housed in two separate domes - while the node in California is its 6" replica. Together they form a pilot Telescope ARrAy known as TARA. QuickPhot is an automated data reduction pipeline designed primarily to shed light on the microvariability of blazars, employing precision optical photometry and using data from the TARA telescopes as they constantly monitor predefined targets whenever observing conditions are favourable. After carrying out aperture photometry, if any variability above a given threshold is observed, the reporting telescope will communicate the source concerned and the other nodes will follow up with multi-band observations, taking advantage of the fact that they are located in strategically separated time-zones. Ultimately we wish to investigate the applicability of Shock-in-Jet and Geometric models. These try to explain the processes at work in AGNs which result in the formation of jets, by looking for temporal and spectral variability in TARA multi-band observations. We are also experimenting with a Two-channel Optical PHotometric Imaging CAMera (TOΦCAM) that we have developed and which has been optimised for simultaneous two-band photometry on our 16" OTA.
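
    The variability trigger can be illustrated with differential photometry: the target is compared against a non-variable star so that shared atmospheric terms cancel, and an epoch is flagged when the residual exceeds a robust-sigma threshold. Everything in the sketch below (light curves, event amplitude, threshold k) is synthetic.

```python
# Differential-photometry variability flagging with a robust MAD-based sigma.
import numpy as np

rng = np.random.default_rng(3)
target = 14.2 + rng.normal(0, 0.01, 200)       # instrumental magnitudes
target[120] -= 0.15                            # injected brightening event
comparison = 13.0 + rng.normal(0, 0.01, 200)   # nearby non-variable star

diff = target - comparison                     # removes shared atmospheric terms
resid = diff - np.median(diff)
sigma = 1.4826 * np.median(np.abs(resid))      # robust (MAD-based) scatter

k = 5.0
alerts = np.flatnonzero(np.abs(resid) > k * sigma)
print("alert epochs:", alerts)                 # -> [120]
```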

  11. Poly-Pattern Compressive Segmentation of ASTER Data for GIS

    NASA Technical Reports Server (NTRS)

    Myers, Wayne; Warner, Eric; Tutwiler, Richard

    2007-01-01

    Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser and finer level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIS renderings of this A-level approximation are called pattern pictures, which have the appearance of color-enhanced images. The two-byte version, consisting of thousands of B-level segments, provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.

  12. Radiometric calibration of spacecraft using small lunar images

    USGS Publications Warehouse

    Kieffer, Hugh H.; Anderson, James M.; Becker, Kris J.

    1999-01-01

    In this study, the data reduction steps that can be used to extract the lunar irradiance from low resolution images of the Moon are examined and the attendant uncertainties are quantitatively assessed. The response integrated over an image is compared to a lunar irradiance model being developed from terrestrial multi-band photometric observations over the 350-2500 nm range.

  13. The Kaguya Mission Overview

    NASA Astrophysics Data System (ADS)

    Kato, Manabu; Sasaki, Susumu; Takizawa, Yoshisada

    2010-07-01

    The Japanese lunar orbiter Kaguya (SELENE) was successfully launched by an H2A rocket on September 14, 2007. On October 4, 2007, after passing through a phasing orbit 2.5 times around the Earth, Kaguya was inserted into a large elliptical orbit circling the Moon. After the apolune altitude was lowered, Kaguya reached its nominal 100 km circular polar observation orbit on October 19. During the process of realizing the nominal orbit, two subsatellites, Okina (Rstar) and Ouna (Vstar), were released into elliptical orbits with 2400 km and 800 km apolunes, respectively; both elliptical orbits had 100 km perilunes. After the functionality of the bus system was verified, four radar antennas and a magnetometer boom were extended, and a plasma imager was deployed. Acquisition of scientific data was carried out for the 10-month nominal mission that began in mid-December 2007. During the 8-month extended mission, magnetic fields and gamma-rays were measured from lower orbits; in addition, low-altitude observations were carried out using a Terrain Camera, a Multiband Imager, and an HDTV camera. New data pertaining to an intense magnetic anomaly and GRS data with higher spatial resolution were acquired to study the magnetism and the elemental distribution of the Moon. After some orbital maneuvers were performed using the saved fuel, the Kaguya spacecraft finally impacted the southeast part of the Moon. The Kaguya team has archived the initial science data; since November 2, 2009, the data have been made available to the public and can be accessed at the Kaguya homepage of JAXA. The team also continues to study and publish initial results in international journals. The science goals of the mission and the onboard instruments, including initial science results, are described in this overview.

  14. Devastated Stellar Neighborhood

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image from NASA's Spitzer Space Telescope shows the nasty effects of living near a group of massive stars: radiation and winds from the massive stars (white spot in center) are blasting planet-making material away from stars like our sun. The planetary material can be seen as comet-like tails behind three stars near the center of the picture. The tails are pointing away from the massive stellar furnaces that are blowing them outward.

    The picture is the best example yet of multiple sun-like stars being stripped of their planet-making dust by massive stars.

    The sun-like stars are about two to three million years old, an age when planets are thought to be growing out of surrounding disks of dust and gas. Astronomers say the dust being blown from the stars is from their outer disks. This means that any Earth-like planets forming around the sun-like stars would be safe, while outer planets like Uranus might be nothing more than dust in the wind.

    This image shows a portion of the W5 star-forming region, located 6,500 light-years away in the constellation Cassiopeia. It is a composite of infrared data from Spitzer's infrared array camera and multiband imaging photometer. Light with a wavelength of 3.5 microns is blue, while light from dust at 24 microns is orange-red.

  15. Laser-Sharp Jet Splits Water

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A jet of gas firing out of a very young star can be seen ramming into a wall of material in this infrared image from NASA's Spitzer Space Telescope.

    The young star, called HH 211-mm, is cloaked in dust and can't be seen. But streaming away from the star are bipolar jets, color-coded blue in this view. The pink blob at the end of the jet to the lower left shows where the jet is hitting a wall of material. The jet is hitting the wall so hard that shock waves are being generated, which causes ice to vaporize off dust grains. The shock waves are also heating material up, producing energetic ultraviolet radiation. The ultraviolet radiation then breaks the water vapor molecules apart.

    The red color at the end of the lower jet represents shock-heated iron, sulfur and dust, while the blue color in both jets denotes shock-heated hydrogen molecules.

    HH 211-mm is part of a cluster of about 300 stars, called IC 348, located 1,000 light-years away in the constellation Perseus.

    This image is a composite of infrared data from Spitzer's infrared array camera and its multiband imaging photometer. Light with wavelengths of 3.6 and 4.5 microns is blue; 8-micron light is green; and 24-micron light is red.

  16. A Photogrammetric Pipeline for the 3D Reconstruction of CaSSIS Images on Board ExoMars TGO

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Mudric, T.; Pommerol, A.; Thomas, N.; Cremonese, G.

    2017-07-01

    CaSSIS (Colour and Stereo Surface Imaging System) is the stereo imaging system onboard the European Space Agency and ROSCOSMOS ExoMars Trace Gas Orbiter (TGO), which was launched on 14 March 2016 and entered an elliptical Mars orbit on 19 October 2016. During the first bounded orbits, CaSSIS returned its first multiband images, taken on 22 and 26 November 2016. The telescope acquired 11 images, each composed of 30 framelets, of the Martian surface near the Hebes Chasma and Noctis Labyrinthus regions, reaching a distance of 250 km from the surface at closest approach. Despite the eccentricity of this first orbit, CaSSIS provided one stereo pair with a mean ground resolution of 6 m from a mean distance of 520 km. The team at the Astronomical Observatory of Padova (OAPD-INAF) is involved in different stereo-oriented missions and is developing software for the generation of Digital Terrain Models from the CaSSIS images. The software will then be adapted for other projects involving stereo camera systems. To compute accurate 3D models, several sequential methods and tools have been developed. The preliminary pipeline provides the generation of rectified images from the CaSSIS framelets, a matching core, and post-processing methods. The software includes, in particular, automatic tie-point detection by the Speeded Up Robust Features (SURF) operator, an initial search for correspondences through a Normalized Cross-Correlation (NCC) algorithm, and the Adaptive Least Squares Matching (LSM) algorithm in a hierarchical approach. This work shows a preliminary DTM generated from the first CaSSIS stereo images.
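
    The initial correspondence search via normalized cross-correlation can be sketched directly: a template from one rectified image is slid across a search window in the other, and the location of the peak NCC score is taken as the match. The brute-force numpy version below is illustrative only; sizes and data are synthetic.

```python
# Brute-force normalized cross-correlation (NCC) template matching.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match(template, search):
    """Return the offset in `search` that maximizes the NCC score."""
    th, tw = template.shape
    best, best_xy = -1.0, (0, 0)
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            score = ncc(template, search[i:i + th, j:j + tw])
            if score > best:
                best, best_xy = score, (i, j)
    return best_xy, best

rng = np.random.default_rng(4)
right = rng.random((60, 80))
left_patch = right[20:31, 35:46].copy()        # true offset is (20, 35)
print(match(left_patch, right))                # -> ((20, 35), ~1.0)
```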

  17. Evaluation of slice accelerations using multiband echo planar imaging at 3 Tesla

    PubMed Central

    Xu, Junqian; Moeller, Steen; Auerbach, Edward J.; Strupp, John; Smith, Stephen M.; Feinberg, David A.; Yacoub, Essa; Uğurbil, Kâmil

    2013-01-01

    We evaluate residual aliasing among simultaneously excited and acquired slices in slice accelerated multiband (MB) echo planar imaging (EPI). No in-plane accelerations were used in order to maximize and evaluate achievable slice acceleration factors at 3 Tesla. We propose a novel leakage (L-) factor to quantify the effects of signal leakage between simultaneously acquired slices. With a standard 32-channel receiver coil at 3 Tesla, we demonstrate that slice acceleration factors of up to eight (MB = 8) with blipped controlled aliasing in parallel imaging (CAIPI), in the absence of in-plane accelerations, can be used routinely with acceptable image quality and integrity for whole brain imaging. Spectral analyses of single-shot fMRI time series demonstrate that temporal fluctuations due to both neuronal and physiological sources were distinguishable and comparable up to slice-acceleration factors of nine (MB = 9). The increased temporal efficiency could be employed to achieve, within a given acquisition period, higher spatial resolution, increased fMRI statistical power, multiple TEs, faster sampling of temporal events in a resting state fMRI time series, increased sampling of q-space in diffusion imaging, or more quiet time during a scan. PMID:23899722

  18. A SPITZER VIEW OF STAR FORMATION IN THE CYGNUS X NORTH COMPLEX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beerer, I. M.; Koenig, X. P.; Hora, J. L.

    2010-09-01

    We present new images and photometry of the massive star-forming complex Cygnus X obtained with the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer for Spitzer (MIPS) on board the Spitzer Space Telescope. A combination of IRAC, MIPS, UKIRT Deep Infrared Sky Survey, and Two Micron All Sky Survey data are used to identify and classify young stellar objects (YSOs). Of the 8231 sources detected exhibiting infrared excess in Cygnus X North, 670 are classified as class I and 7249 are classified as class II. Using spectra from the FAST Spectrograph at the Fred L. Whipple Observatory and Hectospec on the MMT, we spectrally typed 536 sources in the Cygnus X complex to identify the massive stars. We find that YSOs tend to be grouped in the neighborhoods of massive B stars (spectral types B0 to B9). We present a minimal spanning tree analysis of clusters in two regions in Cygnus X North. The fraction of infrared excess sources that belong to clusters with ≥10 members is found to be 50%-70%. Most class II objects lie in dense clusters within blown-out H II regions, while class I sources tend to reside in more filamentary structures along the bright-rimmed clouds, indicating possible triggered star formation.

  19. VizieR Online Data Catalog: Star clusters automatically detected in the LMC (Bitsakis+, 2017)

    NASA Astrophysics Data System (ADS)

    Bitsakis, T.; Bonfini, P.; Gonzalez-Lopezlira, R. A.; Ramirez-Siordia, V. H.; Bruzual, G.; Charlot, S.; Maravelias, G.; Zaritsky, D.

    2018-03-01

    The archival data used in this work were acquired from several diverse large surveys, which mapped the Magellanic Clouds at various bands. Simons+ (2014AdSpR..53..939S) composed a mosaic using archival data from the Galaxy Evolution Explorer (GALEX) at the near-ultraviolet (NUV) band (λeff=2275Å). The mosaic covers an area of 15deg2 on the LMC. The central ~3x1deg2 of the LMC (the bar region) was later observed by the Swift Ultraviolet-Optical Telescope (UVOT) Magellanic Clouds Survey (SUMAC; Siegel+ 2014AJ....148..131S). The optical data used here are from the Magellanic Cloud Photometric Survey (MCPS; Zaritsky+ 2004, J/AJ/128/1606). These authors observed the central 64deg2 of the LMC with 3.8-5.2 minute exposures in the Johnson U, B, V, and Gunn i filters of the Las Campanas Swope Telescope. Meixner+ (2006, J/AJ/132/2268) performed a uniform and unbiased imaging survey of the LMC (called Surveying the Agents of a Galaxy's Evolution, or SAGE), covering the central 7deg2 with both the Infrared Array Camera (IRAC) and the Multiband Imaging Photometer (MIPS) on board the Spitzer Space Telescope. (1 data file).

  20. An Application of Multi-band Forced Photometry to One Square Degree of SERVS: Accurate Photometric Redshifts and Implications for Future Science

    NASA Astrophysics Data System (ADS)

    Nyland, Kristina; Lacy, Mark; Sajina, Anna; Pforr, Janine; Farrah, Duncan; Wilson, Gillian; Surace, Jason; Häußler, Boris; Vaccari, Mattia; Jarvis, Matt

    2017-05-01

    We apply The Tractor image modeling code to improve upon existing multi-band photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS). SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 μm over five well-studied deep fields spanning 18 deg2. In concert with data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to provide a census of the properties of massive galaxies out to z ≈ 5. To accomplish this, we are using The Tractor to perform “forced photometry.” This technique employs prior measurements of source positions and surface brightness profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic Observations survey to model and fit the fluxes at lower-resolution bands. We discuss our implementation of The Tractor over a square-degree test region within the XMM Large Scale Structure field with deep imaging in 12 NIR/optical bands. Our new multi-band source catalogs offer a number of advantages over traditional position-matched catalogs, including (1) consistent source cross-identification between bands, (2) de-blending of sources that are clearly resolved in the fiducial band but blended in the lower resolution SERVS data, (3) a higher source detection fraction in each band, (4) a larger number of candidate galaxies in the redshift range 5 < z < 6, and (5) a statistically significant improvement in the photometric redshift accuracy as evidenced by the significant decrease in the fraction of outliers compared to spectroscopic redshifts. Thus, forced photometry using The Tractor offers a means of improving the accuracy of multi-band extragalactic surveys designed for galaxy evolution studies. We will extend our application of this technique to the full SERVS footprint in the future.
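
    The essence of forced photometry is that, with positions and surface brightness profiles fixed from the fiducial band, the per-source fluxes enter the image model linearly and can be solved by least squares, which naturally de-blends overlapping sources. The toy example below (Gaussian profiles, synthetic noise; not The Tractor's actual model) recovers the fluxes of a blended pair.

```python
# Toy forced photometry: profiles fixed from a fiducial band, fluxes fit
# to a low-resolution image by linear least squares.
import numpy as np

def gaussian_profile(shape, x0, y0, sigma):
    """Unit-flux circular Gaussian profile on a pixel grid."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

shape = (48, 48)
# Positions and sizes taken as known from the high-resolution fiducial band
profiles = [gaussian_profile(shape, 18, 20, 3.0),
            gaussian_profile(shape, 24, 26, 3.0)]   # a blended pair
true_flux = np.array([500.0, 120.0])

rng = np.random.default_rng(5)
image = sum(f * p for f, p in zip(true_flux, profiles))
image += rng.normal(0, 0.05, shape)

A = np.stack([p.ravel() for p in profiles], axis=1)  # design matrix
flux, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
print(flux)                                          # -> approximately [500, 120]
```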

  1. Compact multi-band fluorescent microscope with an electrically tunable lens for autofocusing

    PubMed Central

    Wang, Zhaojun; Lei, Ming; Yao, Baoli; Cai, Yanan; Liang, Yansheng; Yang, Yanlong; Yang, Xibin; Li, Hui; Xiong, Daxi

    2015-01-01

    Autofocusing is a routine technique for redressing the focus drift that occurs in time-lapse microscopic image acquisition. To date, most automatic microscopes are designed on the distance detection scheme to fulfill the autofocusing operation, which may suffer from the low contrast of the reflected signal due to the refractive index mismatch at the water/glass interface. To achieve high autofocusing speed with minimal motion artifacts, we developed a compact multi-band fluorescent microscope with an electrically tunable lens (ETL) device for autofocusing. A modified searching algorithm based on equidistant scanning and curve fitting is proposed, which no longer requires a single-peak focus curve and thus efficiently restrains the impact of external disturbance. This technique enables us to achieve autofocusing times down to 170 ms and a reproducibility of over 97%. The imaging head of the microscope has dimensions of 12 cm × 12 cm × 6 cm. This portable instrument can easily fit inside standard incubators for real-time imaging of living specimens. PMID:26601001
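
    The equidistant-scan-plus-curve-fit idea can be sketched simply: sample a sharpness metric at evenly spaced lens settings, then refine the best sample with a local parabolic fit whose vertex gives the focus setting. The metric and the synthetic focus curve below are illustrative, not the paper's algorithm.

```python
# Autofocus sketch: coarse equidistant scan of a focus metric, then a
# 3-point parabolic fit to refine the peak.
import numpy as np

def sharpness(img):
    """Variance of a simple Laplacian response as a focus metric."""
    lap = (-4 * img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

# Stand-in focus curve: in practice each score would come from sharpness()
# applied to a frame captured at that ETL setting; true peak is at 0.37.
settings = np.linspace(0.0, 1.0, 11)               # equidistant coarse scan
scores = np.exp(-((settings - 0.37) / 0.15) ** 2)

i = int(np.argmax(scores))
i = min(max(i, 1), len(settings) - 2)              # keep a 3-point neighborhood
# Parabola through the best sample and its neighbors; vertex = refined focus
a, b, c = np.polyfit(settings[i - 1:i + 2], scores[i - 1:i + 2], 2)
print("refined focus setting:", -b / (2 * a))      # close to 0.37
```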

  2. scarlet: Source separation in multi-band images by Constrained Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert

    2018-03-01

    SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
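
    The constrained matrix factorization at scarlet's core models the bands-by-pixels scene Y as a product A S of per-source SEDs and non-negative morphologies. The sketch below uses plain Lee-Seung multiplicative NMF updates to convey the idea; scarlet itself imposes many further constraints (monotonicity, symmetry, PSF matching) that are omitted here.

```python
# Non-negative matrix factorization of a synthetic multi-band scene:
# Y (bands x pixels) ~ A (bands x sources) @ S (sources x pixels).
import numpy as np

rng = np.random.default_rng(6)
n_bands, n_pix, n_src = 5, 400, 2
A_true = rng.random((n_bands, n_src))          # per-source SEDs
S_true = rng.random((n_src, n_pix)) ** 3       # sparse-ish morphologies
Y = A_true @ S_true + 0.01 * rng.random((n_bands, n_pix))

A = rng.random((n_bands, n_src))
S = rng.random((n_src, n_pix))
eps = 1e-9
for _ in range(500):                           # Lee-Seung multiplicative updates
    S *= (A.T @ Y) / (A.T @ A @ S + eps)       # keeps S non-negative
    A *= (Y @ S.T) / (A @ S @ S.T + eps)       # keeps A non-negative

print("residual rms:", np.sqrt(np.mean((Y - A @ S) ** 2)))
```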

  3. Interleaved EPI based fMRI improved by multiplexed sensitivity encoding (MUSE) and simultaneous multi-band imaging.

    PubMed

    Chang, Hing-Chiu; Gaur, Pooja; Chou, Ying-hui; Chu, Mei-Lan; Chen, Nan-kuei

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is a non-invasive and powerful imaging tool for detecting brain activities. The majority of fMRI studies are performed with single-shot echo-planar imaging (EPI) due to its high temporal resolution. Recent studies have demonstrated that, by increasing the spatial resolution of fMRI, previously unidentified neuronal networks can be measured. However, it is challenging to improve the spatial resolution of conventional single-shot EPI based fMRI. Although multi-shot interleaved EPI is superior to single-shot EPI in terms of improved spatial resolution, reduced geometric distortions, and a sharper point spread function (PSF), interleaved EPI based fMRI has two main limitations: 1) the imaging throughput is lower in interleaved EPI; 2) the magnitude and phase signal variations among EPI segments (due to physiological noise, subject motion, and B0 drift) are translated to significant in-plane aliasing artifacts across the field of view (FOV). Here we report a method that integrates multiple approaches to address the technical limitations of interleaved EPI-based fMRI. Firstly, the multiplexed sensitivity-encoding (MUSE) post-processing algorithm is used to suppress in-plane aliasing artifacts resulting from time-domain signal instabilities during dynamic scans. Secondly, a simultaneous multi-band interleaved EPI pulse sequence, with a controlled aliasing scheme incorporated, is implemented to increase the imaging throughput. Thirdly, the MUSE algorithm is then generalized to accommodate fMRI data obtained with our multi-band interleaved EPI pulse sequence, suppressing both in-plane and through-plane aliasing artifacts. The blood-oxygenation-level-dependent (BOLD) signal detectability and the scan throughput can be significantly improved for interleaved EPI-based fMRI. Our human fMRI data obtained from 3 Tesla systems demonstrate the effectiveness of the developed methods. It is expected that future fMRI studies requiring high spatial resolvability and fidelity will largely benefit from the reported techniques.

  4. Enhanced vibrational spectroscopy, intracellular refractive indexing for label-free biosensing and bioimaging by multiband plasmonic-antenna array.

    PubMed

    Chen, Cheng-Kuang; Chang, Ming-Hsuan; Wu, Hsieh-Ting; Lee, Yao-Chang; Yen, Ta-Jen

    2014-10-15

    In this study, we report a multiband plasmonic-antenna array that bridges optical biosensing and intracellular bioimaging without requiring a labeling process or coupler. First, a compact plasmonic-antenna array is designed that exhibits a bandwidth of several octaves for use in both multi-band plasmonic resonance-enhanced vibrational spectroscopy and refractive index probing. Second, a single-element plasmonic antenna can be used as a multifunctional sensing pixel that enables mapping the distribution of targets in thin films and biological specimens by enhancing the signals of vibrational signatures and sensing the refractive index contrast. Finally, reliable intracellular observation using the fabricated plasmonic-antenna array was demonstrated from the vibrational signatures and the intracellular refractive index contrast, requiring neither labeling nor a coupler. These unique features enable the plasmonic-antenna array to function in a label-free manner, facilitating biosensing and imaging development.

  5. Ground state, collective mode, phase soliton and vortex in multiband superconductors.

    PubMed

    Lin, Shi-Zeng

    2014-12-10

    This article reviews theoretical and experimental work on the novel physics in multiband superconductors. Multiband superconductors are characterized by multiple superconducting energy gaps in different bands, with interaction between Cooper pairs in these bands. The discovery of prominent multiband superconductors, MgB2 and later the iron-based superconductors, has triggered enormous interest in multiband superconductivity. Most recently discovered superconductors exhibit multiband features. Multiband superconductors possess novel properties that are not shared with their single-band counterparts. Examples include the time-reversal symmetry broken state in multiband superconductors with frustrated interband couplings; the collective oscillation of the number of Cooper pairs between different bands, known as the Leggett mode; and the phase soliton and fractional vortex, which are the main focus of this review. This review presents a survey of a wide range of theoretical exploratory and experimental investigations of novel physics in multiband superconductors. A vast amount of information derived from these studies is shown to highlight unusual and unique properties of multiband superconductors and to reveal the challenges and opportunities in the research on multiband superconductivity.

  6. Stellar Populations of Lyα Emitters at z ~ 6-7: Constraints on the Escape Fraction of Ionizing Photons from Galaxy Building Blocks

    NASA Astrophysics Data System (ADS)

    Ono, Yoshiaki; Ouchi, Masami; Shimasaku, Kazuhiro; Dunlop, James; Farrah, Duncan; McLure, Ross; Okamura, Sadanori

    2010-12-01

    We investigate the stellar populations of Lyα emitters (LAEs) at z = 5.7 and 6.6 in a 0.65 deg2 sky of the Subaru/XMM-Newton Deep Survey (SXDS) Field, using deep images taken with the Subaru/Suprime-Cam, United Kingdom Infrared Telescope/Wide Field Infrared Camera, and Spitzer/Infrared Array Camera (IRAC). We produce stacked multiband images at each redshift from 165 (z = 5.7) and 91 (z = 6.6) IRAC-undetected objects to derive typical spectral energy distributions (SEDs) of z ~ 6-7 LAEs for the first time. The stacked LAEs have as blue UV continua as the Hubble Space Telescope (HST)/Wide Field Camera 3 (WFC3) z-dropout galaxies of similar M_UV, with a spectral slope β ~ -3, but at the same time they have red UV-to-optical colors with detection in the 3.6 μm band. Using SED fitting we find that the stacked LAEs have low stellar masses of ~(3-10) × 10^7 M_sun, very young ages of ~1-3 Myr, negligible dust extinction, and strong nebular emission from the ionized interstellar medium, although the z = 6.6 object is fitted similarly well with high-mass models without nebular emission; inclusion of nebular emission reproduces the red UV-to-optical colors while keeping the UV colors sufficiently blue. We infer that typical LAEs at z ~ 6-7 are building blocks of galaxies seen at lower redshifts. We find a tentative decrease in the Lyα escape fraction from z = 5.7 to 6.6, which may imply an increase in the intergalactic medium neutral fraction. From the minimum contribution of nebular emission required to fit the observed SEDs, we place an upper limit on the escape fraction of ionizing photons of f_esc^ion ~ 0.6 at z = 5.7 and ~0.9 at z = 6.6. We also compare the stellar populations of our LAEs with those of stacked HST/WFC3 z-dropout galaxies. Based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.

  7. Rover exploration on the lunar surface; a science proposal for SELENE-B mission

    NASA Astrophysics Data System (ADS)

    Sasaki, S.; Kubota, T.; Akiyama, H.; Hirata, N.; Kunii, Y.; Matsumoto, K.; Okada, T.; Otake, M.; Saiki, K.; Sugihara, T.

    A new lunar landing mission (SELENE-B) is now under consideration in Japan, and scientific investigation plans using a rover are proposed. To clarify the origin and evolution of the Moon, the early crustal formation and the later mare volcanic processes remain to be understood. We propose two geological investigation plans: exploration of a crater central peak to discover subsurface materials, and exploration of dome-cone structures on a young mare region. We propose a multi-band macro/micro camera using an AOTF, an X-ray spectrometer/diffractometer, and a gamma-ray spectrometer. Since observation of rock fragments in brecciated rocks is necessary, the rover should have a mechanism for cutting or scraping rocks. In our current scenario, landing should be performed about 500 m from the main target (the foot of a crater central peak or a cone/dome). After a spectral survey by the multi-band camera on the lander, the rover should be deployed for geological investigation. The rover should first make a short (a few tens of meters) round trip, then perform traverse observations toward the main target. Some technological investigations for the SELENE-B project will also be presented.

  8. The SED Machine: A Robotic Spectrograph for Fast Transient Classification

    NASA Astrophysics Data System (ADS)

    Blagorodnova, Nadejda; Neill, James D.; Walters, Richard; Kulkarni, Shrinivas R.; Fremling, Christoffer; Ben-Ami, Sagi; Dekany, Richard G.; Fucik, Jason R.; Konidaris, Nick; Nash, Reston; Ngeow, Chow-Choong; Ofek, Eran O.; O’ Sullivan, Donal; Quimby, Robert; Ritter, Andreas; Vyhmeister, Karl E.

    2018-03-01

    Current time domain facilities are finding several hundreds of transient astronomical events a year. The discovery rate is expected to increase in the future as new surveys such as the Zwicky Transient Facility (ZTF) and the Large Synoptic Survey Telescope (LSST) come online. Presently, the rate at which transients are classified is approximately one order of magnitude lower than the discovery rate, leading to an increasing “follow-up drought”. Existing telescopes with moderate aperture can help address this deficit when equipped with spectrographs optimized for spectral classification. Here, we provide an overview of the design, operations, and first results of the Spectral Energy Distribution Machine (SEDM), operating on the Palomar 60-inch telescope (P60). The instrument is optimized for classification and high observing efficiency. It combines a low-resolution (R ∼ 100) integral field unit (IFU) spectrograph with the “Rainbow Camera” (RC), a multi-band field acquisition camera that also serves as a multi-band (ugri) photometer. The SEDM was commissioned during the operation of the intermediate Palomar Transient Factory (iPTF) and has already lived up to its promise. The success of the SEDM demonstrates the value of spectrographs optimized for spectral classification.

  9. Geologic studies of Yellowstone National Park imagery using an electronic image enhancement system

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.

    1970-01-01

    The image enhancement system is described, as well as the kinds of enhancement attained. Results were obtained from various kinds of remote sensing imagery (mainly black and white multiband, color, color infrared, thermal infrared, and side-looking K-band radar) of parts of Yellowstone National Park. Possible additional fields of application of these techniques are considered.

  10. Collation of earth resources data collected by ERIM airborne sensors

    NASA Technical Reports Server (NTRS)

    Hasell, P. G., Jr.

    1975-01-01

    Earth resources imagery from nine years of data collection with developmental airborne sensors is cataloged for reference. The imaging sensors include single and multiband line scanners and side-looking radars. The operating wavelengths of the sensors include ultraviolet, visible and infrared band scanners, and X- and L-band radar. Imagery from all bands (radar and scanner) were collected at some sites and many sites had repeated coverage. The multiband scanner data was radiometrically calibrated. Illustrations show how the data can be used in earth resource investigations. References are made to published reports which have made use of the data in completed investigations. Data collection sponsors are identified and a procedure described for gaining access to the data.

  11. Serendipitous discovery of an infrared bow shock near PSR J1549–4848 with Spitzer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhongxiang; Kaplan, David L.; Slane, Patrick

    2013-06-01

    We report on the discovery of an infrared cometary nebula around PSR J1549–4848 in our Spitzer survey of a few middle-aged radio pulsars. Following the discovery, multi-wavelength imaging and spectroscopic observations of the nebula were carried out. We detected the nebula in Spitzer Infrared Array Camera 8.0, Multiband Imaging Photometer for Spitzer 24 and 70 μm imaging, and in Spitzer IRS 7.5-14.4 μm spectroscopic observations, and also in the Wide-field Infrared Survey Explorer all-sky survey at 12 and 22 μm. These data were analyzed in detail, and we find that the nebula can be described with a standard bow shock shape, and that its spectrum contains polycyclic aromatic hydrocarbon and H₂ emission features. However, it is not certain which object drives the nebula. We analyze the field stars and conclude that none of them can be the associated object because stars with a strong wind or mass ejection that usually produce bow shocks are much brighter than the field stars. The pulsar is approximately 15'' away from the region in which the associated object is expected to be located. In order to resolve the discrepancy, we suggest that a highly collimated wind could be emitted from the pulsar and produce the bow shock. X-ray imaging to detect the interaction of the wind with the ambient medium, and high-spatial-resolution radio imaging to determine the proper motion of the pulsar, should be carried out; this will help verify the association of the pulsar with the bow shock nebula.

  12. An Application of Multi-band Forced Photometry to One Square Degree of SERVS: Accurate Photometric Redshifts and Implications for Future Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nyland, Kristina; Lacy, Mark; Sajina, Anna

    We apply The Tractor image modeling code to improve upon existing multi-band photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS). SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 μm over five well-studied deep fields spanning 18 deg². In concert with data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to provide a census of the properties of massive galaxies out to z ≈ 5. To accomplish this, we are using The Tractor to perform “forced photometry.” This technique employs prior measurements of source positions and surface brightness profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic Observations survey to model and fit the fluxes at lower-resolution bands. We discuss our implementation of The Tractor over a square-degree test region within the XMM Large Scale Structure field with deep imaging in 12 NIR/optical bands. Our new multi-band source catalogs offer a number of advantages over traditional position-matched catalogs, including (1) consistent source cross-identification between bands, (2) de-blending of sources that are clearly resolved in the fiducial band but blended in the lower resolution SERVS data, (3) a higher source detection fraction in each band, (4) a larger number of candidate galaxies in the redshift range 5 < z < 6, and (5) a statistically significant improvement in the photometric redshift accuracy as evidenced by the significant decrease in the fraction of outliers compared to spectroscopic redshifts. Thus, forced photometry using The Tractor offers a means of improving the accuracy of multi-band extragalactic surveys designed for galaxy evolution studies. We will extend our application of this technique to the full SERVS footprint in the future.
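    With the source position and profile held fixed from the fiducial band, the per-band flux in forced photometry reduces to a linear least-squares amplitude. A minimal sketch of that core step (not The Tractor's actual implementation, which also fits overlapping sources and per-band PSFs jointly):

    ```python
    import numpy as np

    def forced_flux(image, model):
        """Least-squares flux of a source at a known position.

        image : background-subtracted cutout of the low-resolution band
        model : unit-flux PSF-convolved profile, evaluated at the source
                position and shape measured in the fiducial band
        Position and shape are fixed, so the flux has a closed form.
        """
        return np.sum(image * model) / np.sum(model ** 2)
    ```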

  13. Characterization of SWIR cameras by MRC measurements

    NASA Astrophysics Data System (ADS)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level, and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of the MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g., the USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, the irradiation parameters first have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g., an incandescent lamp), and the irradiance has to be measured in W/m² instead of lux (lm/m²). Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visible range. Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.
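    The step from a measured MRC curve to an observation range can be sketched in a few lines. The sketch below assumes a Johnson-type criterion (a fixed number of resolvable cycles across the target) and an MRC curve sampled with frequency, and hence contrast, ascending; the function and parameter names are illustrative:

    ```python
    import numpy as np

    def observation_range_km(mrc_freq, mrc_contrast, target_contrast,
                             target_size_m, cycles_required=3.0):
        """Achievable observation range from an MRC curve.

        mrc_freq        : spatial frequencies [cycles/mrad], ascending
        mrc_contrast    : minimum resolvable contrast at those frequencies
        target_contrast : apparent target contrast (after atmospheric losses)
        target_size_m   : critical target dimension [m]
        """
        # highest spatial frequency still resolvable at this contrast
        f_max = np.interp(target_contrast, mrc_contrast, mrc_freq)
        # a target of size h at range R subtends 1000*h/R mrad, so
        # cycles on target = f * 1000 * h / R; solving for R gives km
        return target_size_m * f_max / cycles_required
    ```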

  14. Multiband multi-echo imaging of simultaneous oxygenation and flow timeseries for resting state connectivity.

    PubMed

    Cohen, Alexander D; Nencka, Andrew S; Lebel, R Marc; Wang, Yang

    2017-01-01

    A novel sequence has been introduced that combines multiband imaging with a multi-echo acquisition for simultaneous high spatial resolution pseudo-continuous arterial spin labeling (ASL) and blood-oxygenation-level dependent (BOLD) echo-planar imaging (MBME ASL/BOLD). Resting-state connectivity in healthy adult subjects was assessed using this sequence. Four echoes were acquired with a multiband acceleration of four, in order to increase spatial resolution, shorten repetition time, and reduce slice-timing effects on the ASL signal. In addition, by acquiring four echoes, advanced multi-echo independent component analysis (ME-ICA) denoising could be employed to increase the signal-to-noise ratio (SNR) and BOLD sensitivity. Seed-based and dual-regression approaches were utilized to analyze functional connectivity. Cerebral blood flow (CBF) and BOLD coupling was also evaluated by correlating the perfusion-weighted timeseries with the BOLD timeseries. These metrics were compared between single echo (E2), multi-echo combined (MEC), multi-echo combined and denoised (MECDN), and perfusion-weighted (PW) timeseries. Temporal SNR increased for the MECDN data compared to the MEC and E2 data. Connectivity also increased, in terms of correlation strength and network size, for the MECDN compared to the MEC and E2 datasets. CBF and BOLD coupling was increased in major resting-state networks, and that correlation was strongest for the MECDN datasets. These results indicate our novel MBME ASL/BOLD sequence, which collects simultaneous high-resolution ASL/BOLD data, could be a powerful tool for detecting functional connectivity and dynamic neurovascular coupling during the resting state. The collection of more than two echoes facilitates the use of ME-ICA denoising to greatly improve the quality of resting state functional connectivity MRI.
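    Before or alongside ME-ICA, the echoes are usually merged into one combined timeseries. A minimal sketch of the common T2*-weighted ("optimal") echo combination, shown here as one standard choice rather than the exact pipeline of this study:

    ```python
    import numpy as np

    def optimal_combine(echoes, tes, t2star):
        """T2*-weighted combination of multi-echo BOLD volumes.

        echoes : (n_echoes, X, Y, Z) array, one volume per echo
        tes    : echo times, shape (n_echoes,)
        t2star : estimated T2* map, shape (X, Y, Z)
        Weights w_i ~ TE_i * exp(-TE_i / T2*), normalized over echoes.
        Loop over time points to combine a 4-D series.
        """
        tes = np.asarray(tes).reshape(-1, 1, 1, 1)
        w = tes * np.exp(-tes / t2star)
        return np.sum(w * echoes, axis=0) / np.sum(w, axis=0)
    ```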

  15. SIOUX project: a simultaneous multiband camera for exoplanet atmospheres studies

    NASA Astrophysics Data System (ADS)

    Christille, Jean Marc; Bonomo, Aldo Stefano; Borsa, Francesco; Busonero, Deborah; Calcidese, Paolo; Claudi, Riccardo; Damasso, Mario; Giacobbe, Paolo; Molinari, Emilio; Pace, Emanuele; Riva, Alberto; Sozzetti, Alesandro; Toso, Giorgio; Tresoldi, Daniela

    2016-08-01

    The exoplanet revolution is well underway. The last decade has seen order-of-magnitude increases in the number of known planets beyond the Solar system. Detailed characterization of exoplanetary atmospheres provides the best means for distinguishing the makeup of their outer layers, and the only hope for understanding the interplay between initial composition chemistry, temperature-pressure atmospheric profiles, dynamics and circulation. While pioneering work on the observational side has produced the first important detections of atmospheric molecules for the class of transiting exoplanets, important limitations are still present due to the lack of systematic, repeated measurements with optimized instrumentation at both visible (VIS) and near-infrared (NIR) wavelengths. It is thus of fundamental importance to explore quantitatively possible avenues for improvements. In this paper we report initial results of a feasibility study for the prototype of a versatile multi-band imaging system for very high-precision differential photometry that exploits the choice of specifically selected narrow-band filters and novel ideas for the execution of simultaneous VIS and NIR measurements. Starting from the fundamental system requirements driven by the science case at hand, we describe a set of three opto-mechanical solutions for the instrument prototype: 1) a radial distribution of the optical flux using dichroic filters for the wavelength separation and narrow-band filters or liquid crystal filters for the observations; 2) a tree distribution of the optical flux (implying two separate foci), with the same technique used for the beam separation and filtering; 3) an 'exotic' solution consisting of the study of a complete optical system (i.e., a brand-new telescope) that exploits the chromatic errors of a reflecting surface for directing the different wavelengths at different foci. In this paper we present the first results of the study phase for the three solutions, as well as the results of two laboratory prototypes (related to the first two options) that simulate the most critical aspects of the future instrument.

  16. Autographic theme extraction

    USGS Publications Warehouse

    Edson, D.; Colvocoresses, Alden P.

    1973-01-01

    Remote-sensor images, including aerial and space photographs, are generally recorded on film, where the differences in density create the image of the scene. With panchromatic and multiband systems the density differences are recorded in shades of gray. On color or color infrared film, with the emulsion containing dyes sensitive to different wavelengths, a color image is created by a combination of color densities. The colors, however, can be separated by filtering or other techniques, and the color image reduced to monochromatic images in which each of the separated bands is recorded as a function of the gray scale.
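    In digital form the same separation is a one-liner per band: each channel of a color image is written out as its own monochromatic, gray-scale image. A minimal sketch (the file name is hypothetical):

    ```python
    from PIL import Image

    # Reduce a color frame to monochromatic band images: each channel
    # becomes a gray-scale image recording one spectral band's densities.
    img = Image.open("color_frame.tif")
    red, green, blue = img.split()[:3]  # one image per emulsion dye layer
    for name, band in zip(("red", "green", "blue"), (red, green, blue)):
        band.save(f"band_{name}.tif")
    ```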

  17. The Not So Simple Globular Cluster ω Cen. I. Spatial Distribution of the Multiple Stellar Populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calamida, A.; Saha, A.; Strampelli, G.

    2017-04-01

    We present a multi-band photometric catalog of ≈1.7 million cluster members for a field of view of ≈2° × 2° across ω Cen. Photometry is based on images collected with the Dark Energy Camera on the 4 m Blanco telescope and the Advanced Camera for Surveys on the Hubble Space Telescope. The unprecedented photometric accuracy and field coverage allowed us, for the first time, to investigate the spatial distribution of ω Cen multiple populations from the core to the tidal radius, confirming its very complex structure. We found that the frequency of blue main-sequence stars is increasing compared to red main-sequence stars starting from a distance of ≈25′ from the cluster center. Blue main-sequence stars also show a clumpy spatial distribution, with an excess in the northeast quadrant of the cluster pointing toward the direction of the Galactic center. Stars belonging to the reddest and faintest red-giant branch also show a more extended spatial distribution in the outskirts of ω Cen, a region never explored before. Both these stellar sub-populations, according to spectroscopic measurements, are more metal-rich compared to the cluster main stellar population. These findings, once confirmed, make ω Cen the only stellar system currently known where metal-rich stars have a more extended spatial distribution compared to metal-poor stars. Kinematic and chemical abundance measurements are now needed for stars in the external regions of ω Cen to better characterize the properties of these sub-populations.

  18. Multi-band infrared camera systems

    NASA Astrophysics Data System (ADS)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging of up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.
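    Two of the listed signal-processing features, frame averaging and gain/offset compensation, are easy to sketch generically. The code below illustrates those generic operations; it is not the delivered system's implementation:

    ```python
    import numpy as np

    def average_frames(frames):
        """Average up to 16 frames; temporal noise drops as sqrt(N)."""
        return np.mean(frames, axis=0)

    def two_point_correction(raw, dark, flat):
        """Per-pixel gain/offset (non-uniformity) correction.

        dark : frame viewing a cold uniform reference (offset term)
        flat : frame viewing a hot uniform reference (gain term)
        """
        response = flat - dark  # assumed nonzero for working pixels
        gain = response.mean() / response
        return gain * (raw - dark)
    ```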

  19. The Not So Simple Globular Cluster ω Cen. I. Spatial Distribution of the Multiple Stellar Populations

    NASA Astrophysics Data System (ADS)

    Calamida, A.; Strampelli, G.; Rest, A.; Bono, G.; Ferraro, I.; Saha, A.; Iannicola, G.; Scolnic, D.; James, D.; Smith, C.; Zenteno, A.

    2017-04-01

    We present a multi-band photometric catalog of ≈1.7 million cluster members for a field of view of ≈2° × 2° across ω Cen. Photometry is based on images collected with the Dark Energy Camera on the 4 m Blanco telescope and the Advanced Camera for Surveys on the Hubble Space Telescope. The unprecedented photometric accuracy and field coverage allowed us, for the first time, to investigate the spatial distribution of ω Cen multiple populations from the core to the tidal radius, confirming its very complex structure. We found that the frequency of blue main-sequence stars is increasing compared to red main-sequence stars starting from a distance of ≈25′ from the cluster center. Blue main-sequence stars also show a clumpy spatial distribution, with an excess in the northeast quadrant of the cluster pointing toward the direction of the Galactic center. Stars belonging to the reddest and faintest red-giant branch also show a more extended spatial distribution in the outskirts of ω Cen, a region never explored before. Both these stellar sub-populations, according to spectroscopic measurements, are more metal-rich compared to the cluster main stellar population. These findings, once confirmed, make ω Cen the only stellar system currently known where metal-rich stars have a more extended spatial distribution compared to metal-poor stars. Kinematic and chemical abundance measurements are now needed for stars in the external regions of ω Cen to better characterize the properties of these sub-populations. Based on observations made with the Dark Energy Camera (DECam) on the 4 m Blanco telescope (NOAO) under programs 2014A-0327, 2015A-0151, 2016A-0189, PIs: A. Calamida, A. Rest, and on observations made with the NASA/ESA Hubble Space Telescope, obtained by the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  20. Legacy Extragalactic UV Survey (LEGUS) With the Hubble Space Telescope. I. Survey Description

    NASA Astrophysics Data System (ADS)

    Calzetti, D.; Lee, J. C.; Sabbi, E.; Adamo, A.; Smith, L. J.; Andrews, J. E.; Ubeda, L.; Bright, S. N.; Thilker, D.; Aloisi, A.; Brown, T. M.; Chandar, R.; Christian, C.; Cignoni, M.; Clayton, G. C.; da Silva, R.; de Mink, S. E.; Dobbs, C.; Elmegreen, B. G.; Elmegreen, D. M.; Evans, A. S.; Fumagalli, M.; Gallagher, J. S., III; Gouliermis, D. A.; Grebel, E. K.; Herrero, A.; Hunter, D. A.; Johnson, K. E.; Kennicutt, R. C.; Kim, H.; Krumholz, M. R.; Lennon, D.; Levay, K.; Martin, C.; Nair, P.; Nota, A.; Östlin, G.; Pellerin, A.; Prieto, J.; Regan, M. W.; Ryon, J. E.; Schaerer, D.; Schiminovich, D.; Tosi, M.; Van Dyk, S. D.; Walterbos, R.; Whitmore, B. C.; Wofford, A.

    2015-02-01

    The Legacy ExtraGalactic UV Survey (LEGUS) is a Cycle 21 Treasury program on the Hubble Space Telescope aimed at the investigation of star formation and its relation with galactic environment in nearby galaxies, from the scales of individual stars to those of ~kiloparsec-size clustered structures. Five-band imaging from the near-ultraviolet to the I band with the Wide-Field Camera 3 (WFC3), plus parallel optical imaging with the Advanced Camera for Surveys (ACS), is being collected for selected pointings of 50 galaxies within the local 12 Mpc. The filters used for the observations with the WFC3 are F275W(λ2704 Å), F336W(λ3355 Å), F438W(λ4325 Å), F555W(λ5308 Å), and F814W(λ8024 Å); the parallel observations with the ACS use the filters F435W(λ4328 Å), F606W(λ5921 Å), and F814W(λ8057 Å). The multiband images are yielding accurate recent (≲50 Myr) star formation histories from resolved massive stars and the extinction-corrected ages and masses of star clusters and associations. The extensive inventories of massive stars and clustered systems will be used to investigate the spatial and temporal evolution of star formation within galaxies. This will, in turn, inform theories of galaxy evolution and improve the understanding of the physical underpinning of the gas-star formation relation and the nature of star formation at high redshift. This paper describes the survey, its goals and observational strategy, and the initial scientific results. Because LEGUS will provide a reference survey and a foundation for future observations with the James Webb Space Telescope and with ALMA, a large number of data products are planned for delivery to the community. Based on observations obtained with the NASA/ESA Hubble Space Telescope at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA Contract NAS 5-26555.

  1. Image change detection systems, methods, and articles of manufacture

    DOEpatents

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the displaying of the source image and the target image on the display device to enable identification of differences between the source image and the target image.
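    The patent text does not prescribe a particular alignment algorithm for the edge images; phase correlation is one standard choice for recovering the translation between them. A minimal sketch, assuming a pure shift between source and target:

    ```python
    import numpy as np
    from scipy import ndimage

    def edge_image(img):
        """Gradient-magnitude edge image; edges are stable across bands,
        which makes multiband source/target images comparable."""
        g = img.astype(float)
        return np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))

    def align_shift(src_edges, tgt_edges):
        """Integer (dy, dx) shift aligning target to source, found by
        phase correlation of the two edge images."""
        f = np.fft.fft2(src_edges) * np.conj(np.fft.fft2(tgt_edges))
        corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > corr.shape[0] // 2:  # wrap large shifts to negative
            dy -= corr.shape[0]
        if dx > corr.shape[1] // 2:
            dx -= corr.shape[1]
        return dy, dx
    ```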

  2. Advanced Concurrent-Multiband, Multibeam, Aperture-Synthesis with Intelligent Processing for Urban Operation Sensing

    DTIC Science & Technology

    2012-04-09

    ...signatures (RSS), in particular despeckling, superresolution, and convergence rate, for a variety of admissible imaging array sensor... To attain the superresolution performance in the resulting SSP estimates (3.4), we propose the VA-inspired approach [13], [14] to specify the POCS...

  3. Remote Sensing Classification of Grass Seed Cropping Practices in Western Oregon

    USDA-ARS?s Scientific Manuscript database

    Multiband Landsat images and multi-temporal MODIS 16-day composite NDVI were classified into 16 categories representing the primary crop rotation options and stand establishment conditions present in western Oregon grass seed fields. Mismatch in resolution between MODIS and Landsat data was resolved...

  4. A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation

    DTIC Science & Technology

    1992-01-30

    A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation. L. Rudin, S. Osher, G. Koepfler, J.M. Morel. ...experiments with reconnaissance photography, multi-sensor satellite imagery, and medical CT and MRI multi-band data have shown a great practical potential...

  5. Emirates eXploration Imager (EXI) Overview from the Emirates Mars Mission

    NASA Astrophysics Data System (ADS)

    AlShamsi, Maryam; Wolff, Michael; Khoory, Mohammad; AlMheiri, Suhail; Jones, Andrew; Drake, Ginger; Osterloo, Mikki; Reed, Heather

    2017-04-01

    The Emirates eXploration Imager (EXI) instrument is one of three scientific instruments aboard the Emirates Mars Mission (EMM) spacecraft, "Hope". The planned launch window opens in the summer of 2020, with the goal of this United Arab Emirates (UAE) mission being to explore the dynamics of the Martian atmosphere through global spatial sampling on both diurnal and seasonal timescales. A particular focus of the mission is the improvement of our understanding of the global circulation in the lower atmosphere and its connections to the upward transport of energy and to the escape of atmospheric particles from the upper atmosphere. This will be accomplished using three unique and complementary scientific instruments. The subject of this presentation, EXI, is a multi-band camera capable of taking 12-megapixel images, which translates to a spatial resolution of better than 8 km, with well-calibrated radiometric performance. EXI uses a selector wheel mechanism consisting of 6 discrete bandpass filters to sample the optical spectral region: 3 UV bands and 3 visible (RGB) bands. Atmospheric characterization will involve the retrieval of the ice optical depth using the 300-340 nm band, the dust optical depth in the 205-235 nm range, and the column abundance of ozone with a band covering 245-275 nm. Radiometric fidelity is optimized while simplifying the optical design by separating the UV and VIS optical paths. The instrument is being developed jointly by the Laboratory for Atmospheric and Space Physics (LASP), University of Colorado Boulder, USA, and the Mohammed Bin Rashid Space Centre (MBRSC), Dubai, UAE.

  6. GLACiAR: GaLAxy survey Completeness AlgoRithm

    NASA Astrophysics Data System (ADS)

    Carrasco, Daniela; Trenti, Michele; Mutch, Simon; Oesch, Pascal

    2018-05-01

    GLACiAR (GaLAxy survey Completeness AlgoRithm) estimates the completeness and selection functions in galaxy surveys. Tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman Break technique, the code can nevertheless be applied broadly. GLACiAR generates artificial galaxies that follow Sérsic profiles with different indexes and with customizable size, redshift and spectral energy distribution properties, adds them to input images, and measures the recovery rate.
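    The recovery-rate bookkeeping at the heart of such completeness codes fits in a few lines. A schematic sketch, where `inject` and `detect` stand in for the image-insertion and source-extraction stages (hypothetical callables, not GLACiAR's API):

    ```python
    import numpy as np

    def completeness(mags, inject, detect, bins):
        """Injection-recovery completeness versus magnitude.

        mags   : input magnitudes of artificial (e.g. Sersic) sources
        inject : callable adding one artificial source to a survey image
        detect : callable returning True if the source is recovered
        bins   : magnitude bin edges
        """
        recovered = np.array([detect(inject(m)) for m in mags], dtype=bool)
        idx = np.digitize(mags, bins) - 1
        return np.array([recovered[idx == i].mean() if np.any(idx == i)
                         else np.nan for i in range(len(bins) - 1)])
    ```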

  7. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.

  8. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosis of camera performance, in arriving at a preflight imaging strategy, and revision of that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  9. The Role of Thermodynamic Processes in the Evolution of Single and Multi-banding within Winter Storms

    NASA Astrophysics Data System (ADS)

    Ganetis, Sara Anne

    Mesoscale precipitation bands within Northeast U.S. (NEUS) winter storms result in heterogeneous spatial and temporal snowfall. Several studies have provided analyses of snowbands, focusing on larger, meso-beta scale bands with lengths (L) > 200 km, known as single bands. NEUS winter storms can also exhibit multiple bands at meso-beta scale (L < 200 km) with similar spatial orientation; when three or more occur, they are termed multi-bands. The genesis and evolution of multi-bands, however, is less well understood, and unlike for single bands there is no climatological study of multi-bands. In addition, there has been little detailed thermodynamic analysis of snowbands. This dissertation utilizes radar observations, reanalyses, and high-resolution model simulations to explore the thermodynamic evolution of single and multi-bands. Bands are identified within 20 cool season (October-April) NEUS storms. The 110-case dataset was classified using a combination of automated and manual methods into: single band only (SINGLE), multi-bands only (MULTI), both single and multi-bands (BOTH), and non-banded (NONE). Multi-bands occur together with a single band in 55.4% of the times used in this study, without a single band 18.1% of the time, and precipitation exhibits no banded characteristics 23.8% of the time. Most MULTI events occur in the northeast quadrant of a developing cyclone poleward of weak mid-level forcing along a warm front, whereas multi-bands associated with BOTH events mostly occur in the northwest quadrant of mature cyclones associated with strong mid-level frontogenesis and conditional symmetric instability. The non-banded precipitation associated with NONE events occurs in the eastern quadrants of developing and mature cyclones lacking the mid-level forcing to concentrate the precipitation into bands. A high-resolution mesoscale model is used to explore the evolution of single and multi-bands based on two case studies, one of a single band and one of multi-bands. The multi-bands form in response to intermittent mid-level frontogenetical forcing in a conditionally unstable environment. The bands move from their genesis location southeast of the single band northwestward toward the single band in the 700-hPa steering flow. This allows for the formation of new multi-bands within the genesis region, unlike the single band, which remains fixed to a 700-hPa frontogenesis maximum. Latent heating within the band is shown to increase the intensity and duration of single and multi-bands through decreased geopotential height below the heating maximum, which leads to increased convergence within the band.

  10. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
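    The patent describes estimating the change in position from corresponding feature positions refined with range data, without prescribing an estimator; the Kabsch/Procrustes solution is a standard choice for that rigid-motion step. A minimal sketch under that assumption:

    ```python
    import numpy as np

    def rigid_transform(p_prev, p_curr):
        """Least-squares R, t with p_curr ~ R @ p_prev + t (Kabsch).

        p_prev, p_curr : (N, 3) corresponding 3-D feature positions from
        the two camera images, with depths supplied by the range data.
        """
        cp, cc = p_prev.mean(axis=0), p_curr.mean(axis=0)
        H = (p_prev - cp).T @ (p_curr - cc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cc - R @ cp
    ```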

  11. Analysis of remote sensing data for evaluation of vegetation resources

    NASA Technical Reports Server (NTRS)

    1970-01-01

    Research has centered around: (1) completion of a study on the use of remote sensing techniques as an aid to multiple use management; (2) determination of the information transfer at various image resolution levels for wildland areas; and (3) determination of the value of small scale multiband, multidate photography for the analysis of vegetation resources. In addition, a substantial effort was made to upgrade the automatic image classification and spectral signature acquisition capabilities of the laboratory. It was found that: (1) Remote sensing techniques should be useful in multiple use management to provide a first-cut analysis of an area. (2) Imagery with 400-500 feet ground resolvable distance (GRD), such as that expected from ERTS-1, should allow discriminations to be made between woody vegetation, grassland, and water bodies with approximately 80% accuracy. (3) Barley and wheat acreages in Maricopa County, Arizona could be estimated with acceptable accuracies using small scale multiband, multidate photography. Sampling errors for acreages of wheat, barley, small grains (wheat and barley combined), and all cropland were 13%, 11%, 8% and 3% respectively.

  12. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
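    The abstract does not detail the fusion algorithm used; one common pan-sharpening scheme is Brovey-style ratio scaling, sketched below as an illustration of how a Pan band can sharpen co-registered colour or MS bands:

    ```python
    import numpy as np

    def brovey_fusion(pan, ms):
        """Pan-sharpen by scaling each band with the pan/intensity ratio.

        pan : high-resolution panchromatic image, shape (H, W)
        ms  : multispectral bands resampled to the pan grid, (bands, H, W)
        """
        intensity = ms.mean(axis=0)
        return ms * (pan / (intensity + 1e-12))
    ```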

  13. Radar data processing and analysis

    NASA Technical Reports Server (NTRS)

    Ausherman, D.; Larson, R.; Liskow, C.

    1976-01-01

    Digitized four-channel radar images corresponding to particular areas from the Phoenix and Huntington test sites were generated in conjunction with prior experiments performed to collect X- and L-band synthetic aperture radar imagery of these two areas. The methods for generating this imagery are documented. A secondary objective was the investigation of digital processing techniques for extraction of information from the multiband radar image data. Following the digitization, the remaining resources permitted a preliminary machine analysis to be performed on portions of the radar image data. The results, although necessarily limited, are reported.

  14. Excitation-resolved cone-beam x-ray luminescence tomography.

    PubMed

    Liu, Xin; Liao, Qimei; Wang, Hongkai; Yan, Zhuangzhi

    2015-07-01

    Cone-beam x-ray luminescence computed tomography (CB-XLCT), as an emerging imaging technique, plays an important role in in vivo small animal imaging studies. However, CB-XLCT suffers from low-spatial resolution due to the ill-posed nature of reconstruction. We improve the imaging performance of CB-XLCT by using a multiband excitation-resolved imaging scheme combined with principal component analysis. To evaluate the performance of the proposed method, the physical phantom experiment is performed with a custom-made XLCT/XCT imaging system. The experimental results validate the feasibility of the method, where two adjacent nanophosphors (with an edge-to-edge distance of 2.4 mm) can be located.

  15. Mineralogical Mapping of Asteroid Itokawa using Calibrated Hayabusa AMICA images and NIRS Spectrometer Data

    NASA Astrophysics Data System (ADS)

    Le Corre, Lucille; Becker, Kris J.; Reddy, Vishnu; Li, Jian-Yang; Bhatt, Megha

    2016-10-01

    The goal of our work is to restore data from the Hayabusa spacecraft that is available in the Planetary Data System (PDS) Small Bodies Node. More specifically, our objectives are to radiometrically calibrate and photometrically correct AMICA (Asteroid Multi-Band Imaging Camera) images of Itokawa. The existing images archived in the PDS are not in reflectance units and are not corrected for the effects of viewing geometry. AMICA images are processed with the Integrated Software for Imagers and Spectrometers (ISIS) system from the USGS, widely used for planetary image analysis. The processing consists of ingesting the images into ISIS (amica2isis), updating the AMICA start times (sumspice), radiometric calibration (amicacal) including smear correction, applying SPICE ephemerides, adjusting control using Gaskell SUMFILEs (sumspice), projecting individual images (cam2map), and creating global or local mosaics. The amicacal application also has an option to remove pixels corresponding to the polarizing filters on the left side of the image frame, and it will include a correction for the point spread function (PSF). The latest version of the PSF, published by Ishiguro et al. in 2014, includes a correction for the effect of scattered light. This effect is important to correct because it can introduce errors at the 10% level and mostly affects the longer-wavelength filters such as zs and p. The Hayabusa team decided to use the color data for six of the filters for scientific analysis after correcting for the scattered light. We will present calibrated data in I/F for all seven AMICA color filters. All newly implemented ISIS applications and map projections from this work have been or will be distributed to the community via ISIS public releases. We also processed the NIRS spectrometer data, and we will perform photometric modeling, then apply photometric corrections, and finally extract mineralogical parameters. The end results will be the creation of pyroxene chemistry and olivine/pyroxene ratio maps of Itokawa using NIRS and AMICA map products. All the products from this work will be archived on the PDS website. This work was supported by NASA Planetary Missions Data Analysis Program grant NNX13AP27G.

  16. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
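    One stage of such a sensor chain, an MTF applied in the frequency domain followed by shot and read noise, can be sketched compactly. This is a schematic single stage, not the model's actual multi-stage implementation:

    ```python
    import numpy as np

    def sensor_stage(radiance, mtf_sigma, gain, read_noise):
        """Blur with a Gaussian MTF, then add Poisson and Gaussian noise.

        mtf_sigma  : blur std in pixels (MTF = exp(-2*pi^2*sigma^2*f^2))
        gain       : photoelectrons per radiance unit
        read_noise : read-noise std in electrons
        """
        ny, nx = radiance.shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        mtf = np.exp(-2.0 * (np.pi * mtf_sigma) ** 2 * (fx ** 2 + fy ** 2))
        blurred = np.fft.ifft2(np.fft.fft2(radiance) * mtf).real
        electrons = np.random.poisson(np.clip(blurred * gain, 0, None))
        return electrons + np.random.normal(0.0, read_noise, radiance.shape)
    ```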

  17. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  18. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
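    The abstract does not name the transform, but a discrete cosine transform (DCT) coder is the classic choice for this kind of method. As a rough illustration, keeping 12 of 64 coefficients per 8x8 tile corresponds to about the quoted 5.3:1 ratio before entropy coding:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def compress_tile(tile, keep=12):
        """Transform-code one 8x8 tile: 2-D DCT, keep the `keep` largest
        coefficients, zero the rest, inverse-transform."""
        c = dctn(tile, norm="ortho")
        thresh = np.sort(np.abs(c), axis=None)[-keep]
        c[np.abs(c) < thresh] = 0.0
        return idctn(c, norm="ortho")
    ```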

  19. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  20. Megapixel mythology and photospace: estimating photospace for camera phones from large image sets

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror O.; Hertel, Dirk W.

    2008-01-01

    It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by direct measurement of subjective quality, or by photospace-weighting of objective attributes. The population of a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
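    Photospace-weighting itself is a simple expectation over the capture-condition histogram. A minimal sketch of the user-experienced quality described above (the array layout is illustrative):

    ```python
    import numpy as np

    def photospace_quality(freq, quality):
        """Photospace-weighted quality.

        freq    : capture-frequency histogram over (illumination, distance)
                  bins, e.g. shape (n_lux_bins, n_dist_bins)
        quality : measured quality (subjective or objective) per bin
        """
        return np.sum(freq * quality) / np.sum(freq)
    ```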

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballering, Nicholas P.; Su, Kate Y. L.; Rieke, George H.

    We investigate whether varying the dust composition (described by the optical constants) can solve a persistent problem in debris disk modeling: the inability to fit the thermal emission without overpredicting the scattered light. We model five images of the β Pictoris disk: two in scattered light from the Hubble Space Telescope (HST)/Space Telescope Imaging Spectrograph at 0.58 μm and HST/Wide Field Camera 3 (WFC3) at 1.16 μm, and three in thermal emission from Spitzer/Multiband Imaging Photometer for Spitzer (MIPS) at 24 μm, Herschel/PACS at 70 μm, and the Atacama Large Millimeter/submillimeter Array at 870 μm. The WFC3 and MIPS data are published here for the first time. We focus our modeling on the outer part of this disk, consisting of a parent body ring and a halo of small grains. First, we confirm that a model using astronomical silicates cannot simultaneously fit the thermal and scattered light data. Next, we use a simple generic function for the optical constants to show that varying the dust composition can improve the fit substantially. Finally, we model the dust as a mixture of the most plausible debris constituents: astronomical silicates, water ice, organic refractory material, and vacuum. We achieve a good fit to all data sets with grains composed predominantly of silicates and organics, while ice and vacuum are, at most, present in small amounts. This composition is similar to one derived from previous work on the HR 4796A disk. Our model also fits the thermal spectral energy distribution, scattered light colors, and high-resolution mid-IR data from T-ReCS for this disk. Additionally, we show that sub-blowout grains are a necessary component of the halo.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bianchi, Luciana; Efremova, Boryana; Hodge, Paul

    We present a comprehensive study of young stellar populations in six dwarf galaxies in or near the Local Group: Phoenix, Pegasus, Sextans A, Sextans B, WLM, and NGC 6822. Their star-forming regions, selected from GALEX wide-field far-UV imaging, were imaged (at sub-pc resolution) with the WFPC2 camera on board the Hubble Space Telescope (HST) in six bandpasses from far-UV to I to detect and characterize their hot massive star content. This study is part of HST treasury survey program HST-GO-11079; the general data characteristics and reduction procedures are detailed in this paper, and results are presented for the first six galaxies. From a total of 180 HST images, we provide catalogs of the multi-band stellar photometry and derive the physical parameters of massive stars by analyzing it with model-atmosphere colors. We use the results to infer ages, number of massive stars, extinction, and spatial characteristics of the young stellar populations. The hot massive star content varies largely across our galaxy sample, from an inconspicuous presence in Phoenix and Pegasus to the highest relative abundance of young massive stars in Sextans A and WLM. Albeit to a largely varying extent, most galaxies show a very young population (a few Myr, except for Phoenix), and older ones (a few 10⁷ yr in Sextans A, Sextans B, NGC 6822, and WLM, ≈10⁸ yr in Phoenix and Pegasus), suggesting discrete bursts of recent star formation in the mapped regions. The hot massive star content (indicative of the young populations) broadly correlates with the total galaxy stellar mass represented by the integrated optical magnitude, although it varies by a factor of ≈3 between Sextans A, WLM, and Sextans B, which have similar M_V. Extinction properties are also derived.

  3. Magnetic Resonance Imaging Measurement of Transmission of Arterial Pulsation to the Brain on Propranolol Versus Amlodipine.

    PubMed

    Webb, Alastair J S; Rothwell, Peter M

    2016-06-01

    Cerebral arterial pulsatility is associated with leukoaraiosis and depends on central arterial pulsatility and arterial stiffness. The effect of antihypertensive drugs on transmission of central arterial pulsatility to the cerebral circulation is unknown, partly because of limited methods of assessment. In a technique-development pilot study, 10 healthy volunteers were randomized to crossover treatment with amlodipine and propranolol. At baseline and on each drug, we assessed aortic (Sphygmocor) and middle cerebral artery (MCA) pulsatility (TCD, transcranial ultrasound). We also performed whole-brain, 3-tesla multiband blood-oxygen level dependent (BOLD) magnetic resonance imaging (multiband factor 6, repetition time = 0.43 s), concurrent with a novel method of continuous noninvasive blood pressure monitoring. Drug effects on relationships between cardiac-cycle variation in blood pressure and BOLD imaging were determined (fMRI Expert Analysis Tool, fMRIB Software Library [FEAT-FSL]). Aortic pulsatility was similar on amlodipine (27.3 mm Hg) and propranolol (27.9 mm Hg, P_diff = 0.33), while MCA pulsatility increased nonsignificantly more from baseline on propranolol (+6%; P = 0.09) than on amlodipine (+1.5%; P = 0.58). On magnetic resonance imaging, cardiac-frequency blood pressure variations were found to be significantly more strongly associated with the BOLD signal on propranolol than on amlodipine. We piloted a novel method of assessment of arterial pulsatility with concurrent high-frequency BOLD magnetic resonance imaging and noninvasive blood pressure monitoring. This method was able to identify greater transmission of aortic pulsation on propranolol than on amlodipine, which warrants further investigation. © 2016 American Heart Association, Inc.

  4. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  5. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
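    The pixel-array noise examined here is the basis of what is now called PRNU (photo-response non-uniformity) camera fingerprinting. A minimal sketch of that general idea, not the authors' exact procedure:

    ```python
    import numpy as np
    from scipy import ndimage

    def noise_residual(img, sigma=2.0):
        """Image minus a denoised copy; averaging residuals over many
        images from one camera estimates its pixel-array fingerprint."""
        g = img.astype(float)
        return g - ndimage.gaussian_filter(g, sigma)

    def same_camera_score(residual, fingerprint):
        """Normalized correlation; high values suggest the image was
        made with the camera that produced the fingerprint."""
        r = residual - residual.mean()
        f = fingerprint - fingerprint.mean()
        return np.sum(r * f) / np.sqrt(np.sum(r ** 2) * np.sum(f ** 2))
    ```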

  6. MULTIBAND OPTICAL OBSERVATION OF THE P/2010 A2 DUST TAIL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Junhan; Ishiguro, Masateru; Hanayama, Hidekazu

    2012-02-10

    An inner main-belt asteroid, P/2010 A2, was discovered on 2010 January 6. Based on its orbital elements, the asteroid is considered to belong to the Flora collisional family, where S-type asteroids are common, while showing a comet-like dust tail. Although analysis of images taken by the Hubble Space Telescope and the Rosetta spacecraft suggested that the dust tail resulted from a recent head-on collision between asteroids, an alternative idea of ice sublimation was suggested based on morphological fitting of ground-based images. Here, we report a multiband observation of P/2010 A2 made in 2010 January with a 105 cm telescope at the Ishigakijima Astronomical Observatory. Three broadband filters, g', R_c, and I_c, were employed for the observation. The unique multiband data reveal that the reflectance spectrum of the P/2010 A2 dust tail resembles that of an Sq-type asteroid or that of ordinary chondrites rather than that of an S-type asteroid. Due to the large error of the measurement, the reflectance spectrum also resembles the spectra of C-type asteroids, even though C-type asteroids are uncommon in the Flora family. The reflectances relative to the g' band (470 nm) are 1.096 ± 0.046 at the R_c band (650 nm) and 1.131 ± 0.061 at the I_c band (800 nm). We hypothesize that the parent body of P/2010 A2 was originally S-type but was shattered upon collision, scattering fresh chondritic particles from the interior and thus forming the dust tail.
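
    For context, the conversion from measured colors to reflectance relative to the g' band takes the standard form below (our notation, not quoted from the paper), dividing the object's flux ratio by the solar one:

$$
\frac{R_\lambda}{R_{g'}} = 10^{-0.4\left[(m_\lambda - m_{g'}) - (m_\lambda - m_{g'})_{\odot}\right]}
$$

    A redder-than-solar color thus gives a relative reflectance above unity, as measured here at R_c and I_c.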

  7. Target Characterization and Follow-Up Observations in Support of the Kepler Mission

    NASA Technical Reports Server (NTRS)

    Latham, David W.

    2003-01-01

    A variety of experiments were carried out to investigate the number and characteristics of the stars to be included in the Kepler Input Catalog. One result of this work was the proposal that the 2MASS Catalog of infrared astrometry and photometry be used as the primary source for the initial selection of candidate target stars, because this would naturally decrease the number of unsuitable hot blue stars and increase the number of desirable solar-type dwarf stars. Another advantage of the 2MASS catalog is that the stellar positions have more than adequate astrometric accuracy for Kepler target selection. The original plan reported in the Concept Study Report was to use the parallaxes and multi-band photometry from the FAME mission to provide the information needed for reliable separation of giants and dwarfs. As a result of NASA's withdrawal of support for FAME, an alternate approach was needed. In November 2002 we proposed to the Kepler Science Team that a ground-based multi-band photometric survey could help alleviate the loss of the FAME data. The Science Team supported this proposal strongly, and we undertook a review of possible facilities for such a survey. We concluded that the SAO's 4Shooter CCD camera on the 1.2-m telescope at the Whipple Observatory on Mount Hopkins, Arizona, showed promise for this work.

  8. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method on raw data from Lytro. The experiments show that our method is more automated than previously published methods.
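
    A minimal sketch of the center-finding stage using basic morphology; the thresholds and structuring choices are illustrative, and the paper's recognition and rearrangement steps are more elaborate.

```python
# Locate microlens image centers on a plenoptic white image via morphology.
import numpy as np
from scipy import ndimage

white = np.random.rand(400, 600)                   # placeholder white image
mask = white > np.percentile(white, 60)            # keep the bright microlens spots
mask = ndimage.binary_opening(mask, iterations=2)  # remove small speckle
labels, n = ndimage.label(mask)                    # connected components
centers = np.array(ndimage.center_of_mass(white, labels, range(1, n + 1)))

# Sorting centers by row then column recovers the grid arrangement.
order = np.lexsort((centers[:, 1], centers[:, 0]))
centers = centers[order]
```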

  9. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
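
    For reference, frequency-domain FLIM estimates lifetimes from the measured phase shift and modulation depth via the textbook relations below (the numbers are example values, not MEM-FLIM measurements).

```python
# Textbook frequency-domain lifetime estimates from phase and modulation.
import numpy as np

f_mod = 25e6                      # assumed modulation frequency, Hz
omega = 2 * np.pi * f_mod
phi = np.deg2rad(32.0)            # measured phase shift (example value)
m = 0.7                           # measured modulation depth (example value)

tau_phase = np.tan(phi) / omega               # lifetime from the phase shift
tau_mod = np.sqrt(1.0 / m**2 - 1.0) / omega   # lifetime from the modulation depth
print(tau_phase * 1e9, tau_mod * 1e9)         # in nanoseconds
```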

  10. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause its recomputed hash to be totally different from the secure hash.
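
    A present-day sketch of the hash-then-sign scheme using off-the-shelf RSA primitives from the Python `cryptography` package; this is a stand-in for the camera's embedded implementation, and the key size and padding are our choices.

```python
# Sign-then-verify sketch: hash the image file, sign with the private key,
# verify later with the public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

image_bytes = b"...raw image file contents..."      # placeholder payload
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# In-camera: hash the image file and encrypt the hash with the private key.
signature = private_key.sign(image_bytes, pss, hashes.SHA256())

# Later: anyone holding the public key can check the file is unaltered;
# verify() raises InvalidSignature if even one bit of the file changed.
public_key.verify(signature, image_bytes, pss, hashes.SHA256())
```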

  11. ASTER First Views of Red Sea, Ethiopia - Thermal-Infrared TIR Image monochrome

    NASA Image and Video Library

    2000-03-11

    ASTER succeeded in acquiring this image at night, which is something the Visible/Near Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne multi-band TIR sensor in history. Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m. http://photojournal.jpl.nasa.gov/catalog/PIA02452

  12. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
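
    A toy rendition of the backward-tracing loop; the geometry is heavily simplified, the plane list is assumed sorted near-to-far, and the textures are arbitrary callables rather than the authors' object models.

```python
# Toy backward trace: average rays from one sensor pixel over pupil samples,
# taking the first (nearest) planar object each ray hits.
import numpy as np

def render_pixel(pixel_xy, pupil_samples, planes):
    """Average rays traced from one sensor pixel back through pupil samples."""
    acc = 0.0
    for pupil_xy in pupil_samples:
        # Each (pixel, pupil-sample) pair defines one ray into object space;
        # walk the depth-sorted planes and take the first hit.
        for depth, texture in planes:
            u = pixel_xy[0] + depth * (pupil_xy[0] - pixel_xy[0])
            v = pixel_xy[1] + depth * (pupil_xy[1] - pixel_xy[1])
            acc += texture(u, v)
            break
    return acc / len(pupil_samples)

pupil = [(x, y) for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)]  # 3x3 sampling
planes = [(0.5, lambda u, v: np.cos(3 * u) ** 2)]    # one textured plane
print(render_pixel((0.2, 0.1), pupil, planes))
```

    Repeating this per sensor pixel yields a synthetic plenoptic image against which reconstruction algorithms can be scored.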

  13. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitive CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescent viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitive camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and endoscopic images at each setting were taken for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the image quality of the two cameras was comparable. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescent-activated organs. The ultrahigh sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination in addition to fluorescent images under high illumination in the field of laparoscopic surgery.

  14. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively in the domain of image forensics and classifies ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  16. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  17. Analysis of lithology: Vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
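
    A minimal sketch of the linear (checkerboard) mixing model and its inversion by nonnegative least squares; the endmember spectra below are invented for illustration, not taken from the study.

```python
# Linear spectral unmixing: pixel = fractions @ endmembers, solved with NNLS.
import numpy as np
from scipy.optimize import nnls

rock = np.array([0.30, 0.35, 0.40, 0.45])   # hypothetical 4-band rock spectrum
veg = np.array([0.05, 0.08, 0.45, 0.30])    # hypothetical vegetation spectrum
E = np.column_stack([rock, veg])            # endmember matrix (bands x members)

pixel = 0.7 * rock + 0.3 * veg              # simulated mixed-pixel spectrum
fractions, resid = nnls(E, pixel)           # recover the nonnegative fractions
print(fractions)                            # ~[0.7, 0.3]
```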

  18. ASTER First Views of Rift Valley, Ethiopia - Thermal-Infrared TIR Image color

    NASA Image and Video Library

    2000-03-11

    This image is a color composite covering the Rift Valley inland area of Ethiopia (south of the region shown in PIA02452). The color differences in this image reflect the distribution of rocks with different amounts of silicon dioxide. It is inferred that the whitish area is covered with basalt and that the pinkish area in the center contains andesite. This is the first spaceborne multi-band TIR image in history that enables geologists to distinguish between rocks with similar compositions. Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m. http://photojournal.jpl.nasa.gov/catalog/PIA02453

  19. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  20. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
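
    A sketch of the preferred strategy using OpenCV stand-ins: map each camera's exposure to relative radiant power with an assumed inverse response, then match features and fit a homography. The filenames, exposure times and response curve are placeholders, not the authors' calibration.

```python
# Exposure-robust alignment: undo response/exposure, then feature matching.
import cv2
import numpy as np

def to_radiance(img, exposure_s, inv_response):
    """Undo the camera response and exposure to get relative radiant power."""
    return inv_response[img] / exposure_s

inv_response = np.linspace(0.0, 1.0, 256) ** 2.2    # assumed inverse response LUT
img_a = cv2.imread("cam_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
img_b = cv2.imread("cam_b.png", cv2.IMREAD_GRAYSCALE)
rad_a = cv2.normalize(to_radiance(img_a, 0.01, inv_response), None, 0, 255,
                      cv2.NORM_MINMAX).astype(np.uint8)
rad_b = cv2.normalize(to_radiance(img_b, 0.10, inv_response), None, 0, 255,
                      cv2.NORM_MINMAX).astype(np.uint8)

orb = cv2.ORB_create()                               # feature detector/descriptor
kp_a, des_a = orb.detectAndCompute(rad_a, None)
kp_b, des_b = orb.detectAndCompute(rad_b, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # alignment transform
```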

  1. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028

  2. VizieR Online Data Catalog: KiDS-ESO-DR2 multi-band source catalog (de Jong+, 2015)

    NASA Astrophysics Data System (ADS)

    de Jong, J. T. A.; Verdoes Kleijn, G. A.; Boxhoorn, D. R.; Buddelmeijer, H.; Capaccioli, M.; Getman, F.; Grado, A.; Helmich, E.; Huang, Z.; Irisarri, N.; Kuijken, K.; La Barbera, F.; McFarland, J. P.; Napolitano, N. R.; Radovich, M.; Sikkema, G.; Valentijn, E. A.; Begeman, K. G.; Brescia, M.; Cavuoti, S.; Choi, A.; Cordes, O.-M.; Covone, G.; Dall'Ora, M.; Hildebrandt, H.; Longo, G.; Nakajima, R.; Paolillo, M.; Puddu, E.; Rifatto, A.; Tortora, C.; van Uitert, E.; Buddendiek, A.; Harnois-Deraps, J.; Erben, T.; Eriksen, M. B.; Heymans, C.; Hoekstra, H.; Joachimi, B.; Kitching, T. D.; Klaes, D.; Koopmans, L. V. E.; Koehlinger, F.; Roy, N.; Sifon, C.; Schneider, P.; Sutherland, W. J.; Viola, M.; Vriend, W.-J.

    2016-10-01

    KiDS data releases consist of ~1 square degree tiles that have been successfully observed in all four survey filters (u,g,r,i). The second data release (KiDS-ESO-DR2) was available in February 2015 and contains imaging data, masks and single-band source lists for all tiles observed in all four filters for which observations were completed during the second year of regular operations (1 October 2012 to 30 September 2013), a total of 98 tiles. Apart from the data products mentioned above, KiDS-ESO-DR2 also provides a multi-band source catalogue based on the combined set of 148 tiles released in the first two data releases. A complete list of all tiles with data quality parameters can be found on the KiDS website: http://kids.strw.leidenuniv.nl/DR2/ (1 data file).

  3. Navigation and Remote Sensing Payloads and Methods of the Sarvant Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Molina, P.; Fortuny, P.; Colomina, I.; Remy, M.; Macedo, K. A. C.; Zúnigo, Y. R. C.; Vaz, E.; Luebeck, D.; Moreira, J.; Blázquez, M.

    2013-08-01

    In a large number of scenarios and missions, the technical, operational and economical advantages of UAS-based photogrammetry and remote sensing over traditional airborne and satellite platforms are apparent. Airborne Synthetic Aperture Radar (SAR) or combined optical/SAR operation in remote areas might be a case of a typical "dull, dirty, dangerous" mission suitable for unmanned operation - in harsh environments such as the rain forest areas of Brazil, topographic mapping of small to medium, sparsely inhabited remote areas with UAS-based photogrammetry and remote sensing seems to be a reasonable paradigm. An example of such a system is the SARVANT platform, a fixed-wing aerial vehicle with a six-meter wingspan and a maximum take-off weight of 140 kilograms, able to carry a fifty-kilogram payload. SARVANT includes a multi-band (X and P) interferometric SAR payload, as the P-band enables topographic mapping of densely tree-covered areas, providing terrain profile information. Moreover, the combination of X- and P-band measurements can be used to extract biomass estimates. Finally, the long-term plan is to incorporate surveying capabilities at optical bands as well and to deliver real-time imagery to a control station. This paper focuses on the remote-sensing concept of SARVANT, composed of the aforementioned SAR sensor and envisioning a double optical camera configuration to cover the visible and near-infrared spectrum. The flexibility in the optical payload selection, ranging from professional, medium-format cameras to mass-market, small-format cameras, is discussed as a driver in the SARVANT development. The paper also focuses on the navigation and orientation payloads, including the sensors (IMU and GNSS), the measurement acquisition system and the proposed navigation and orientation methods. The latter include the Fast AT procedure, which performs close to traditional Integrated Sensor Orientation (ISO) and better than Direct Sensor Orientation (DiSO), and has the advantage of not requiring the massive image processing load for the generation of tie points, although it does require some Ground Control Points (GCPs). This technique is further supported by the availability of a high quality INS/GNSS trajectory, motivated by single-pass and repeat-pass SAR interferometry requirements.

  4. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90∘ field of view in monochrome mode and 60∘ field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793000 NAC and 207000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.

  5. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO resolution standards, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  6. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  7. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  8. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device's resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or using adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. In this paper, the details of our modified plenoptic cameras and image processing algorithms are introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.

  9. Lander and rover exploration on the lunar surface: A study for SELENE-B mission

    NASA Astrophysics Data System (ADS)

    Selene-B Rover Science Group; Sasaki, S.; Sugihara, T.; Saiki, K.; Akiyama, H.; Ohtake, M.; Takeda, H.; Hasebe, N.; Kobayashi, M.; Haruyama, J.; Shirai, K.; Kato, M.; Kubota, T.; Kunii, Y.; Kuroda, Y.

    The SELENE-B, a lunar landing mission, has been studied in Japan, with a proposed scientific investigation plan using a robotic rover and a static lander. The main theme is to clarify lunar origin and evolution, especially the early crustal formation process, probably from the ancient magma ocean. The highest priority is placed on direct in situ geology at a crater central peak, "a window to the interior", where subcrustal materials are exposed and can be accessed directly without drilling. As in the preliminary study introduced by Sasaki et al. [Sasaki, S., Kubota, T., Okada, T. et al. Scientific exploration of lunar surface using a rover in Japanese future lunar mission. Adv. Space Res. 30, 1921-1926, 2002.], the rover and lander are used jointly: detailed analyses of the samples collected by the rover are conducted at the lander. Primary scientific instruments are a multi-band stereo imager, a gamma-ray spectrometer, and a sampling tool on the rover, and a multi-spectral telescopic imager, a sampling system, and a sample analysis package with an X-ray spectrometer/diffractometer, a multi-band microscope, and a sample cleaning and grinding device on the lander.

  10. The design of common aperture and multi-band optical system based on day light telescope

    NASA Astrophysics Data System (ADS)

    Chen, Jiao; Wang, Ling; Zhang, Bo; Teng, Guoqi; Wang, Meng

    2017-02-01

    With the development of electro-optical weapon systems, common-path, multi-sensor techniques are widely used and have become a trend. To meet the miniaturization and lightweight requirements of electro-optical stabilized sighting systems, a common-aperture daylight telescope/television viewing-aim system/laser rangefinder has been designed in this paper, adopting an integrated multi-band, common-aperture scheme. The daylight telescope has a magnification of 8, a field of view of 6°, and an exit-pupil distance of more than 20 mm. For a 1/3" CCD, a television viewing-aim system with a focal length of 156 mm has been completed. In addition, a laser ranging system with a 10 km ranging distance has been designed. The daylight telescope serves as the optical reference for correcting the optical axis. By sharing the objective, erecting the image with an inverting prism, and coating a beam-splitting film on the inclined face of the cube prism, the system has been applied to an electro-optical weapon system, providing high-resolution imaging and high-precision ranging.

  11. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
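
    A toy sketch of the classification stage only, using scikit-learn's SVC on invented distortion parameters; the paper's aberration-measurement step is not reproduced here.

```python
# SVM classification of cameras from radial distortion features.
import numpy as np
from sklearn.svm import SVC

# Each row: radial distortion parameters (k1, k2) measured from one image
# (values invented for illustration).
X_train = np.array([[0.12, -0.03], [0.11, -0.02], [0.31, 0.05], [0.33, 0.04]])
y_train = np.array([0, 0, 1, 1])            # labels: which camera took the image

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(clf.predict([[0.30, 0.05]]))          # -> camera 1
```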

  12. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
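
    For the geometric core, a textbook linear (DLT) triangulation from two calibrated views makes the idea concrete; this is a standard method, not the article's CAD-overlay procedure.

```python
# Linear (DLT) triangulation of one point from two 3x4 projection matrices.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Solve for the 3D point seen at image coordinates x1 and x2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)               # null-space solution
    X = Vt[-1]
    return X[:3] / X[3]                       # homogeneous -> Euclidean

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]) # camera 2, shifted
X = np.array([0.3, -0.2, 5.0, 1.0])                           # a world point
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, x1, x2))            # recovers (0.3, -0.2, 5.0)
```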

  13. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation data, which affect space camera imaging, were prepared and their impact on imaging analyzed. Because TDI CCD integration requires accurate matching of the image-motion velocity, a statistical experimental method, the Monte Carlo method, was used to calculate the distribution histogram of the Earth's elevation within an image-motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude and orbital inclination changes. Elevation information of the Earth's surface was then read from SRTM, and the elevation map produced for aerospace electronic cameras was compressed and spliced so that elevation data can be fetched from flash memory according to the latitude and longitude of the shooting point. When a query falls between two stored grid points, linear interpolation is used; linear interpolation accommodates rugged mountain and hill terrain well. Finally, a deviation framework and camera controller were used to test the effect of deviation-angle errors, and a TDI CCD camera simulation system based on a point-to-imaging-point correspondence model was used to analyze the imaging MTF and cross-correlation similarity; the simulation accumulates the horizontal and vertical pixel offsets that exceed the matched image motion to simulate imaging when satellite attitude stability changes. The approach is practical: it effectively limits the camera's memory footprint while meeting the TDI CCD camera's precision requirements for image-motion velocity matching and imaging.
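
    The elevation lookup with interpolation between grid posts might look like the following (a bilinear version on an illustrative grid; the flight software's tiling and flash access are not modeled).

```python
# Bilinear interpolation of a DEM at fractional grid coordinates.
import numpy as np

def elevation_at(dem, lat_idx, lon_idx):
    """Interpolate elevation between the four surrounding grid posts."""
    i0, j0 = int(lat_idx), int(lon_idx)
    di, dj = lat_idx - i0, lon_idx - j0
    return ((1 - di) * (1 - dj) * dem[i0, j0] + (1 - di) * dj * dem[i0, j0 + 1]
            + di * (1 - dj) * dem[i0 + 1, j0] + di * dj * dem[i0 + 1, j0 + 1])

dem = np.array([[100.0, 120.0], [110.0, 150.0]])   # 2x2 elevation grid, meters
print(elevation_at(dem, 0.5, 0.5))                 # -> 120.0
```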

  14. Behavioral modeling and digital compensation of nonlinearity in DFB lasers for multi-band directly modulated radio-over-fiber systems

    NASA Astrophysics Data System (ADS)

    Li, Jianqiang; Yin, Chunjing; Chen, Hao; Yin, Feifei; Dai, Yitang; Xu, Kun

    2014-11-01

    The envisioned C-RAN concept in the wireless communication sector relies on distributed antenna systems (DAS), which consist of a central unit (CU), multiple remote antenna units (RAUs) and the fronthaul links between them. As legacy and emerging wireless communication standards will coexist for a long time, the fronthaul links are preferred to carry multi-band, multi-standard wireless signals. Directly modulated radio-over-fiber (ROF) links can serve as a low-cost option for fronthaul connections conveying multi-band wireless signals. However, directly modulated ROF systems often suffer from the inherent nonlinearities of directly modulated lasers. Unlike ROF systems working in single-band mode, the modulation nonlinearities in multi-band ROF systems can result in both in-band and cross-band nonlinear distortions. In order to address this issue, we have recently investigated the multi-band nonlinear behavior of directly modulated DFB lasers based on a multi-dimensional memory polynomial model. Based on this model, an efficient multi-dimensional baseband digital predistortion technique was developed and experimentally demonstrated for linearization of multi-band directly modulated ROF systems.
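
    For reference, the single-band memory polynomial that such behavioral models generalize has the standard form below (our rendering; the paper's multi-dimensional version adds cross-terms between the band envelopes):

$$
y(n) = \sum_{k=1}^{K} \sum_{q=0}^{Q} a_{kq}\, x(n-q)\, \left| x(n-q) \right|^{k-1}
$$

    The predistorter is the same structure fitted to invert the laser's response and applied to the baseband signal before modulation.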

  15. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device, not the printer. When printing images from a camera, the user needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The corrected image exhibited an improved tonal scale and was visually more pleasing than one captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better looking image. We also discuss the problems we encountered in implementing this technique.
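
    Applying such per-channel transfer curves amounts to a lookup-table mapping; a minimal sketch follows (the control points are invented, and np.interp stands in for the smoothed curves exported from Photoshop).

```python
# Per-channel tone curves as lookup tables over 8-bit pixel values.
import numpy as np

def apply_curve(channel, control_in, control_out):
    """Map pixel values through a smooth curve defined by control points."""
    return np.interp(channel, control_in, control_out).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # placeholder RGB
# Example curve: lift the shadows slightly while protecting the highlights.
cin = np.array([0, 64, 128, 192, 255])
cout = np.array([0, 80, 140, 200, 255])
corrected = np.stack([apply_curve(img[..., c], cin, cout) for c in range(3)],
                     axis=-1)
```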

  16. Advancing RF pulse design using an open-competition format: Report from the 2015 ISMRM challenge.

    PubMed

    Grissom, William A; Setsompop, Kawin; Hurley, Samuel A; Tsao, Jeffrey; Velikina, Julia V; Samsonov, Alexey A

    2017-10-01

    To advance the best solutions to two important RF pulse design problems with an open head-to-head competition. Two sub-challenges were formulated in which contestants competed to design the shortest simultaneous multislice (SMS) refocusing pulses and slice-selective parallel transmission (pTx) excitation pulses, subject to realistic hardware and safety constraints. Short refocusing pulses are needed for spin echo SMS imaging at high multiband factors, and short slice-selective pTx pulses are needed for multislice imaging in ultra-high field MRI. Each sub-challenge comprised two phases, in which the first phase posed problems with a low barrier of entry, and the second phase encouraged solutions that performed well in general. The Challenge ran from October 2015 to May 2016. The pTx Challenge winners developed a spokes pulse design method that combined variable-rate selective excitation with an efficient method to enforce SAR constraints, which achieved 10.6 times shorter pulse durations than conventional approaches. The SMS Challenge winners developed a time-optimal control multiband pulse design algorithm that achieved 5.1 times shorter pulse durations than conventional approaches. The Challenge led to rapid step improvements in solutions to significant problems in RF excitation for SMS imaging and ultra-high field MRI. Magn Reson Med 78:1352-1361, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  17. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  18. You are here: Earth as seen from Mars

    NASA Image and Video Library

    2004-03-11

    This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. The inset shows a combination of four panoramic camera images zoomed in on Earth. The arrow points to Earth. Earth was too faint to be detected in images taken with the panoramic camera's color filters. http://photojournal.jpl.nasa.gov/catalog/PIA05547

  19. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are broadly used in electro-optical tracking, electro-optical measurement, fire control and electro-optical countermeasure fields, but the output timing of most IR cameras applied in practice is complex, and the timing documents supplied by the manufacturer are not detailed. Because continuous image-transmission and image-processing systems need the detailed timing of the IR camera, a timing-measurement system for IR cameras was designed and a detailed measurement procedure for the applied IR camera was carried out. FPGA programming combined with online observation using the SignalTap tool was applied in the measurement system, the precise timing of the IR camera's output signal was obtained, and the detailed documentation was supplied to the continuous image-transmission system, image-processing system, and so on. The measurement system comprises a Camera Link input interface, an LVDS input interface, the FPGA, and a Camera Link output interface, of which the FPGA is the key component. Both Camera Link and LVDS video inputs are accepted, and because image-processing and image-memory cards usually take Camera Link input, the system's output interface was designed as Camera Link; the system thus performs interface conversion for some cameras in addition to timing measurement. Inside the FPGA, the timing-measurement program, pixel-clock adjustment, SignalTap file configuration and SignalTap online observation are integrated to realize precise measurement of the IR camera: the measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines in one frame and pixels in one line, and determines the line offset and row offset of the image. Aimed at the complex timing of IR camera output signals, the system accurately measured the timing of the cameras applied in the project, supplied detailed timing documentation to downstream systems such as the image-processing and image-transmission systems, and gave the concrete parameters of fval, lval, pixclk, line offset and row offset. Experiments show that the measurement system obtains precise timing results and works stably, laying a foundation for the downstream systems.

  20. Mars Descent Imager for Curiosity

    NASA Image and Video Library

    2010-07-19

    A pocketknife provides scale for this image of the Mars Descent Imager camera; the camera will fly on the Curiosity rover of NASA Mars Science Laboratory mission. Malin Space Science Systems, San Diego, Calif., supplied the camera for the mission.

  1. New generation of meteorology cameras

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. The development of this new generation of weather-monitoring cameras responds to the demand for monitoring sudden weather changes. The new WILLIAM cameras process acquired image data immediately and can issue warnings of sudden torrential rain to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric pressure sensors. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, together with a spatially-variant model of imaging system aberrations based on Zernike polynomials.
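
    A small sketch of a spatially-variant aberration model built from low-order Zernike terms; the polynomial forms are the standard unnormalized ones, while the field dependence and coefficients are illustrative, not WILLIAM's fitted model.

```python
# Low-order Zernike wavefront with field-dependent (spatially variant) weights.
import numpy as np

def zernike_loworder(rho, theta, coeffs):
    """Wavefront from defocus, astigmatism and coma Zernike terms."""
    z_defocus = 2 * rho**2 - 1
    z_astig = rho**2 * np.cos(2 * theta)
    z_coma = (3 * rho**3 - 2 * rho) * np.cos(theta)
    return coeffs[0] * z_defocus + coeffs[1] * z_astig + coeffs[2] * z_coma

# Field-dependent coefficients make the model spatially variant across the
# all-sky image: here they simply grow linearly with field radius r_field.
r_field = 0.8
coeffs = np.array([0.1, 0.05, 0.02]) * r_field
print(zernike_loworder(0.5, np.pi / 4, coeffs))
```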

  2. Solar System Observing with the Space Infrared Telescope Facility (SIRTF)

    NASA Technical Reports Server (NTRS)

    Cleve, J. Van; Meadows, V. S.; Stansberry, J.

    2003-01-01

    SIRTF is NASA's Space Infrared Telescope Facility. Currently planned for launch on 15 Apr 2003, it is the final element in NASA's Great Observatories Program. SIRTF has an 85 cm diameter f/12 lightweight beryllium telescope, cooled to less than 5.5 K. It is diffraction-limited at 6.5 microns, and has wavelength coverage from 3-180 microns. Its estimated lifetime (limited by cryogen) is 2.5 years at minimum, with a goal of 5+ years. SIRTF has three instruments: IRAC, IRS, and MIPS. IRAC (InfraRed Array Camera) provides simultaneous images at wavelengths of 3.6, 4.5, 5.8, and 8.0 microns. IRS (InfraRed Spectrograph) has 4 modules providing low-resolution (R=60-120) spectra from 5.3 to 40 microns, high-resolution (R=600) spectra from 10 to 37 microns, and an autonomous target acquisition system (PeakUp) which includes small-field imaging at 15 microns. MIPS (Multiband Imaging Photometer for SIRTF) does imaging photometry at 24, 70, and 160 microns and low-resolution (R=15-25) spectroscopy (SED) between 55 and 96 microns. The SIRTF Guaranteed Time Observers (GTOs) are planning to observe Outer Solar System satellites and planets, extinct comets and low-albedo asteroids, Centaurs and Kuiper Belt Objects, cometary dust trails, and a few active short-period comets. The GTO programs are listed in detail in the SIRTF Reserved Observations Catalog (ROC). We would like to emphasize that there remain many interesting subjects for the General Observers (GO). Proposal success for the planetary observer community in the first SIRTF GO proposal cycle (GO-1) determines expectations for future GO calls and Solar System use of SIRTF, so we would like to promote a strong set of planetary GO-1 proposals. Towards that end, we present this poster, and we will convene a Solar System GO workshop 3.5 months after launch.

  3. M33: A Close Neighbor Reveals its True Size and Splendor (3-color composite)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    One of our closest galactic neighbors shows its awesome beauty in this new image from NASA's Spitzer Space Telescope. M33, also known as the Triangulum Galaxy, is a member of what's known as our Local Group of galaxies. Along with our own Milky Way, the galaxies of this group travel together through the universe, as they are gravitationally bound. In fact, M33 is one of the few galaxies that is moving toward the Milky Way despite the fact that space itself is expanding, causing most galaxies in the universe to grow farther and farther apart.

    When viewed with Spitzer's infrared eyes, this elegant spiral galaxy sparkles with color and detail. Stars appear as glistening blue gems (several of which are actually foreground stars in our own galaxy), while dust rich in organic molecules glows green. The diffuse orange-red glowing areas indicate star-forming regions, while small red flecks outside the spiral disk of M33 are most likely distant background galaxies. But not only is this new image beautiful, it also shows M33 to be surprisingly large, bigger than its visible-light appearance would suggest. With its ability to detect cold, dark dust, Spitzer can see emission from cooler material well beyond the visible range of M33's disk. Exactly how this cold material moved outward from the galaxy is still a mystery, but winds from giant stars or supernovas may be responsible.

    M33 is located about 2.9 million light-years away in the constellation Triangulum. This is a three-color composite image showing infrared observations from two of Spitzer's instruments. Blue represents combined 3.6- and 4.5-micron light and green shows light of 8 microns, both captured by Spitzer's infrared array camera. Red is 24-micron light detected by Spitzer's multiband imaging photometer.

  4. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, disaster response requires rapid access to large data volumes, substantial storage space, and high-performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation covers work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source process code on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory- and processor-enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
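
    The band-math products named above are standard; a minimal sketch follows, in which the band variable names and the epsilon guard against division by zero are illustrative assumptions.

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
            nir, red = nir.astype(float), red.astype(float)
            return (nir - red) / (nir + red + eps)

        def ndmi(nir, swir, eps=1e-9):
            """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)."""
            nir, swir = nir.astype(float), swir.astype(float)
            return (nir - swir) / (nir + swir + eps)

        # "Band stacking": combine source bands and derived indices into one array.
        nir, red, swir = (np.random.rand(64, 64) for _ in range(3))
        stack = np.dstack([red, nir, swir, ndvi(nir, red), ndmi(nir, swir)])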

  5. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow key ecological features and moments to be extracted from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on these, discuss the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.
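
    The abstract does not name the color signal used; a common choice in phenology-camera studies is the green chromatic coordinate (GCC), sketched here over a fixed region of interest. The ROI bounds and the interface are illustrative assumptions, not the authors' processing chain.

        import numpy as np

        def gcc(rgb_image, roi=(slice(100, 400), slice(200, 600))):
            """Mean green chromatic coordinate G / (R + G + B) over an ROI."""
            r = rgb_image[roi][..., 0].astype(float)
            g = rgb_image[roi][..., 1].astype(float)
            b = rgb_image[roi][..., 2].astype(float)
            return float(np.mean(g / (r + g + b + 1e-9)))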

  6. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.

  7. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
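
    The focal-stack algorithm itself is not spelled out in the abstract; a generic extended-depth-of-field merge picks, per pixel, the slice with the strongest local contrast. A minimal sketch under that assumption (not necessarily the camera's own algorithm):

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def merge_focal_stack(stack):
            """stack: (n_slices, H, W) grayscale focal stack -> (H, W) EDoF image."""
            # Local sharpness: smoothed squared Laplacian response per slice.
            sharpness = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, size=9)
                                  for s in stack])
            best = np.argmax(sharpness, axis=0)      # index of the sharpest slice
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]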

  8. An evaluation of multiband photography for rock discrimination. [sedimentary rocks of Front Range, Colorado

    NASA Technical Reports Server (NTRS)

    Lee, K. (Principal Investigator); Raines, G. L.

    1974-01-01

    The author has identified the following significant results. With the advent of the ERTS and Skylab satellites, multiband imagery and photography have become readily available to geologists. The ability of multiband photography to discriminate sedimentary rocks was examined. More than 8600 in situ measurements of band reflectance of the sedimentary rocks of the Front Range, Colorado, were acquired. Statistical analysis of these measurements showed that: (1) measurements from one site can be used at another site 100 miles away; (2) there is basically only one spectral reflectance curve for these rocks, with constant amplitude differences between the curves; and (3) the natural variation is so large that at least 150 measurements per formation are required to select the best filters. These conclusions are supported by subjective tests with aerial multiband photography. The multiband photography concept as designed for rock discrimination is not a practical method of improving sedimentary rock discrimination capabilities.

  9. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.

  10. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    System overview: the Spartan Infrared Camera is a high spatial resolution near-IR imager at SOAR. Spartan has a focal plane consisting of four …

  11. The Art of Astrophotography

    NASA Astrophysics Data System (ADS)

    Morison, Ian

    2017-02-01

    1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.

  12. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method which aims to stitch images with overlapping areas more seamlessly is proposed. Because the traditional gradient-domain optimal seam method measures color differences poorly and fusion algorithms are time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are applied separately. The proposed method eliminates the stitching seam more effectively than the traditional gradient-domain optimal seam and is more efficient than the multi-band blending algorithm.
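
    The paper's energy function is not given in the abstract, but the optimal-seam search itself is typically a dynamic program over a per-pixel energy map of the overlap region (here taken as given, e.g. derived from HSV differences of the two images). A minimal sketch of that search:

        import numpy as np

        def optimal_seam(energy):
            """energy: (H, W) cost map over the overlap -> column index per row."""
            h, w = energy.shape
            cost = energy.astype(float)              # cumulative minimum cost
            for y in range(1, h):
                left = np.r_[np.inf, cost[y - 1, :-1]]
                up = cost[y - 1]
                right = np.r_[cost[y - 1, 1:], np.inf]
                cost[y] += np.minimum(np.minimum(left, up), right)
            seam = [int(np.argmin(cost[-1]))]        # backtrack from the bottom row
            for y in range(h - 2, -1, -1):
                x = seam[-1]
                lo, hi = max(0, x - 1), min(w, x + 2)
                seam.append(lo + int(np.argmin(cost[y, lo:hi])))
            return seam[::-1]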

  13. VizieR Online Data Catalog: Merging galaxies with tidal tails in COSMOS to z=1 (Wen+, 2016)

    NASA Astrophysics Data System (ADS)

    Wen, Z. Z.; Zheng, X. Z.

    2017-02-01

    Our study utilizes the public data and catalogs from multi-band deep surveys of the COSMOS field. The UltraVISTA survey (McCracken+ 2012, J/A+A/544/A156) provides ultra-deep near-IR imaging observations of this field in the Y,J,H, and Ks-band, as well as a narrow band (NB118). The HST/ACS I-band imaging data are publicly available, allowing us to measure morphologies in the rest-frame optical for galaxies at z<=1. The HST/ACS I-band images reach a 5σ depth of 27.2 magnitude for point sources. (1 data file).

  14. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

    In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle iris images are not captured at the same time. Comparison of frontal and off-angle iris images shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of the gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. Therefore, in this work we developed an iris image acquisition platform using two cameras, where one camera captures a frontal iris image and the other captures iris images from off-angle. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup, by between 0.001 and 0.05. These results show that, to obtain accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from each other.
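
    The comparison metric above is the fractional Hamming distance commonly used for binary iris codes; a minimal sketch with occlusion masks follows (the interface is illustrative, not the authors' implementation).

        import numpy as np

        def hamming_distance(code_a, code_b, mask_a, mask_b):
            """Boolean arrays of equal shape; masks flag usable (non-occluded) bits."""
            valid = mask_a & mask_b
            n = int(valid.sum())
            if n == 0:
                return 1.0                     # no comparable bits: maximally distant
            return float(np.count_nonzero((code_a ^ code_b) & valid)) / n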

  15. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras exceeding 5 μm, even though they were reported as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors of corresponding cameras in both blocks show the same trend, but, as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, bundle block adjustment with a satisfactory set of additional parameters, checked against remaining systematic errors, is required to exploit the whole geometric potential of the penta camera. Especially for object points on facades, often seen in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block, even without the influence of the correlation with the exterior orientation.

  16. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    NASA Astrophysics Data System (ADS)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the impact of seawater backscatter. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform image scanning without the difficulty of translating the receiver above the target, as traditional LLS imaging systems must. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling-shutter operation of the CMOS camera. The fan-beam lasers were turned on and off to illuminate narrow zones on the target in good correspondence with the exposure lines during the rolling of the camera's electronic shutter. Frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and non-scanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.

  17. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the detection possibilities for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of THz images may improve image quality many times over without any additional engineering effort; developing modern computer codes for application to THz images is therefore an urgent problem. Using appropriate new methods, one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any real contact. Modern algorithms for the computer processing of THz images also make it possible to see objects inside the human body using a temperature trace on the human skin. This circumstance substantially enhances the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the possibilities, achieved at the present time, for detecting both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for the computer processing of the THz images considered in this paper were developed by the Russian members of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  18. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 megapixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count: even moderate-cost (~$120) DSCs would have 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse their course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  19. Light field rendering with omni-directional camera

    NASA Astrophysics Data System (ADS)

    Todoroki, Hiroshi; Saito, Hideo

    2003-06-01

    This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture wide circumferences. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that luminosity in the environment can be captured over 360 degrees of circumference in one image. We apply the light field method, one technique of image-based rendering (IBR), to generate the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that many view-direction images can be collected in the light field. Thus our method allows the user to explore a wide scene and achieves a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera, and successfully generated arbitrary viewpoint images for a virtual tour of the environment.

  20. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions such as traffic lights or traffic signs as early as possible. If an on-vehicle camera captures images of the important objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can understand traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long; but when the focal length is long, an on-vehicle camera does not have a field of view wide enough to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must have a long focal length and a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which disturbs driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, so as not to disturb driving. The proposed system is composed of a gaze detector and an active telephoto camera with a controllable shooting direction. We adopt a non-wearable gaze detection method to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system takes images in the direction of the subject's gaze.

  1. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application in our augmented reality visualization system for laparoscopic surgery.
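
    The OpenCV baseline referred to above is the conventional multi-image calibration; a minimal sketch using the standard cv2 API follows. The 9x6 chessboard pattern, file names, and roughly 30 views are illustrative assumptions.

        import cv2
        import numpy as np

        pattern = (9, 6)                                  # inner corners per row, column
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_pts, img_pts, img_size = [], [], None
        for fname in ["calib_%02d.png" % i for i in range(30)]:
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            if gray is None:
                continue
            img_size = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, img_size, None, None)
        print("RMS reprojection error:", rms, "\nIntrinsics:\n", K)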

  2. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve fitting approach; both model-based methods show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which eases finding stereo correspondences. In contrast to monocular visual odometry approaches, the scale of the scene can be observed thanks to the calibration of the individual depth maps. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
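
    The "Kalman-like" update described above amounts, in its simplest form, to inverse-variance fusion of virtual-depth hypotheses for the same pixel; a minimal sketch under that reading (names and values are illustrative):

        def fuse_depth(mu_a, var_a, mu_b, var_b):
            """Fuse two (mean, variance) virtual-depth hypotheses of one pixel."""
            w_a, w_b = 1.0 / var_a, 1.0 / var_b
            var = 1.0 / (w_a + w_b)                 # fused variance shrinks
            mu = (w_a * mu_a + w_b * mu_b) * var    # inverse-variance weighted mean
            return mu, var

        # Sequential update with estimates from further micro-images:
        mu, var = 2.1, 0.30
        for obs_mu, obs_var in [(2.3, 0.25), (1.9, 0.60)]:
            mu, var = fuse_depth(mu, var, obs_mu, obs_var)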

  3. High-Resolution Mars Camera Test Image of Moon (Infrared)

    NASA Image and Video Library

    2005-09-13

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The image was taken by the High Resolution Imaging Science Experiment camera on Sept. 8, 2005.

  4. Design of a MATLAB® Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera. To digitally capture images of the distortion in an optical sample, the IQeye 720 camera was used together with the Ann Arbor distortion tester, and the captured images were imported into MATLAB through a computer interface for capturing the images seen by the camera.

  5. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    The project concerns target detection, tracking, and identification over a large terrain. Its goal is to investigate and evaluate the existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection, tracking, and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.

  6. Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken

    2005-01-01

    This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat-field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR); stereo processing of left/right pairs provides, in addition to the raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.

  7. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  8. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements with a single high-speed camera, without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
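
    The crosstalk correction is described only as simple and effective; one plausible reading is per-pixel linear unmixing of the recorded red and blue channels with a calibrated 2x2 mixing matrix. A sketch under that assumption (the matrix values are illustrative, not the paper's calibration):

        import numpy as np

        def unmix(red, blue, crosstalk=np.array([[1.00, 0.12],
                                                 [0.08, 1.00]])):
            """red, blue: (H, W) recorded channels -> estimated per-path images."""
            mixed = np.stack([red, blue], axis=-1).astype(float)   # (H, W, 2)
            inv = np.linalg.inv(crosstalk)                         # recorded = M @ pure
            pure = mixed @ inv.T                                   # so pure = M^-1 @ recorded
            return pure[..., 0], pure[..., 1]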

  9. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard Anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.
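
    A minimal sketch of the Bland-Altman limits of agreement used above to compare the two cameras; the paired per-patient arrays are placeholders.

        import numpy as np

        def bland_altman(a, b):
            """Mean bias and 95% limits of agreement between paired methods."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)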

  10. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
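
    Step (1), image refocusing, is commonly implemented for light field data by shift-and-add over sub-aperture views; a minimal sketch under that assumption follows (generic light field refocusing, not Raytrix's proprietary pipeline).

        import numpy as np

        def refocus(views, alpha):
            """views: dict {(u, v): (H, W) sub-aperture image}, u/v centered at 0.
            alpha: refocusing parameter, one value per synthetic focal depth."""
            acc, n = None, 0
            for (u, v), img in views.items():
                shifted = np.roll(np.roll(img.astype(float),
                                          int(round(alpha * u)), axis=0),
                                  int(round(alpha * v)), axis=1)
                acc = shifted if acc is None else acc + shifted
                n += 1
            return acc / n

        # A focal stack of refocused 2D images at (up to 100) different depths:
        # stack = [refocus(views, a) for a in np.linspace(-3, 3, 100)]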

  11. Probing the End of the IMF in NGC 2024 with NIRCam on JWST: Assessing the Impact of Nebular Emission in Galactic Star Forming Regions

    NASA Astrophysics Data System (ADS)

    Suri, Veenu; Meyer, Michael; Greenbaum, Alexandra Z.; Bell, Cameron; Beichman, Charles; Gordon, Karl D.; Greene, Thomas P.; Hodapp, K.; Horner, Scott; Johnstone, Doug; Leisenring, Jarron; Manara, Carlos; Mann, Rita; Misselt, K.; Raileanu, Roberta; Rieke, Marcia; Roellig, Thomas

    2018-01-01

    We describe observations of the embedded young cluster associated with the HII region NGC 2024 planned as part of the guaranteed time observing program for the James Webb Space Telescope with the NIRCam (Near Infrared Camera) instrument. Our goal is to obtain a census of the cluster down to 2 Jupiter masses, viewed through 10-20 magnitudes of extinction, using multi-band filter photometry, both broadband filters and intermediate band filters that are expected to be sensitive to temperature and surface gravity. The cluster contains several bright point sources as well as extended emission due to reflected light, thermal emission from warm dust, as well as nebular line emission. We first developed techniques to better understand which point sources would saturate in our target fields when viewed through several JWST NIRCam filters. Using images of the field with the WISE satellite in filters W1 and W2, as well as 2MASS (J and H) bands, we devised an algorithm that takes the K-band magnitudes of point sources in the field, and the known saturation limits of several NIRCam filters to estimate the impact of the extended emission on survey sensitivity. We provide an overview of our anticipated results, detecting the low mass end of the IMF as well as planetary mass objects likely liberated through dynamical interactions.

  12. SIRTF Tools for DIRT

    NASA Astrophysics Data System (ADS)

    Pound, M. W.; Wolfire, M. G.; Amarnath, N. S.

    2004-07-01

    The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS {http://dustem.astro.umd.edu}) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 5 years. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. We are adding new functionality to DIRT to support new missions like SIRTF and SOFIA. A new Instrument module allows for plotting of the model points convolved with the spatial and spectral responses of the selected instrument. This lets users better fit data from specific instruments. Currently, we have implemented modules for the Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) on SIRTF. The models are based on the dust radiation transfer code of Wolfire & Cassinelli (1986) which accounts for multiple grain sizes and compositions. The model outputs are averaged over the instrument bands using the same weighting (νFν = constant) as the SIRTF data pipeline which allows the SIRTF data products to be compared directly with the model database. This work was supported in part by a NASA AISRP grant NAG 5-10751 and the SIRTF Legacy Science Program provided by NASA through an award issued by JPL under NASA contract 1407.
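
    One common statement of the νFν = constant convention mentioned above is that the quoted flux density at the nominal band frequency ν0 is the response-weighted integral of the model spectrum, normalized by the same integral for a source with Fν ∝ 1/ν. A minimal numerical sketch under that reading (the exact pipeline definition may differ in detail):

        import numpy as np

        def band_average(nu, f_nu, response, nu0):
            """nu: frequency grid; f_nu: model spectrum on that grid;
            response: relative band response; nu0: nominal band frequency."""
            numerator = np.trapz(f_nu * response, nu)
            denominator = np.trapz((nu0 / nu) * response, nu)
            return numerator / denominator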

  13. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  14. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often, no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. The two point clouds are tied to each other using images from both image sets that show the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate by between one and ten centimeters from tachymeter reference measurements.

  15. The effect of microchannel plate gain depression on PAPA photon counting cameras

    NASA Astrophysics Data System (ADS)

    Sams, Bruce J., III

    1991-03-01

    PAPA (precision analog photon address) cameras are photon counting imagers which employ microchannel plates (MCPs) for image intensification. They have been used extensively in astronomical speckle imaging. The PAPA camera can produce artifacts when light incident on its MCP is highly concentrated. The effect is exacerbated by setting the strobe detection level too low, so that the camera accepts very small MCP pulses. The artifacts can occur even at low total count rates if the image has a highly concentrated bright spot. This paper describes how to optimize PAPA camera electronics and presents six techniques which can avoid or minimize addressing errors.

  16. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be used prior to an experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays based on the established light field camera model are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of the depth of field.

  17. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  20. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  1. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing the overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and the right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from the multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ spread of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
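
    A minimal sketch of the consistency metric, assuming two registered images of the uniform integrating-sphere scene; the random arrays below merely stand in for captured left/right images.

```python
import numpy as np

# Standard deviation of the per-pixel grayscale difference D(x, y) between
# registered left and right images of the same uniform scene.
rng = np.random.default_rng(2)
left = rng.normal(128, 2, (480, 640))
right = left + rng.normal(0.5, 1, (480, 640))   # simulated right-channel offset

D = left - right                                 # grayscale difference image
sigma = D.std()
# Example acceptance rule in the spirit of the abstract: the 3-sigma spread
# of D, relative to full scale, stays below 5 %.
print("3-sigma spread:", 3 * sigma / 255 * 100, "% of full scale")
```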

  2. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  3. Imaging of breast cancer with mid- and long-wave infrared camera.

    PubMed

    Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R

    2008-01-01

    In this novel study, the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - microbolometer (MB), quantum well (QWIP) and photovoltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed; the data for frequency analysis were collected from dynamic IR images by pixel-based analysis, and selectively windowed regional analysis was carried out on each image, based on the angiogenesis and nitric oxide production of cancer tissue, which cause vasomotor and cardiogenic frequency differences compared to normal tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue, the PV camera more clearly. With selected image processing operations, more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory.

  4. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with a 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with a 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  5. Webcam network and image database for studies of phenological changes of vegetation and snow cover in Finland, image time series from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Linkosalmi, Maiju; Melih Tanis, Cemal; Tuovinen, Juha-Pekka; Nadir Arslan, Ali

    2018-01-01

    In recent years, monitoring the status of ecosystems using low-cost web (IP) or time-lapse cameras has received wide interest. With broad spatial coverage and high temporal resolution, networked cameras can provide information about snow cover and vegetation status, serve as ground truth for Earth observations, and be useful for gap-filling of cloudy areas in Earth observation time series. Networked cameras can also play an important role in supplementing laborious phenological field surveys and citizen science projects, which also suffer from observer-dependent observation bias. We established a network of digital surveillance cameras for automated monitoring of the phenological activity of vegetation and snow cover in the boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. Here, we document the network, basic camera information and access to images in the permanent data repository (http://www.zenodo.org/communities/phenology_camera/). Individual DOI-referenced image time series consist of half-hourly images collected between 2014 and 2016 (https://doi.org/10.5281/zenodo.1066862). Additionally, we present an example of a colour index time series derived from images from two contrasting sites.
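
    As a hedged illustration of how such a colour index can be computed, the sketch below derives a green chromatic coordinate (GCC) time series, a common index in phenology camera studies; the exact index used by the network is not restated here, and the random array stands in for real half-hourly frames cropped to a region of interest.

```python
import numpy as np

# Green chromatic coordinate (GCC) time series from a stack of RGB frames.
rng = np.random.default_rng(3)
images = rng.uniform(0, 255, (48, 100, 100, 3))   # (time, height, width, RGB)

r, g, b = (images[..., i].mean(axis=(1, 2)) for i in range(3))
gcc = g / (r + g + b)                              # one index value per image
print(gcc.shape, gcc.mean())
```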

  6. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Secondly, several pivotal designs of the imaging system are introduced, such as the design of the focal plane module, the video signal processing, the controller design of the imaging system, and the synchronous photography of the forward, nadir and backward cameras and of the line-matrix CCD of the nadir camera. Finally, the test results of the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography among the forward, nadir and backward cameras is better than 4 ns, and that of the line-matrix CCD of the nadir camera is better than 4 ns as well; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR of each CCD image tested in the laboratory is better than 95 under typical working conditions (solar incidence angle of 30°, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is kept below 30 °C over a working period of 15 minutes. All of these results satisfy the requirements on synchronous photography, focal plane module temperature control and SNR, guaranteeing the precision needed for satellite photogrammetry.

  7. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS: Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  8. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant of a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the view field of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the resolution of the traffic sign in the wide angle camera image is too low. In the proposed system, traffic sign detection and classification are processed separately on the different images from the wide angle camera and the telephoto camera. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a type of color transformation which is invariant to lighting changes; an illustrative sketch of such a transform is given below. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
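
    The paper's specific transformation is not reproduced here; as a stand-in, the sketch below applies normalized chromaticity, a standard transform that removes overall illumination intensity, to score red sign-like pixels. The frame and threshold are illustrative assumptions.

```python
import numpy as np

# Normalized chromaticity as a lighting-invariant stand-in transform.
rng = np.random.default_rng(4)
frame = rng.uniform(1, 255, (240, 320, 3))          # stand-in wide-angle RGB frame

s = frame.sum(axis=2, keepdims=True)
chroma = frame / s                                   # intensity-normalized RGB
red_score = chroma[..., 0] - np.maximum(chroma[..., 1], chroma[..., 2])
candidates = red_score > 0.1                         # illustrative threshold
print(candidates.sum(), "candidate sign pixels")
```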

  9. Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function.

    PubMed

    Li, Jin; Liu, Zilong

    2017-07-24

    Remote sensing cameras in the visible/near infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality here, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, i.e. its optical system, image sensor, and electronic system, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground, since it depends only on the camera itself, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, is then compensated for, which amounts to removing the imaging degradation imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case when the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
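
    A minimal sketch of the compensation step, assuming the camera's MTF has already been measured: a constrained least-squares filter with a Laplacian smoothness constraint restores a toy image blurred by a toy Gaussian MTF. All values are illustrative, not the paper's.

```python
import numpy as np

# Constrained least-squares (CLS) restoration in the frequency domain:
# F_hat = conj(H) G / (|H|^2 + gamma |P|^2), P = Laplacian high-pass.
rng = np.random.default_rng(5)
n = 256
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)
H = np.exp(-(fx[None, :] ** 2 + fy[:, None] ** 2) / (2 * 0.05 ** 2))  # toy MTF

sharp = rng.uniform(0, 1, (n, n))
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))

lap = np.zeros((n, n))
lap[0, 0] = 4
lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1   # circular Laplacian kernel
P = np.fft.fft2(lap)
gamma = 1e-3                                            # noise/sharpness tradeoff
restored = np.real(np.fft.ifft2(
    np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)))
print(float(np.abs(restored - sharp).mean()))
```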

  10. Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.

    PubMed

    Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael

    2017-12-01

    Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras at 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) merging the two pictures together, we formed images closer to full hemispheres and estimated CO and Site Factors from them. We compared the same parameters obtained from the different cameras and estimated generalized linear mixed models (GLMMs) between them. Total canopy gap estimated with the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from merged smartphone pictures with the second processing protocol were on average significantly higher than those from traditional camera images, although with relatively small absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be used directly as they are for ecological studies, or converted with specific models for a better comparison to traditional cameras.
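
    A minimal sketch of the thresholding/gap-fraction step, assuming a grayscale hemispherical image; a simple global threshold stands in for the automatic thresholding the study used, and the random image is a placeholder.

```python
import numpy as np

# Classify each pixel as sky or canopy and report total canopy gap.
rng = np.random.default_rng(6)
img = rng.uniform(0, 1, (512, 512))     # stand-in for a hemispherical photograph

threshold = img.mean()                  # simple global threshold (illustrative;
                                        # the study thresholded automatically)
sky = img > threshold
total_canopy_gap = sky.mean() * 100     # percentage of sky pixels
print(f"total canopy gap: {total_canopy_gap:.1f} %")
```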

  11. Multi-Wavelength Views of Protostars in IC 1396

    NASA Technical Reports Server (NTRS)

    2003-01-01

    NASA's Spitzer Space Telescope has captured a glowing stellar nursery within a dark globule that is opaque to visible light. These new images pierce through the obscuration to reveal the birth of new protostars, or embryonic stars, and young stars never before seen.

    The Elephant's Trunk Nebula is an elongated dark globule within the emission nebula IC 1396 in the constellation of Cepheus. Located at a distance of 2,450 light-years, the globule is a condensation of dense gas that is barely surviving the strong ionizing radiation from a nearby massive star. The globule is being compressed by the surrounding ionized gas.

    The large composite image is a product of combining data from the observatory's multiband imaging photometer and the infrared array camera. The thermal emission at 24 microns measured by the photometer (red) is combined with near-infrared emission from the camera at 3.6/4.5 microns (blue) and from 5.8/8.0 microns (green). The colors of the diffuse emission and filaments vary, and are a combination of molecular hydrogen (which tends to be green) and polycyclic aromatic hydrocarbon (brown) emissions.

    Within the globule, a half dozen newly discovered protostars, or embryonic stars, are easily discernible as the bright red-tinted objects, mostly along the southern rim of the globule. These were previously undetected at visible wavelengths due to obscuration by the thick cloud ('globule body') and by dust surrounding the newly forming stars. The newborn stars form in the dense gas because of compression by the wind and radiation from a nearby massive star (located outside the field of view to the left). The winds from this unseen star are also responsible for producing the spectacular filamentary appearance of the globule itself.

    The Spitzer Space Telescope also sees many newly discovered young stars, often enshrouded in dust, which may be starting the nuclear fusion that defines a star. These young stars are too cool to be seen at visible wavelengths. Both the protostars and young stars are bright in the mid-infrared because of their surrounding discs of solid material. A few of the visible-light stars in this image were found to have excess infrared emission, suggesting they are more mature stars surrounded by primordial remnants from their formation, or from crumbling asteroids and comets in their planetary systems.

  12. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, in which not all cameras need to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
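
    The matching/warping core of such a pipeline can be sketched in a few lines, assuming OpenCV. ORB stands in for SURF (which requires the non-free OpenCV build), and the file names are hypothetical.

```python
import cv2
import numpy as np

# Feature matching and homography estimation between two overlapping views.
img1 = cv2.imread("cam1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("cam2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC-robust fit

# Warp camera 1 into camera 2's frame; blending of the overlap would follow.
h, w = img2.shape
warped = cv2.warpPerspective(img1, H, (w * 2, h))
```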

  14. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down and more perpendicular to the flow. The third camera is in the next reach upstream from the sediment trap, in closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge when there is one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because the images are already orthorectified. During the monitoring program (since 2011) we recorded three debris flow events at the sediment trap area, each with very different surge dynamics. The camera in the gully was in operation in 2014 and managed to record granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.

  15. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various space shuttle experiments are described. An image intensifier tube was used in combination with two brassboards as a power supply and evaluated for night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  16. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  17. Multi-band transmission color filters for multi-color white LEDs based visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Qixia; Zhu, Zhendong; Gu, Huarong; Chen, Mengzhu; Tan, Qiaofeng

    2017-11-01

    Light-emitting diodes (LEDs) based visible light communication (VLC) can provide license-free bands, high data rates, and high security levels, making it a promising technique that will be extensively applied in the future. Multi-band transmission color filters with sufficient peak transmittance and suitable bandwidth play a pivotal role in boosting the signal-to-noise ratio in VLC systems. In this paper, multi-band transmission color filters with bandwidths of tens of nanometers are designed by a simple analytical method. Experimental results for one-dimensional (1D) and two-dimensional (2D) tri-band color filters demonstrate the effectiveness of the multi-band transmission color filters and the corresponding analytical method.

  18. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  19. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    PubMed

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  20. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  1. Electron-boson spectral density function of correlated multiband systems obtained from optical data: Ba0.6K0.4Fe2As2 and LiFeAs.

    PubMed

    Hwang, Jungseek

    2016-03-31

    We introduce an approximate method which can be used to simulate the optical conductivity data of correlated multiband systems for normal and superconducting cases by taking advantage of a reversed process in comparison to a usual optical data analysis, which has been used to extract the electron-boson spectral density function from measured optical spectra of single-band systems, like cuprates. We applied this method to optical conductivity data of two multiband pnictide systems (Ba0.6K0.4Fe2As2 and LiFeAs) and obtained the electron-boson spectral density functions. The obtained electron-boson spectral density consists of a sharp mode and a broad background. The obtained spectral density functions of the multiband systems show similar properties as those of cuprates in several aspects. We expect that our method helps to reveal the nature of strong correlations in the multiband pnictide superconductors.

  2. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
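
    A minimal sketch of the calibration idea, assuming the equisolid-angle projection r = 2 f sin(θ/2) mentioned above: fisheye parameters are fit by least squares so that modeled sun positions land on detected sun pixels. The synthetic detections and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit fisheye parameters (f, cx, cy) from modeled sun angles and detected
# sun pixel positions; synthetic "detections" stand in for real ones.
rng = np.random.default_rng(7)
theta = rng.uniform(0, np.deg2rad(80), 200)        # sun zenith angles (model)
phi = rng.uniform(0, 2 * np.pi, 200)               # sun azimuth angles (model)

f_true, cx_true, cy_true = 290.0, 320.0, 240.0
r = 2 * f_true * np.sin(theta / 2)                 # equisolid-angle projection
obs = np.stack([cx_true + r * np.cos(phi), cy_true + r * np.sin(phi)], 1)
obs += rng.normal(0, 1.0, obs.shape)               # ~1 px detection noise

def residuals(p):
    f, cx, cy = p
    r = 2 * f * np.sin(theta / 2)
    pred = np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], 1)
    return (pred - obs).ravel()

fit = least_squares(residuals, x0=[250.0, 300.0, 220.0])
print("f, cx, cy =", fit.x)
```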

  3. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...

  4. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.

  5. A low-cost dual-camera imaging system for aerial applicators

    USDA-ARS?s Scientific Manuscript database

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  6. Left Panorama of Spirit's Landing Site

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.

  7. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    A straightforward technique for generating stereoscopic television images involves the use of a single television camera translated laterally between left- and right-eye positions. The camera acquires one of the images (the left- or right-eye image), and the video signal from that image is delayed while the camera translates to the position where it acquires the other image. The length of the delay is chosen so that both images are displayed simultaneously, or as nearly simultaneously as necessary to obtain the stereoscopic effect. The technique is amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. It can also be used to generate and display imagery for public education and general information, and possibly for medical purposes.

  8. WE-G-18C-07: Accelerated Water/fat Separation in MRI for Radiotherapy Planning Using Multi-Band Imaging Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, S; Stemkens, B; Sbrizzi, A

    Purpose: Dixon sequences are used to characterize disease processes, obtain good fat or water separation in cases where fat suppression fails, and obtain pseudo-CT datasets. Dixon's method uses at least two images acquired with different echo times and thus requires prolonged acquisition times. To overcome the associated problems (e.g., for DCE/cine-MRI), we propose a method for water/fat separation based on spectrally selective RF pulses. Methods: Two alternating RF pulses were used, imposing a fat-selective phase cycling over the phase encoding lines, which results in a spatial shift for fat in the reconstructed image, identical to that in CAIPIRINHA. Associated aliasing artefacts were resolved using the encoding power of a multi-element receiver array, analogous to SENSE. In vivo measurements were performed on a 1.5T clinical MR-scanner in a healthy volunteer's legs, using a four channel receiver coil. Gradient echo images were acquired with TE/TR = 2.3/4.7 ms, flip angle 20°, FOV 45×22.5 cm², matrix 480×216, slice thickness 5 mm. Dixon images were acquired with TE,1/TE,2/TR = 2.2/4.6/7 ms. All image reconstructions were done in Matlab using the ReconFrame toolbox (Gyrotools, Zurich, CH). Results: RF pulse alternation yields a fat image offset from the water image. Hence the water and fat images fold over, which is resolved using an in-plane SENSE reconstruction. Using the proposed technique, we achieved excellent water/fat separation comparable to Dixon images, while acquiring images at only one echo time. Conclusion: The proposed technique yields both in-phase water and fat images at arbitrary echo times and requires only one measurement, thereby shortening the acquisition time by a factor of 2. In future work the technique may be extended to a multi-band water/fat separation sequence that is able to achieve single point water/fat separation in multiple slices at once and hence yields higher speed-up factors.

  9. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its images consist of 17310 columns and 11310 rows with a pixel size of 6 µm. The UCXp camera has many advantages compared with cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lenses. The camera has a complex imaging process whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on the data of new Beichuan County, this paper describes the data processing and its effects.

  10. Multiband Fourier Analysis and Interstellar Reddening of the Variable Stars in the Globular Cluster NGC 6402 (M14)

    NASA Astrophysics Data System (ADS)

    Weinschenk, Sedrick; Murphy, Brian; Villiger, Nathan J.

    2018-01-01

    We present a detailed study of the variable stars in the globular cluster NGC 6402 (M14). Approximately 1500 B and V band images were collected from July 2016 to August 2017 using the SARA Consortium Jacobus Kapteyn 1-meter telescope located in the Canary Islands. Using difference image analysis, we were able to identify 145 probable variable stars, confirming the 133 previously known variables and adding 12 new variables. The variables consist of 117 RR Lyrae stars, 18 long period variables, 2 eclipsing variables, 6 Cepheid variables, and 2 SX Phoenicis variables. Of the RR Lyrae variables, 55 are fundamental-mode RR0 stars, of which 18 exhibit the Blazhko effect; 57 are first-overtone RR1 stars, of which 7 appear to exhibit the Blazhko effect; 1 is a second-overtone RR2 star; and 2 are double-mode variables. We find an average period of 0.59016 days for the RR0 stars and 0.30294 days for the RR1 stars. Using the multiband light curves of both the RR0 and RR1 variables, we find an average E(B-V) of 0.604 with a scatter of 0.15 magnitudes. Using Fourier decomposition of the RR Lyrae light curves, we also determine the metallicity and distance of NGC 6402.

  11. Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2012-01-01

    This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model and a single-scattering dominated atmosphere model to surface reflectance data from the Multiangle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales: the surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to the surface properties. The software also takes advantage of an intelligent initial condition given by the solution of neighboring pixels.
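
    A hedged sketch of the retrieval step: Levenberg-Marquardt fitting of a model parameter to multi-angle observations. A one-parameter Henyey-Greenstein phase function stands in for the full surface-plus-atmosphere forward model, which is not reproduced here; the view angles are the nominal MISR camera angles.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit the Henyey-Greenstein asymmetry parameter g to noisy multi-angle data.
rng = np.random.default_rng(8)
angles = np.deg2rad(np.array([26.1, 45.6, 60.0, 70.5]))   # nominal MISR view angles

def hg_phase(g, theta):
    # Henyey-Greenstein phase function (unnormalized, sufficient for a sketch)
    return (1 - g**2) / (1 + g**2 - 2 * g * np.cos(theta)) ** 1.5

g_true = 0.6
observed = hg_phase(g_true, angles) + rng.normal(0, 0.01, angles.size)

fit = least_squares(lambda p: hg_phase(p[0], angles) - observed,
                    x0=[0.3], method="lm")        # Levenberg-Marquardt
print("retrieved asymmetry parameter g:", fit.x[0])
```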

  12. Dynamic image fusion and general observer preference

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2010-04-01

    Recent developments in image fusion give the user community many options for ways of presenting the imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical to observers in a military environment. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.

  13. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    PubMed

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially-explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a Single Lens Reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurements. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  14. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify the statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
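
    A minimal sketch of the feature construction, assuming a horizontal difference 2-D array: values are clipped to a small threshold T and a transition probability matrix is estimated, whose entries become SVM features. The array and T = 4 are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Estimate a horizontal transition probability matrix P(next = j | cur = i)
# from a thresholded difference 2-D array; the random array is a stand-in.
rng = np.random.default_rng(9)
diff = rng.integers(-20, 21, (64, 64))

T = 4
clipped = np.clip(diff, -T, T) + T                 # values now in 0 .. 2T
cur, nxt = clipped[:, :-1].ravel(), clipped[:, 1:].ravel()

n = 2 * T + 1
counts = np.zeros((n, n))
np.add.at(counts, (cur, nxt), 1)                   # co-occurrence counts
P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
features = P.ravel()                               # (2T+1)^2 features for an SVM
print(features.shape)
```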

  15. Distributing coil elements in three dimensions enhances parallel transmission multiband RF performance: A simulation study in the human brain at 7 Tesla.

    PubMed

    Wu, Xiaoping; Tian, Jinfeng; Schmitter, Sebastian; Vaughan, J Tommy; Uğurbil, Kâmil; Van de Moortele, Pierre-François

    2016-06-01

    We explore the advantages of using a double-ring radiofrequency (RF) array and slice orientation to design parallel transmission (pTx) multiband (MB) pulses for simultaneous multislice (SMS) imaging with whole-brain coverage at 7 Tesla (T). A double-ring head array with 16 elements split evenly in two rings stacked in the z-direction was modeled and compared with two single-ring arrays consisting of 8 or 16 elements. The array performance was evaluated by designing band-specific pTx MB pulses with local specific absorption rate (SAR) control. The impact of slice orientations was also investigated. The double-ring array consistently and significantly outperformed the other two single-ring arrays, with peak local SAR reduced by up to 40% at a fixed excitation error of 0.024. For all three arrays, exciting sagittal or coronal slices yielded better RF performance than exciting axial or oblique slices. A double-ring RF array can be used to drastically improve SAR versus excitation fidelity tradeoff for pTx MB pulse design for brain imaging at 7 T; therefore, it is preferable against single-ring RF array designs when pursuing various biomedical applications of pTx SMS imaging. In comparing the stripline arrays, coronal and sagittal slices are more advantageous than axial and oblique slices for pTx MB pulses. Magn Reson Med 75:2464-2472, 2016. © 2016 Wiley Periodicals, Inc.

  16. RARE/Turbo Spin Echo Imaging with Simultaneous MultiSlice Wave-CAIPI

    PubMed Central

    Eichner, Cornelius; Bhat, Himanshu; Grant, P. Ellen; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To enable highly accelerated RARE/Turbo Spin Echo (TSE) imaging using Simultaneous MultiSlice (SMS) Wave-CAIPI acquisition with reduced g-factor penalty. Methods SMS Wave-CAIPI incurs slice shifts across simultaneously excited slices while playing sinusoidal gradient waveforms during the readout of each encoding line. This results in an efficient k-space coverage that spreads aliasing in all three dimensions to fully harness the encoding power of coil sensitivities. The novel MultiPINS radiofrequency (RF) pulses dramatically reduce the power deposition of multiband (MB) refocusing pulse, thus allowing high MB factors within the Specific Absorption Rate (SAR) limit. Results Wave-CAIPI acquisition with MultiPINS permits whole brain coverage with 1 mm isotropic resolution in 70 seconds at effective MB factor 13, with maximum and average g-factor penalties of gmax=1.34 and gavg=1.12, and without √R penalty. With blipped-CAIPI, the g-factor performance was degraded to gmax=3.24 and gavg=1.42; a 2.4-fold increase in gmax relative to Wave-CAIPI. At this MB factor, the SAR of the MultiBand and PINS pulses are 4.2 and 1.9 times that of the MultiPINS pulse, while the peak RF power are 19.4 and 3.9 times higher. Conclusion Combination of the two technologies, Wave-CAIPI and MultiPINS pulse, enables highly accelerated RARE/TSE imaging with low SNR penalty at reduced SAR. PMID:25640187

  17. Use of pattern recognition for unaliasing simultaneously acquired slices in simultaneous multislice MR fingerprinting.

    PubMed

    Jiang, Yun; Ma, Dan; Bhat, Himanshu; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L; Setsompop, Kawin; Griswold, Mark A

    2017-11-01

    The purpose of this study is to accelerate an MR fingerprinting (MRF) acquisition by using a simultaneous multislice method. A multiband radiofrequency (RF) pulse was designed to excite two slices with different flip angles and phases. The signals of the two slices were driven to be as orthogonal as possible. The mixed and undersampled MRF signal was matched to two dictionaries to retrieve T1 and T2 maps of each slice. Quantitative results from the proposed method were validated against the gold-standard spin echo methods in a phantom. T1 and T2 maps of in vivo human brain from two simultaneously acquired slices were also compared to the results of the fast imaging with steady-state precession based MRF method (MRF-FISP) with a single-band RF excitation. The phantom results showed that the simultaneous multislice MRF-FISP method quantified the relaxation properties accurately compared to the gold-standard spin echo methods. T1 and T2 values of in vivo brain from the proposed method also matched the results from the normal MRF-FISP acquisition. T1 and T2 values can be quantified at a multiband acceleration factor of two using our proposed acquisition, even with a single-channel receive coil. Further acceleration could be achieved by combining this method with parallel imaging or iterative reconstruction. Magn Reson Med 78:1870-1876, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
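
    The dictionary-matching step can be sketched as a normalized inner-product search; in the simultaneous multislice case this is done against two slice-specific dictionaries, while the sketch below matches against one. The random dictionary and signal are placeholders for simulated atoms and a measured evolution.

```python
import numpy as np

# Match an acquired signal evolution against every dictionary atom by
# normalized inner product; the best match indexes a (T1, T2) pair.
rng = np.random.default_rng(10)
n_atoms, n_frames = 5000, 1000
dictionary = (rng.normal(size=(n_atoms, n_frames))
              + 1j * rng.normal(size=(n_atoms, n_frames)))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

signal = dictionary[1234] + 0.05 * rng.normal(size=n_frames)  # noisy "acquisition"
signal /= np.linalg.norm(signal)

scores = np.abs(dictionary.conj() @ signal)       # match against every atom
best = int(np.argmax(scores))
print("best-matching atom:", best)
```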

  18. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie-point data is required.

  19. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  20. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  1. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
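
    A minimal sketch of how the raw sensor image maps to the 4D light field, assuming a square lenslet grid with a fixed pixel patch per lenslet (both illustrative): lenslet indices give the 2D spatial coordinates and the pixel position under each lenslet gives the 2D angular coordinates.

```python
import numpy as np

# Decode a raw plenoptic sensor image into the 4-D light field L(s, t, u, v):
# (s, t) index lenslets (spatial), (u, v) index pixels under each lenslet
# (angular). A 40x40 lenslet grid with 16x16 pixel patches is assumed.
rng = np.random.default_rng(11)
n_lens, n_pix = 40, 16
raw = rng.uniform(0, 1, (n_lens * n_pix, n_lens * n_pix))  # stand-in sensor image

L = (raw.reshape(n_lens, n_pix, n_lens, n_pix)
        .transpose(0, 2, 1, 3))            # axes reordered to (s, t, u, v)
sub_aperture = L[:, :, 8, 8]               # one view: fixed pupil position (u, v)
print(L.shape, sub_aperture.shape)
```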

  2. Spitzer Reveals Stellar 'Family Tree'

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Generations of stars can be seen in this new infrared portrait from NASA's Spitzer Space Telescope. In this wispy star-forming region, called W5, the oldest stars can be seen as blue dots in the centers of the two hollow cavities (other blue dots are background and foreground stars not associated with the region). Younger stars line the rims of the cavities, and some can be seen as pink dots at the tips of the elephant-trunk-like pillars. The white knotty areas are where the youngest stars are forming. Red shows heated dust that pervades the region's cavities, while green highlights dense clouds.

    W5 spans an area of sky equivalent to four full moons and is about 6,500 light-years away in the constellation Cassiopeia. The Spitzer picture was taken over a period of 24 hours.

    Like other massive star-forming regions, such as Orion and Carina, W5 contains large cavities that were carved out by radiation and winds from the region's most massive stars. According to the theory of triggered star-formation, the carving out of these cavities pushes gas together, causing it to ignite into successive generations of new stars.

    This image contains some of the best evidence yet for the triggered star-formation theory. Scientists analyzing the photo have been able to show that the ages of the stars become progressively and systematically younger with distance from the center of the cavities.

    This is a three-color composite showing infrared observations from two Spitzer instruments. Blue represents 3.6-micron light and green shows light of 8 microns, both captured by Spitzer's infrared array camera. Red is 24-micron light detected by Spitzer's multiband imaging photometer.

  3. Analysis of Multi-band Photometry of Violently Variable Gamma-Ray Sources

    NASA Astrophysics Data System (ADS)

    Kadowaki, Jennifer; Malkan, M. A.

    2013-01-01

    We studied the relationship between rapid variations in jet intensity and changes in accretion disk activity for the blazar subtype known as Flat Spectrum Radio Quasars (FSRQs). Fifteen known FSRQs were chosen for their prominent big blue bumps and redshifts near z=1, so that the rest-frame UV is redshifted into the blue bandpass. Flux changes for these 15 FSRQs were monitored over a 12-month period for 15 observational nights in the BVRI bands and 20 nights in the JHK bands, using NASA's Fermi Gamma-ray Space Telescope, Lick Observatory's Nickel Telescope, and Kitt Peak National Observatory's 2.1 m Telescope. With a 6.3' x 6.3' field of view for the Nickel Direct Imaging Camera and 20' x 20' for the Flamingos IR Imaging Spectrometer, approximately half a dozen bright, non-variable stars were available in each field for comparison against concurrent changes in each quasar's brightness. This differential photometry yielded photometric measurements of quasar brightness with 1-2% precision. Light curves were then created for the 15 monitored quasars in the optical, infrared, and gamma-ray energy bands. Jet activity, which dominates the redder part of the emission spectrum through non-thermal synchrotron radiation and Compton scattering off high-energy electrons, was compared with bluer spectral regions that have a strong accretion disk component at a rest frame of approximately 2000 Angstroms. Most of the targeted FSRQs varied significantly over the 12-month monitoring period, with differing levels of fluctuation at each observed wavelength. Some correlations between gamma-ray and optical wavelengths were also present, which are discussed further in the poster.
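
    The differential photometry step lends itself to a compact sketch: normalize an ensemble of non-variable comparison stars and divide it out of the target's instrumental flux, removing the transparency and airmass variations common to the whole field. The ensemble-averaging details below are illustrative, not the exact reduction used in the poster.

    ```python
    import numpy as np

    def differential_magnitudes(target_flux, comp_fluxes):
        """target_flux: (n_epochs,) instrumental fluxes of the quasar.
        comp_fluxes: (n_epochs, n_stars) fluxes of non-variable comparison stars."""
        # normalize each comparison star by its median, then average the ensemble
        ensemble = np.mean(comp_fluxes / np.median(comp_fluxes, axis=0), axis=1)
        rel = target_flux / ensemble                   # divides out transparency/airmass changes
        return -2.5 * np.log10(rel / np.median(rel))   # differential magnitude light curve
    ```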

  4. First Results from the Lyman Alpha Galaxies in the Epoch of Reionization (LAGER) Survey: Cosmological Reionization at z ˜ 7

    NASA Astrophysics Data System (ADS)

    Zheng, Zhen-Ya; Wang, Junxian; Rhoads, James; Infante, Leopoldo; Malhotra, Sangeeta; Hu, Weida; Walker, Alistair R.; Jiang, Linhua; Jiang, Chunyan; Hibon, Pascale; Gonzalez, Alicia; Kong, Xu; Zheng, XianZhong; Galaz, Gaspar; Barrientos, L. Felipe

    2017-06-01

    We present the first results from the ongoing Lyman Alpha Galaxies in the Epoch of Reionization (LAGER) project, which is the largest narrowband survey for z ~ 7 galaxies to date. Using a specially built narrowband filter NB964 for the superb large-area Dark Energy Camera (DECam) on the NOAO/CTIO 4 m Blanco telescope, LAGER has collected 34 hr of NB964 narrowband imaging data in the 3 deg^2 COSMOS field. We have identified 23 Lyα emitter candidates at z = 6.9 in the central 2 deg^2 region, where DECam and public COSMOS multi-band images exist. The resulting luminosity function (LF) can be described as a Schechter function modified by a significant excess at the bright end (four galaxies with L_Lyα ~ 10^43.4±0.2 erg s^-1). The number density at L_Lyα ~ 10^43.4±0.2 erg s^-1 is little changed from z = 6.6, while at fainter L_Lyα it is substantially reduced. Overall, we see a fourfold reduction in Lyα luminosity density from z = 5.7 to z = 6.9. Combined with a more modest evolution of the continuum UV luminosity density, this suggests a factor of ~3 suppression of Lyα by radiative transfer through the z ~ 7 intergalactic medium (IGM). It indicates an IGM neutral fraction of x_HI ~ 0.4-0.6 (assuming Lyα velocity offsets of 100-200 km s^-1). The changing shape of the Lyα LF between z ≲ 6.6 and z = 6.9 supports the hypothesis of ionized bubbles in a patchy reionization at z ~ 7.

  5. Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.

    2016-12-01

    Unmanned Aerial Systems (UAS) have been widely used as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and can also differentiate object features on the surface.
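
    A minimal sketch of the two-level idea on stacked RGB+DEM features, with pixel coordinates appended so that first-level segments stay spatially coherent like super-pixels. This is not LANL's exact pipeline, and it is only practical for small image tiles, since plain agglomerative clustering scales quadratically; the parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def two_level_clustering(rgb, dem, n_segments=400, n_classes=2, xy_weight=0.5):
        h, w, _ = rgb.shape
        yy, xx = np.mgrid[0:h, 0:w]
        # stacked features: color, elevation, and weighted pixel coordinates
        feats = np.column_stack([rgb.reshape(-1, 3), dem.ravel(),
                                 xy_weight * yy.ravel(),
                                 xy_weight * xx.ravel()]).astype(np.float64)
        # level 1: agglomerative clustering into super-pixel-like segments
        seg = AgglomerativeClustering(n_clusters=n_segments).fit_predict(feats)
        # treat every segment as a single "new pixel" via its mean feature vector
        means = np.vstack([feats[seg == s].mean(axis=0) for s in range(n_segments)])
        # level 2: cluster the segment means into land-cover classes
        cls = AgglomerativeClustering(n_clusters=n_classes).fit_predict(means)
        return cls[seg].reshape(h, w)
    ```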

  6. Solar System Studies with the Space Infrared Telescope Facility (SIRTF)

    NASA Technical Reports Server (NTRS)

    Cruikshank, Dale P.; DeVincenzi, Donald L. (Technical Monitor)

    1998-01-01

    SIRTF (Space Infrared Telescope Facility) is the final element in NASA's 'Great Observatories' program. It consists of an 85-cm cryogenically cooled observatory for infrared astronomy from space. SIRTF is scheduled for launch in late 2001 or early 2002 on a Delta rocket into a heliocentric orbit trailing the Earth. Data from SIRTF will be processed and disseminated to the community through the SIRTF Science Center (SSC) located at the Infrared Processing and Analysis Center (IPAC) at Caltech. Some 80% of the total observing time (estimated at a minimum of 7500 hours of integration time per year for the mission lifetime of about 4 years) will be available to the scientific community at large through a system of refereed proposals. Three basic instruments are located in the SIRTF focal plane. The Multiband Imaging Photometer (MIPS), the Infrared Array Camera (IRAC), and the Infrared Spectrometer (IRS), taken together, provide imaging and spectroscopy from 3.5 to 160 microns. Among the solar system studies suited to SIRTF are the following: 1) spectroscopy and radiometry of small bodies from the asteroid main belt, through the Trojan clouds, to the Kuiper Disk; 2) dust distribution in the zodiacal cloud and the Earth's heliocentric dust ring; 3) spectroscopy and radiometry of comets; and 4) spectroscopy and radiometry of planets and their satellites. Searches for, and studies of, dust disks around other stars, brown dwarfs, and superplanets will also be conducted with SIRTF. The SIRTF web site (http://ssc.ipac.caltech.edu/sirtf) contains important details and documentation on the project, the spacecraft, the telescope, instruments, and observing procedures. A community-wide workshop for solar system studies with SIRTF is being planned by the author and Martha S. Hanner for the summer of 1999.

  7. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each case was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. As a consequence, vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images and afford the best possible opportunity for reading by a remotely located specialist.

  8. Advanced imaging system

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document describes the Advanced Imaging System CCD-based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed, and complete schematics and source code listings are provided.

  9. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the location technique of beacon photogrammetry, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter, with the images output to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems, and camera coordinate systems were established. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the resulting linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
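
    The pose recovery described above amounts to a perspective-n-point problem. Below is a hedged sketch using OpenCV's solvePnP in place of the paper's MATLAB linear solution; the beacon layout, intrinsics, and pixel coordinates are invented for illustration.

    ```python
    import cv2
    import numpy as np

    # beacon positions in the target (helicopter) frame, meters -- invented layout
    object_pts = np.array([[-1.0,  0.5, 0.0], [1.0,  0.5, 0.0],
                           [-1.0, -0.5, 0.0], [1.0, -0.5, 0.0]])
    # detected beacon centroids in one camera image, pixels -- invented values
    image_pts = np.array([[312.4, 240.1], [688.2, 238.7],
                          [305.9, 470.3], [694.1, 468.8]])
    K = np.array([[1000.0,    0.0, 512.0],   # invented pinhole intrinsics
                  [   0.0, 1000.0, 384.0],
                  [   0.0,    0.0,   1.0]])
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix / translation vector of the target frame
    ```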

  10. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
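
    The patent's processing chain maps naturally onto a few lines: difference each camera's pre- and post-illumination frames so only the laser spot survives, locate the spot, and convert the horizontal disparity to range with the stereo triangulation relation Z = f·B/d. Rectified, grayscale inputs are assumed, and the smoothing kernel size is an illustrative choice.

    ```python
    import cv2

    def laser_range(before_l, after_l, before_r, after_r, focal_px, baseline_m):
        def spot_x(before, after):
            diff = cv2.absdiff(after, before)             # common background pixels cancel
            blurred = cv2.GaussianBlur(diff, (9, 9), 0)
            _, _, _, max_loc = cv2.minMaxLoc(blurred)     # brightest residual = laser spot
            return max_loc[0]                             # x coordinate, pixels
        disparity = float(spot_x(before_l, after_l) - spot_x(before_r, after_r))
        return focal_px * baseline_m / disparity          # Z = f * B / d
    ```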

  11. Compact CPW-fed spiral-patch monopole antenna with tuneable frequency for multiband applications

    NASA Astrophysics Data System (ADS)

    Beigi, P.; Nourinia, J.; Zehforoosh, Y.

    2018-04-01

    A frequency-reconfigurable, coplanar waveguide-fed monopole antenna with four switchable states for multiband applications is reported. The monopole antenna includes a square-spiral patch and two L-shaped elements. The number of frequency resonances is increased by adding the square spiral. In the reported antenna, two PIN diodes are used to achieve multiband operation. The PIN diodes embedded on the spiral patch control the frequency resonances depending on whether they are forward-biased or in the off state. The final antenna, with a compact size of 20 × 20 × 1 mm^3, has been fabricated on an inexpensive FR4 substrate. Experimental and simulation results agree well, suggesting that the reported antenna is a good candidate for multiband applications.

  12. Fair comparison of complexity between a multi-band CAP and DMT for data center interconnects.

    PubMed

    Wei, J L; Sanchez, C; Giacoumidis, E

    2017-10-01

    We present, to the best of our knowledge, the first detailed analysis and fair comparison of the complexity of 56 Gb/s multi-band carrierless amplitude and phase (CAP) modulation and discrete multi-tone (DMT) modulation over 80 km dispersion-compensation-fiber-free single-mode fiber links based on intensity modulation and direct detection for data center interconnects. We show that the matched finite impulse response filters and the inverse fast Fourier transform (IFFT)/FFT account for the majority of the complexity of multi-band CAP and DMT, respectively. The choice of the multi-band CAP sub-band count and the DMT IFFT/FFT size has a significant impact on system complexity and performance, and the trade-off must be considered.

  13. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscopes use one camera to capture images of the intestinal surface. They can observe an abnormal point but cannot provide detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is an increase in viewing range of up to 2.99 times with respect to a two-camera system. Together with a 3D monitor, the system provides precise information about symptomatic points, helping doctors diagnose disease.

  14. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the differences between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  15. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High-dynamic-range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and it can better reflect the real environment's light and color information. Currently, high-dynamic-range image synthesis based on sequences of differently exposed images cannot adapt to dynamic scenes: it fails to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high-dynamic-range image acquisition method based on a multiplex camera system is proposed. First, sequences of differently exposed images were captured with the camera array, and a derivative optical flow method based on color gradients was used to estimate the deviation between images and align them. Then, a high-dynamic-range fusion weighting function was established by combining the inverse camera response function with the deviation between images, and applied to generate a high-dynamic-range image. Experiments show that the proposed method can effectively obtain high-dynamic-range images of dynamic scenes and achieves good results.
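
    A sketch of the weighting idea described above: each exposure contributes a radiance estimate through an inverse camera response function, weighted to trust well-exposed mid-tone pixels. Alignment (the paper's optical-flow step) is assumed already done; the hat-shaped weight and 8-bit look-up-table response are illustrative choices, not the paper's exact weighting function.

    ```python
    import numpy as np

    def hdr_merge(images, exposures, inv_crf):
        """images: aligned uint8 frames; exposures: exposure times (s);
        inv_crf: length-256 inverse response LUT mapping pixel value -> radiance."""
        num = np.zeros(images[0].shape, dtype=np.float64)
        den = np.zeros_like(num)
        for img, t in zip(images, exposures):
            w = 1.0 - 2.0 * np.abs(img.astype(np.float64) / 255.0 - 0.5)  # trust mid-tones
            num += w * inv_crf[img] / t       # per-exposure radiance estimate
            den += w
        return num / np.maximum(den, 1e-6)    # weighted average radiance map
    ```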

  16. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.

    PubMed

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-03-23

    Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and the brightness of the background make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  18. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darne, C; Robertson, D; Alsanea, F

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm^3) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  19. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.

  20. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, a pressing task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
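
    One widely used form of the "sensor trace" mentioned above is the photo-response non-uniformity (PRNU) fingerprint. The sketch below uses a Gaussian denoiser as a simplified stand-in for the wavelet filters common in the forensics literature: the high-frequency residual of a test image is correlated against a candidate camera's reference fingerprint.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img, sigma=1.5):
        # high-frequency residual left after denoising; the sensor's
        # PRNU-like fingerprint concentrates here
        img = img.astype(np.float64)
        return img - gaussian_filter(img, sigma)

    def fingerprint_correlation(residual, reference):
        # normalized correlation against a reference fingerprint (an average of
        # residuals from images known to come from the candidate camera)
        a = residual - residual.mean()
        b = reference - reference.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
    ```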

  1. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

    Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are, respectively, small and large accelerator-driven pulsed-neutron facilities for such imaging. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM, and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used, a super-resolution technique was applied, and the spatial resolution was found to be further improved.
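
    The center-of-gravity step reduces to an intensity-weighted mean over a thresholded spot patch, which is what recovers sub-pixel position from a blurred scintillation spot. The threshold handling below is an illustrative choice, not the facility's exact processing.

    ```python
    import numpy as np

    def spot_centroid(patch, threshold):
        """Sub-pixel (x, y) center of gravity of one neutron spot image."""
        p = np.where(patch > threshold, patch.astype(np.float64) - threshold, 0.0)
        total = p.sum()
        if total == 0.0:
            return None                       # no spot above the background level
        yy, xx = np.mgrid[0:p.shape[0], 0:p.shape[1]]
        return (xx * p).sum() / total, (yy * p).sum() / total
    ```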

  2. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. However, it is unsuitable for stereo-cameras, whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed to be the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixel for 640×480 web cameras.
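
    The scene-based step of the pipeline, SIFT matching filtered by RANSAC to estimate the fundamental matrix, can be sketched with OpenCV as below. Grayscale inputs are assumed, and the ratio-test and RANSAC thresholds are conventional defaults, not the paper's exact settings.

    ```python
    import cv2
    import numpy as np

    def fundamental_from_scene(img_left, img_right):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img_left, None)    # grayscale images assumed
        k2, d2 = sift.detectAndCompute(img_right, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # Lowe ratio test keeps only distinctive matches
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < 0.75 * n.distance]
        pts1 = np.float32([k1[m.queryIdx].pt for m in good])
        pts2 = np.float32([k2[m.trainIdx].pt for m in good])
        F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        return F
    ```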

  3. Emirates eXploration Imager (EXI) Overview from the Emirates Mars Mission

    NASA Astrophysics Data System (ADS)

    Al Shamsi, M. R.; Wolff, M. J.; Jones, A. R.; Khoory, M. A.; Osterloo, M. M.; AlMheiri, S.; Reed, H.; Drake, G.

    2017-12-01

    The Emirates eXploration Imager (EXI) instrument is one of three scientific instruments aboard the Emirates Mars Mission (EMM) spacecraft, "Hope". The planned launch window opens in the summer of 2020, and the goal of this United Arab Emirates (UAE) mission is to explore the dynamics of the Martian atmosphere through global spatial sampling on both diurnal and seasonal timescales. A particular focus of the mission is improving our understanding of the global circulation of the lower atmosphere and its connections to the upward transport of energy and the escape of atmospheric particles from the upper atmosphere. This will be accomplished using three unique and complementary scientific instruments. The subject of this presentation, EXI, is a multi-band camera capable of taking 12-megapixel images, which translates to a spatial resolution of better than 8 km, with well-calibrated radiometric performance. EXI uses a selector wheel mechanism consisting of 6 discrete bandpass filters to sample the optical spectral region: 3 UV bands and 3 visible (RGB) bands. Atmospheric characterization will involve retrieval of the ice optical depth using the 300-340 nm band, the dust optical depth in the 205-235 nm range, and the column abundance of ozone with a band covering 245-275 nm. Radiometric fidelity is optimized while simplifying the optical design by separating the UV and VIS optical paths. The instrument is being developed jointly by the Laboratory for Atmospheric and Space Physics (LASP), University of Colorado, Boulder, USA, and the Mohammed Bin Rashid Space Centre (MBRSC), Dubai, UAE. The development of analysis software (reduction and retrieval) is enabled by an EXI Observation Simulator. This package produces EXI-like images using a combination of realistic viewing geometry (NAIF and a "reference trajectory") and simulated radiance values that include relevant atmospheric conditions and properties (Global Climate Model, DISORT). These noiseless images can then have instrument effects added (e.g., read noise, dark current, pixel sensitivity) to allow direct testing of data compression schemes, calibration pipeline processing, and atmospheric retrievals.

  4. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  5. Image quality assessment for selfies with and without super resolution

    NASA Astrophysics Data System (ADS)

    Kubota, Aya; Gohshi, Seiichi

    2018-04-01

    With the advent of cellphone cameras, particularly on smartphones, many people now take photos of themselves, alone or with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras; however, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is a recent technological advancement that increases image resolution. We developed a new SR technology that can be processed on smartphones, and smartphones with this technology are currently available in the market. However, the effectiveness of the new SR technology has not yet been verified. Comparing image quality with and without SR on a smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessment are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal to Noise Ratio (PSNR), do not always agree with how we perceive image and video quality. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at a high cost because of personnel expenses for observers, its results are highly reproducible when conducted under proper conditions and with statistical analysis. In this study, the subjective assessment results for selfie images are reported.

  6. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposure images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
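
    The sigmoidal boosting of short exposures can be illustrated with a normalized logistic tone curve; the gain and midpoint below are invented parameters, since the paper's exact boosting curve is not reproduced here.

    ```python
    import numpy as np

    def sigmoid_boost(img, gain=10.0, midpoint=0.35):
        """Logistic tone boost for a short-exposure frame (illustrative parameters)."""
        x = img.astype(np.float64) / 255.0
        y = 1.0 / (1.0 + np.exp(-gain * (x - midpoint)))
        # rescale so the curve maps [0, 1] exactly onto [0, 1]
        lo = 1.0 / (1.0 + np.exp(gain * midpoint))
        hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))
        return (y - lo) / (hi - lo)
    ```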

  7. Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging

    NASA Astrophysics Data System (ADS)

    Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.

    2018-04-01

    We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft to characterize the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundred images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter, and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1·T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analysed images from the combined quadrangles H03 and H07, a selected region covered by approximately 10,600 images, in which we identified about 83,900 tie points. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, and the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approximately 0.0107 mm per degree of temperature for the Narrow Angle Camera (NAC). This is in agreement with the thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in photogrammetric analysis; specifically, they may be responsible for erroneous long-wavelength trends in topographic models.
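
    Fitting the linear thermal model f(T) = A0 + A1·T is an ordinary least-squares problem once per-image focal-length estimates and focal-plane temperatures are in hand. The numbers below are made up for illustration and are not MESSENGER data.

    ```python
    import numpy as np

    T = np.array([5.1, 12.3, 18.9, 25.4, 31.0])        # focal-plane temperature, deg C (made up)
    f = np.array([78.02, 78.10, 78.17, 78.24, 78.30])  # estimated focal length, mm (made up)
    A1, A0 = np.polyfit(T, f, 1)                       # least-squares slope and intercept
    print(f"f(T) = {A0:.4f} + {A1:.5f} * T  [mm]")
    ```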

  8. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light-staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems.

  9. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera is colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color matching between the two devices. The reproduction is performed at three levels: level processing, gamma correction, and color transformation. Image contrast is increased by level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples is calculated to obtain the gamma of the camera, and a method for estimating the unknown monitor gamma is proposed. The level-processed values are then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3 by 3 or 3 by 4 matrix is used, calculated by regression between the gamma-corrected values and the measured tristimulus values of the test color samples. Various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, are displayed in a dialogue box implemented in our software, from which a user can easily choose the best reproduced image by comparing them.
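
    The color transformation stage is a linear regression from gamma-corrected camera values to measured tristimulus values over the test patches; for the 3 by 3 case it is a single least-squares solve. This is a sketch of that idea, not the software's exact fitting procedure.

    ```python
    import numpy as np

    def color_matrix(camera_rgb, measured_xyz):
        """camera_rgb: (n_patches, 3) gamma-corrected camera values;
        measured_xyz: (n_patches, 3) measured tristimulus values."""
        M, *_ = np.linalg.lstsq(camera_rgb, measured_xyz, rcond=None)
        # per pixel: xyz ≈ M.T @ rgb; a 3 by 4 variant appends a constant
        # column to camera_rgb to absorb an offset term
        return M.T
    ```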

  10. Blur spot limitations in distal endoscope sensors

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Shechterman, Mark; Horesh, Nadav

    2006-02-01

    In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for blur phenomena, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantages of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.

  11. Automatic Orientation of Large Blocks of Oblique Images

    NASA Astrophysics Data System (ADS)

    Rupnik, E.; Nex, F.; Remondino, F.

    2013-05-01

    Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to advantages such as ease of interpretation, completeness through mitigation of occluded areas, and system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery remains a topic of research. The configuration of the cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable; for large image blocks this is certainly an impediment. In the theoretical part of the work we review three common least squares adjustment methods and recap possible ways to orient a multi-camera system. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon EOS 1D Mark III) with different focal lengths. The oblique cameras are rotated in the four looking directions (forward, backward, left, and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a tool developed to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold, in terms of computational time and of the success of the bundle block adjustment. It exploits the georeferenced information provided by the Applanix system to constrain feature point extraction to relevant images only and to guide the concatenation of images during relative orientation. Ultimately an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pixels.

  12. Error modeling and analysis of star cameras for a class of 1U spacecraft

    NASA Astrophysics Data System (ADS)

    Fowler, David M.

    As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has an impressive collection of sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study lays the groundwork for determining the capabilities of a smartphone camera acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those from higher-quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager so that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated by identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.

  13. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  14. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    [Figure-list fragment; recoverable content:] Polarization control for 3D imaging with the Sony SRX-R105 digital cinema projectors; HDMAX camera and Sony SRX-R105 projector configuration for 3D; effect of camera rotation on the projected overlay image. The report describes a system that combines a pair of FAU's HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection.

  15. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Because of the retina-like sensor's special pixel distribution, image coordinate transformation and interpolation based on sub-pixel interpolation must be realized. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  16. Single and multi-band electromagnetic induced transparency-like metamaterials with coupled split ring resonators

    NASA Astrophysics Data System (ADS)

    Bagci, Fulya; Akaoglu, Baris

    2017-08-01

    We present a metamaterial configuration exhibiting single and multi-band electromagnetic induced transparency (EIT)-like properties. The unit cell of the single-band EIT-like metamaterial consists of a multi-split ring resonator surrounded by a split ring resonator. The multi-split ring resonator acts as a quasi-dark or dark resonator, depending on the polarization of the incident wave, and the split ring resonator serves as the bright resonator. Combining these two resonators results in a single-band EIT-like transmission inside the stop band. The EIT-like transmission phenomenon is also clearly observed in the measured transmission spectrum at almost the same frequencies for vertically and horizontally polarized waves, and the numerical results are verified for normal incidence. Moreover, multi-band transmission windows are created within a wide band by combining two slightly different single-band EIT-like metamaterial unit cells, which exhibit two different coupling strengths, inside a supercell configuration. Group indices as high as 123 for single-band and 488 for tri-band transmission, accompanied by high transmission rates (over 80%), are achieved, rendering the metamaterial very suitable for multi-band slow-light applications. It is shown that the group delay of the propagating wave can be increased and dynamically controlled by changing the polarization angle. Multi-band EIT-like transmission is also verified experimentally, and good agreement with simulations is obtained. The proposed methodology for obtaining multi-band EIT, which takes advantage of a supercell configuration hosting slightly differently configured unit cells, can be utilized for easy formation and manipulation of multi-band transmission windows inside a stop band.

  17. The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.

    ERIC Educational Resources Information Center

    McCain, Thomas A.; Wakshlag, Jacob J.

    The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…

  18. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use, the proposed method can be applied to improve image registration accuracy or to reduce computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration to reduce image distortion caused by camera lenses. The proposed method relies on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on the shapes and colors of captured objects.

  19. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system having high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40°C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.

  20. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.

  1. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  2. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types, which leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) We present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
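
    As a baseline illustration only (the paper's reconstruction jointly minimizes aliasing and is considerably more sophisticated), an under-sampled CFA channel can be estimated by interpolating its sparse samples over the full grid; names are placeholders:

      # Dense estimate of one under-sampled color-filter-array channel.
      import numpy as np
      from scipy.interpolate import griddata

      def reconstruct_channel(raw, mask):
          """raw: (H, W) mosaicked image; mask: True where this filter sampled."""
          ys, xs = np.nonzero(mask)
          grid_y, grid_x = np.mgrid[0:raw.shape[0], 0:raw.shape[1]]
          return griddata((ys, xs), raw[ys, xs], (grid_y, grid_x),
                          method='linear', fill_value=raw[ys, xs].mean())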

  3. Chiral anomaly enhancement and photoirradiation effects in multiband touching fermion systems

    NASA Astrophysics Data System (ADS)

    Ezawa, Motohiko

    2017-05-01

    Multiband touchings, together with the emergence of fermions exhibiting linear dispersions, have recently been predicted and realized in various materials. We first investigate the Adler-Bell-Jackiw chiral anomaly in these multiband touching semimetals when they are described by the pseudospin operator in a high-dimensional representation. By evaluating the Chern number, we show that the anomalous Hall effect is enhanced depending on the magnitude of the pseudospin. This is also confirmed by an analysis of the Landau levels when a magnetic field is applied: charge pumping occurs from one multiband touching point to another through multichannel Landau levels in the presence of parallel electric and magnetic fields. We also show a pair annihilation of two multiband touching points by photoirradiation. Furthermore, we propose generalizations of Dirac semimetals, multiple Weyl semimetals, and loop-nodal semimetals to those composed of fermions carrying pseudospins in a high-dimensional representation. Finally, we investigate the three-band touching protected by the C3 symmetry. We show that the three-band touching point is broken into two Weyl points by photoirradiation.
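
    As an orientation aid (not reproduced from the paper), the Chern-number counting behind such an enhancement can be sketched for the standard linearized pseudospin-S Hamiltonian; signs and filling conventions vary in the literature:

      H(\mathbf{k}) = \hbar v\, \mathbf{k}\cdot\mathbf{S}, \qquad C_m = -2m, \quad m = -S, \dots, S

    Summing |C_m| over the bands below the node gives 1, 2 and 4 for S = 1/2, 1 and 3/2 respectively, illustrating how the anomalous Hall response grows with the pseudospin magnitude.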

  4. Handheld hyperspectral imager system for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-08-01

    A small, handheld, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera was designed for remote gas leak detection; however, the architecture of the camera is versatile enough to be applied to numerous other applications such as homeland security, chemical/biological agent detection, and medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image-processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet connection, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected and the camera can be used in a laboratory environment as a standalone imaging spectrometer.

  5. Hand-held hyperspectral imager for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-03-01

    A small, handheld, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera was designed for remote gas leak detection; however, the architecture of the camera is versatile enough to be applied to numerous other applications such as homeland security, chemical/biological agent detection, and medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image-processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet connection, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected and the camera can be used in a laboratory environment as a standalone imaging spectrometer.

  6. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, application and the development of subsequent satellites, we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras over the Beijing area. The same four statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate each camera's dynamic range consistency, and for each latitudinal overlap of PMS1 and PMS2 to evaluate the dynamic range consistency between the two cameras. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; however, the consistency of dynamic range between images from a single camera is better than that between the dual cameras.
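
    The statistics involved are straightforward to reproduce; a minimal sketch (array names are illustrative):

      # Dynamic-range statistics for one image or one overlap region.
      import numpy as np

      def dynamic_range_stats(dn):
          """dn: 2-D array of digital numbers (DN)."""
          return {"min": float(dn.min()), "max": float(dn.max()),
                  "mean": float(dn.mean()), "std": float(dn.std()),
                  "range": float(dn.max() - dn.min())}

      # Consistency check between overlapping strips of the two cameras:
      # dynamic_range_stats(overlap_pms1) vs. dynamic_range_stats(overlap_pms2)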

  7. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of conventional digital cameras and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system built around a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light can be flexibly modulated in the DMD camera, enabling the camera pixels always to receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement an optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate different light intensities and recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and perform HDRI on different objects.
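
    A hedged sketch of the per-pixel coded-exposure idea: if each pixel's effective exposure is attenuated by a known DMD duty cycle, radiance can be recovered by dividing the measurement by that per-pixel gain, and the adaptive control loop reduces to a simple per-pixel update. The details below are assumptions for illustration, not the authors' exact algorithm:

      # Per-pixel coded exposure: recovery plus one adaptive control step.
      import numpy as np

      def recover_hdr(measured, duty_cycle, eps=1e-6):
          """measured: sensor image in [0, 1]; duty_cycle: per-pixel DMD
          exposure fraction in (0, 1]. Returns a linear radiance estimate."""
          return measured / np.maximum(duty_cycle, eps)

      def update_duty_cycle(measured, duty_cycle, lo=0.05, hi=0.95):
          """Dim pixels that saturate, brighten pixels that are too dark."""
          dc = duty_cycle.copy()
          dc[measured > hi] *= 0.5  # halve exposure where saturated
          dc[measured < lo] = np.minimum(dc[measured < lo] * 2.0, 1.0)
          return dc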

  8. 'No Organics' Zone Circles Pinwheel

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Pinwheel galaxy, otherwise known as Messier 101, sports bright reddish edges in this new infrared image from NASA's Spitzer Space Telescope. Research from Spitzer has revealed that this outer red zone lacks organic molecules present in the rest of the galaxy. The red and blue spots outside of the spiral galaxy are either foreground stars or more distant galaxies.

    The organics, called polycyclic aromatic hydrocarbons, are dusty, carbon-containing molecules that help in the formation of stars. On Earth, they are found anywhere combustion reactions take place, such as barbecue pits and exhaust pipes. Scientists also believe this space dust has the potential to be converted into the stuff of life.

    Spitzer found that the polycyclic aromatic hydrocarbons decrease in concentration toward the outer portion of the Pinwheel galaxy, then quickly drop off and are no longer detected at its very outer rim. According to astronomers, there's a threshold at the rim where the organic material is being destroyed by harsh radiation from stars. Radiation is more damaging at the far reaches of a galaxy because the stars there have less heavy metals, and metals dampen the radiation.

    The findings help researchers understand how stars can form in these harsh environments, where polycyclic aromatic hydrocarbons are lacking. Under normal circumstances, the polycyclic aromatic hydrocarbons help cool down star-forming clouds, allowing them to collapse into stars. In regions like the rim of the Pinwheel, as well as in the very early universe, stars form without the organic dust. Astronomers don't know precisely how this works, so the rim of the Pinwheel provides them with a laboratory for examining the process relatively close up.

    In this image, infrared light with a wavelength of 3.6 microns is colored blue; 8-micron light is green; and 24-micron light is red. All three of Spitzer's instruments were used in the study: the infrared array camera, the multiband imaging photometer and the infrared spectrograph.

  9. The third data release of the Kilo-Degree Survey and associated data products

    NASA Astrophysics Data System (ADS)

    de Jong, Jelte T. A.; Verdois Kleijn, Gijs A.; Erben, Thomas; Hildebrandt, Hendrik; Kuijken, Konrad; Sikkema, Gert; Brescia, Massimo; Bilicki, Maciej; Napolitano, Nicola R.; Amaro, Valeria; Begeman, Kor G.; Boxhoorn, Danny R.; Buddelmeijer, Hugo; Cavuoti, Stefano; Getman, Fedor; Grado, Aniello; Helmich, Ewout; Huang, Zhuoyi; Irisarri, Nancy; La Barbera, Francesco; Longo, Giuseppe; McFarland, John P.; Nakajima, Reiko; Paolillo, Maurizio; Puddu, Emanuella; Radovich, Mario; Rifatto, Agatino; Tortora, Crescenzo; Valentijn, Edwin A.; Vellucci, Civita; Vriend, Willem-Jan; Amon, Alexandra; Blake, Chris; Choi, Ami; Conti, Ian Fenech; Gwyn, Stephen D. J.; Herbonnet, Ricardo; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Merten, Julian; Miller, Lance; Schneider, Peter; Viola, Massimo

    2017-08-01

    Context. The Kilo-Degree Survey (KiDS) is an ongoing optical wide-field imaging survey with the OmegaCAM camera at the VLT Survey Telescope. It aims to image 1500 square degrees in four filters (ugri). The core science driver is mapping the large-scale matter distribution in the Universe, using weak lensing shear and photometric redshift measurements. Further science cases include galaxy evolution, Milky Way structure, detection of high-redshift clusters, and finding rare sources such as strong lenses and quasars. Aims: Here we present the third public data release and several associated data products, adding further area, homogenized photometric calibration, photometric redshifts and weak lensing shear measurements to the first two releases. Methods: A dedicated pipeline embedded in the Astro-WISE information system is used for the production of the main release. Modifications with respect to earlier releases are described in detail. Photometric redshifts have been derived using both Bayesian template fitting and machine-learning techniques. For the weak lensing measurements, optimized procedures based on the THELI data reduction and lensfit shear measurement packages are used. Results: In this third data release, stacked ugri images for an additional 292 survey tiles (≈300 deg2) are made available, accompanied by weight maps, masks, and source lists. The multi-band catalogue, including homogenized photometry and photometric redshifts, covers the combined DR1, DR2 and DR3 footprint of 440 survey tiles (≈447 deg2). Limiting magnitudes are typically 24.3, 25.1, 24.9, 23.8 (5σ in a 2'' aperture) in ugri, respectively, and the typical r-band PSF size is less than 0.7''. The photometric homogenization scheme ensures accurate colours and an absolute calibration stable to ≈2% for gri and ≈3% in u. Separately released for the combined area of all KiDS releases to date are a weak lensing shear catalogue and photometric redshifts based on two different machine-learning techniques.

  10. Simultaneous measurement and modulation of multiple physiological parameters in the isolated heart using optical techniques

    PubMed Central

    Lee, Peter; Yan, Ping; Ewart, Paul; Kohl, Peter

    2012-01-01

    Whole-heart multi-parametric optical mapping has provided valuable insight into the interplay of electro-physiological parameters, and this technology will continue to thrive as dyes are improved and technical solutions for imaging become simpler and cheaper. Here, we show the advantage of using improved 2nd-generation voltage dyes, provide a simple solution to panoramic multi-parametric mapping, and illustrate the application of flash photolysis of caged compounds for studies in the whole heart. For proof of principle, we used the isolated rat whole-heart model. After characterising the blue and green isosbestic points of di-4-ANBDQBS and di-4-ANBDQPQ, respectively, two voltage and calcium mapping systems are described. With two newly custom-made multi-band optical filters, (1) di-4-ANBDQBS and fluo-4 and (2) di-4-ANBDQPQ and rhod-2 mapping are demonstrated. Furthermore, we demonstrate three-parameter mapping using di-4-ANBDQPQ, rhod-2 and NADH. Using off-the-shelf optics and the di-4-ANBDQPQ and rhod-2 combination, we demonstrate panoramic multi-parametric mapping, affording a 360° spatiotemporal record of activity. Finally, local optical perturbation of calcium dynamics in the whole heart is demonstrated using the caged compound, o-nitrophenyl ethylene glycol tetraacetic acid (NP-EGTA), with an ultraviolet light-emitting diode (LED). Calcium maps (heart loaded with di-4-ANBDQPQ and rhod-2) demonstrate successful NP-EGTA loading and local flash photolysis. All imaging systems were built using only a single camera. In conclusion, using novel 2nd-generation voltage dyes, we developed scalable techniques for multi-parametric optical mapping of the whole heart from one point of view and panoramically. In addition to these parameter imaging approaches, we show that it is possible to use caged compounds and ultraviolet LEDs to locally perturb electrophysiological parameters in the whole heart. PMID:22886365

  11. The Limited Duty/Chief Warrant Officer Professional Guidebook

    DTIC Science & Technology

    1985-01-01

    subsurface imaging. They plan and manage the operation of imaging commands and activities, combat camera groups and aerial reconnaissance imaging...picture and video systems used in aerial, surface and subsurface imaging. They supervise the operation of imaging commands and activities, combat camera

  12. Test Image of Earth Rocks by Mars Camera Stereo

    NASA Image and Video Library

    2010-11-16

    This stereo view of terrestrial rocks combines two images taken by a test twin of the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars Science Laboratory. 3D glasses are necessary to view this image.

  13. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high-frame-rate visible CCD camera capable of operation at up to 200 frames per second is described. The camera produces a 256 × 256 pixel image by using one quadrant of a 512 × 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 × 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
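
    A hedged sketch of the digital reformatting step: four parallel output taps each deliver a slice of the quadrant, and the reformatter reassembles them into one contiguous frame. The abstract does not give the actual tap geometry, so the column-slice layout below is an assumption for illustration:

      # Reassemble four CCD output taps into one 256 x 256 frame.
      import numpy as np

      def reformat_taps(taps):
          """taps: (4, 256, 64) array, one 64-column slice per output port."""
          frame = np.empty((256, 256), dtype=taps.dtype)
          for i, tap in enumerate(taps):
              frame[:, i * 64:(i + 1) * 64] = tap  # place each tap's columns
          return frame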

  14. The advantages of using a Lucky Imaging camera for observations of microlensing events

    NASA Astrophysics Data System (ADS)

    Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus

    2016-05-01

    In this work, we study the advantages of using a Lucky Imaging camera for observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and its parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes, one of which is equipped with a Lucky Imaging camera; such a camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency, which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.

  15. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In the on-board photographing process of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM can transform platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows the M2052 manganese-copper alloy is good enough to suppress image motion below 125 Hz, which is the vibration frequency range of satellite platforms. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.
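
    For context, the standard vibration-MTF factors found in electro-optical system texts can be sketched as follows; the paper's exact VMTF model may differ. For a linear smear of extent d and a high-frequency sinusoidal vibration of amplitude a, at spatial frequency f:

      \mathrm{MTF}_{\mathrm{total}}(f) = \mathrm{MTF}_{\mathrm{static}}(f)\,\mathrm{MTF}_{\mathrm{vib}}(f), \qquad \mathrm{MTF}_{\mathrm{smear}}(f) = \left|\frac{\sin(\pi f d)}{\pi f d}\right|, \qquad \mathrm{MTF}_{\mathrm{sine}}(f) = \left|J_0(2\pi f a)\right|

    where J_0 is the zeroth-order Bessel function; damping the vibration amplitude a with the VIM pushes the vibration factor back toward unity.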

  16. Deep 1.1 mm-wavelength imaging of the GOODS-S field by AzTEC/ASTE - II. Redshift distribution and nature of the submillimetre galaxy population

    NASA Astrophysics Data System (ADS)

    Yun, Min S.; Scott, K. S.; Guo, Yicheng; Aretxaga, I.; Giavalisco, M.; Austermann, J. E.; Capak, P.; Chen, Yuxi; Ezawa, H.; Hatsukade, B.; Hughes, D. H.; Iono, D.; Johnson, S.; Kawabe, R.; Kohno, K.; Lowenthal, J.; Miller, N.; Morrison, G.; Oshima, T.; Perera, T. A.; Salvato, M.; Silverman, J.; Tamura, Y.; Williams, C. C.; Wilson, G. W.

    2012-02-01

    We report the results of the counterpart identification and a detailed analysis of the physical properties of the 48 sources discovered in our deep 1.1-mm wavelength imaging survey of the Great Observatories Origins Deep Survey-South (GOODS-S) field using the AzTEC instrument on the Atacama Submillimeter Telescope Experiment. One or more robust or tentative counterpart candidates are found for 27 and 14 AzTEC sources, respectively, by employing deep radio continuum, Spitzer/Multiband Imaging Photometer for Spitzer and Infrared Array Camera, and Large APEX Bolometer Camera 870 μm data. Five of the sources (10 per cent) have two robust counterparts each, supporting the idea that these galaxies are strongly clustered and/or heavily confused. Photometric redshifts and star formation rates (SFRs) are derived by analysing ultraviolet (UV)-to-optical and infrared (IR)-to-radio spectral energy distributions (SEDs). The median redshift of z_med ≈ 2.6 is similar to other earlier estimates, but we show that 80 per cent of the AzTEC-GOODS sources are at z ≥ 2, with a significant high-redshift tail (20 per cent at z ≥ 3.3). Rest-frame UV and optical properties of AzTEC sources are extremely diverse, spanning 10 mag in the i- and K-band photometry (a factor of 10⁴ in flux density) with median values of i = 25.3 and K = 22.6, and a broad range of red colour (i-K = 0-6) with an average value of i-K ≈ 3. These AzTEC sources are some of the most luminous galaxies in the rest-frame optical bands at z ≥ 2, with inferred stellar masses M* = (1-30) × 10¹⁰ M⊙ and UV-derived SFRs of SFR_UV ≳ 10¹-10³ M⊙ yr⁻¹. The IR-derived SFR, 200-2000 M⊙ yr⁻¹, is independent of z or M*. The resulting specific star formation rates, SSFR ≈ 1-100 Gyr⁻¹, are 10-100 times higher than those of similar-mass galaxies at z = 0, and they extend the previously observed rapid rise in the SSFR with redshift to z = 2-5. These galaxies have a SFR high enough to have built up their entire stellar mass within their Hubble time. We find only marginal evidence for an active galactic nucleus (AGN) contribution to the near-IR and mid-IR SEDs, even among the X-ray detected sources, and the derived M* and SFR show little dependence on the presence of an X-ray bright AGN.

  17. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    PubMed

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

    Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); all sets were anonymized and graders were blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level < or = 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.

  18. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system consisting of one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate it. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images, thereby generating a dataset for projector calibration. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
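
    A minimal sketch of the "projector as inverse camera" step: once the calibration-board feature points are known in projector-image coordinates (assumed here to come from the digital-image-correlation matching), a standard camera calibration routine recovers the projector intrinsics. The correspondence arrays are placeholders, not the paper's data:

      # Calibrate a projector by treating it as an inverse camera.
      import cv2
      import numpy as np

      # object_points: list of (N, 3) float32 arrays, board points in 3-D
      # projector_points: list of (N, 1, 2) float32 arrays, the same points
      #                   located in projector-image coordinates via DIC
      def calibrate_projector(object_points, projector_points, proj_size):
          rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
              object_points, projector_points, proj_size, None, None)
          return K, dist, rms  # intrinsics, distortion, reprojection error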

  19. A novel multi-band SAR data technique for fully automatic oil spill detection in the ocean

    NASA Astrophysics Data System (ADS)

    Del Frate, Fabio; Latini, Daniele; Taravat, Alireza; Jones, Cathleen E.

    2013-10-01

    With the launch of COSMO-SkyMed, the Italian constellation of small satellites for Mediterranean basin observation, and the German TerraSAR-X mission, the delivery of very high-resolution SAR data to observe the Earth day or night has increased remarkably. Taking into account other ongoing missions such as Radarsat, as well as those no longer operating such as ALOS PALSAR, ERS-SAR and ENVISAT, the amount of information at different bands available to users interested in oil spill analysis has become massive. Moreover, future SAR missions such as Sentinel-1 are scheduled for launch in the next few years, while additional support can be provided by Uninhabited Aerial Vehicle (UAV) SAR systems. Considering the opportunity represented by all these missions, the challenge is to find suitable and adequate multi-band image processing procedures able to fully exploit the huge amount of data available. In this paper we present a new fast, robust and effective automated approach for oil-spill monitoring starting from data collected at different bands, polarizations and spatial resolutions. A combination of Weibull Multiplicative Model (WMM), Pulse Coupled Neural Network (PCNN) and Multi-Layer Perceptron (MLP) techniques is proposed for achieving the aforementioned goals. One of the most innovative ideas is to separate the dark spot detection process into two main steps: WMM enhancement and PCNN segmentation. The complete processing chain has been applied to a data set containing C-band (ERS-SAR, ENVISAT ASAR), X-band (COSMO-SkyMed and TerraSAR-X) and L-band (UAVSAR) images, for an overall total of more than 200 images.
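
    A hedged skeleton of the two-step detection idea named above (enhancement, then segmentation, then neural-network classification). Real WMM and PCNN implementations are substantially more involved; this sketch substitutes a Weibull-based normalization and a simple quantile threshold so that the pipeline shape is runnable end to end:

      # Dark-spot detection skeleton: enhance, segment, classify.
      import numpy as np
      from scipy import stats
      from sklearn.neural_network import MLPClassifier

      def wmm_enhance(sar):
          """Normalize backscatter by a fitted Weibull background model."""
          c, loc, scale = stats.weibull_min.fit(sar.ravel(), floc=0.0)
          return sar / (scale + 1e-9)  # crude multiplicative normalization

      def dark_spot_mask(enhanced, q=0.05):
          """Stand-in for PCNN segmentation: keep the darkest q-quantile."""
          return enhanced < np.quantile(enhanced, q)

      # Per-region features (area, contrast, ...) -> oil vs. look-alike:
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000)
      # clf.fit(region_features, region_labels); clf.predict(new_features)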

  20. Volunteers Help Decide Where to Point Mars Camera

    NASA Image and Video Library

    2015-07-22

    This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image in this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823

  1. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect of the uniform light from an integrating sphere. The linear range of the radiometric response, the non-linearity characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have also been measured. The actual luminance of the object is retrieved from the sensor calibration results and is used when blending images, so that the panoramas reflect the scene luminance more faithfully. This compensates for the limitation of stitching approaches that produce realistic-looking results only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be overcome by using multiple cameras that together cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
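
    A minimal sketch of the radiometric pipeline implied above: invert the calibrated sensor response and divide out the vignetting pattern so that images from different sensor modules are blended in a common luminance space. response_inv and vignette are assumed calibration products, not described in the paper:

      # Map raw digital numbers to a common luminance space before blending.
      import numpy as np

      def to_luminance(raw, response_inv, vignette, exposure_s):
          """raw: (H, W) digital numbers; response_inv: callable DN -> relative
          exposure; vignette: (H, W) gain map (1.0 at the optical center)."""
          linear = response_inv(raw.astype(np.float64))
          return linear / (vignette * exposure_s)  # scene luminance estimate

      # Each module's image uses its own calibration before blending:
      # lum_i = to_luminance(raw_i, resp_inv_i, vig_i, t_i)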

  2. Tri-band optical coherence tomography for lipid and vessel spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Yu, Luoqin; Kang, Jiqiang; Wang, Xie; Wei, Xiaoming; Chan, Kin-Tak; Lee, Nikki P.; Wong, Kenneth K. Y.

    2016-03-01

    Optical coherence tomography (OCT) has been utilized for various functional imaging applications. One of its highlights is spectroscopic imaging, which can simultaneously obtain both morphologic and spectroscopic information. Assisting diagnosis and therapeutic intervention of coronary artery disease is one of the major directions in spectroscopic OCT applications. Previously, Tanaka et al. developed a spectral domain OCT (SDOCT) system to image lipid distribution within blood vessels [1]. In the meantime, Fleming et al. demonstrated optical frequency domain imaging (OFDI) using a 1.3-μm swept source and a quadratic discriminant analysis model [2]. However, these systems suffered from burdensome computation, as the variation in optical properties was calculated from a single-band illumination that provided limited contrast. On the other hand, multi-band OCT facilitates contrast enhancement with separated wavelength bands, which offers an easier way to distinguish different materials. Federici and Dubois [3] and Tsai and Chan [4] demonstrated tri-band OCT systems to further enhance the image contrast; however, these previous works left the functional properties under-explored. Our group has reported a dual-band OCT system based on a parametrically amplified Fourier domain mode-locked (FDML) laser with a time multiplexing scheme [5] and a dual-band FDML laser OCT system with wavelength-division multiplexing [6]. A fiber optical parametric amplifier (OPA) can be ideally incorporated in a multi-band spectroscopic OCT system, as it has a broad amplification window and offers an additional output range at the idler band, which is phase matched with the signal band. The sweeping ranges can thus exceed the traditional wavelength bands that are limited by intra-cavity amplifiers in FDML lasers. Here, we combine the dual-band FDML laser with a fiber OPA, which renders a simultaneous tri-band output at 1.3, 1.5, and 1.6 μm for intravascular applications. Lipid and blood vessel distribution can subsequently be visualized with the tri-band OCT system in ex vivo experiments using a porcine artery model with artificial lipid plaques.

  3. Multispectral image dissector camera flight test

    NASA Technical Reports Server (NTRS)

    Johnson, B. L.

    1973-01-01

    It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the Earth's surface from high-altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera to correct for aircraft motion.

  4. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond the popular options of a posteriori image refocusing and total-focus image generation, their basic ability to generate 3D information from single-camera imagery is a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much larger than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
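
    A hedged sketch of why the error grows with range, using the standard stereo-type error propagation with effective baseline b, focal length f and disparity uncertainty σ_d (the paper's own error budget may be more detailed):

      Z = \frac{b f}{d} \quad\Rightarrow\quad \sigma_Z \approx \left|\frac{\partial Z}{\partial d}\right| \sigma_d = \frac{Z^2}{b f}\, \sigma_d

    so the absolute depth error grows quadratically with Z, and the relative error σ_Z / Z grows linearly, consistent with the deterioration reported above.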

  5. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    cameras were installed around the test pan and an underwater GoPro® video camera recorded the fire from below the layer of fuel. 3.2.2. Camera Images...Distribution A: Approved for public release; distribution unlimited. 3.2.3. Video Images. A GoPro® video camera with a wide angle lens recorded the tests...camera and the GoPro® video camera were not used for fire suppression experiments. 3.3.2. Test Pans. Two ¼-in thick stainless steel test pans were

  6. Imagers for digital still photography

    NASA Astrophysics Data System (ADS)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  7. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.

  8. Person re-identification over camera networks using multi-task distance metric learning.

    PubMed

    Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng

    2014-08-01

    Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings. On the other hand, it is very time-consuming to label people manually in images from surveillance videos; for example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras. Therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting because the labeled data are insufficient. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. We address the fact that these Mahalanobis distance metrics are different but related, and learn them by adding joint regularization to alleviate over-fitting. Furthermore, by extending this formulation, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
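
    For orientation, the distance form being learned can be written as one Mahalanobis metric per camera pair t, tied to a shared component; this is a common multitask decomposition, and the paper's exact regularizer is not reproduced here:

      d_{M_t}(x_i, x_j) = \sqrt{(x_i - x_j)^{\top} M_t\, (x_i - x_j)}, \qquad M_t = M_0 + \Delta M_t, \quad M_t \succeq 0

    Joint regularization penalizes the task-specific deviations ΔM_t, which is what alleviates over-fitting when each camera pair has few labeled pairs.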

  9. Ground-based Spectroscopic Observation of Jovian Surface Structures by Using the Portable Spectrometer.

    NASA Astrophysics Data System (ADS)

    Iwasaki, K.; Ito, H.; Tabe, I.; Hirota, S.; Suzuki, H.

    2017-12-01

    Stripe patterns called belts and zones, with various colors, persist on the Jovian surface. Anticyclonic vortices called ovals, with various scales and colors, are maintained and drift along the boundaries between zones and belts. Some ovals have different colors despite being formed simultaneously in the same latitude region. Color changes of ovals after an interaction with other ovals have also been reported [Sánchez-Lavega et al., JGR, 2013]. The Great Red Spot (GRS) is one of the most remarkable structures on Jupiter and has been recognized for 300 years through sketches and photographic observations. Recently, the NASA spacecraft Juno has revealed more complex and finer features with various colors. A close relationship between the dynamics of the Jovian atmosphere and local colors is well known [Sánchez-Lavega et al., JGR, 2013], though the detailed mechanisms connecting them are not fully understood. Thus, the color of each structure is thought to be one of the keys to investigating the dynamics of the Jovian atmosphere. In this study, ground-based spectroscopic observations focusing on Jovian surface structures have been conducted since December 2015. The observations are carried out by combining a telescope with a small spectroscopy unit consisting of a CCD camera and a spectrometer. The spectrometer can measure the spectrum of a selected area within an image simultaneously obtained by the CCD camera. The dimensions and weight of the unit are only 18 cm × 14 cm × 4 cm and 300 g, respectively. This high portability enables flexible observations; we can bring the spectrometer to a public observatory that has a large telescope in a location with a high rate of clear skies during the desired observation period. The spectra are converted and corrected to absolute radiance at the top of the atmosphere, using radiometric calibration data obtained with an integrating sphere and measured extinction coefficients of the local atmosphere. In this talk, temporal variations in the spectra of representative Jovian structures such as the NEB, EZ, SEB and GRS, observed with the spectrometer from December 2015 to July 2017, are reported. A comparison with past spaceborne observations by the multiband camera onboard the Cassini spacecraft [Ordonez-Etxeberria et al., Icarus, 2015] is also performed to verify the observations.

  10. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  11. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments, without major modifications to current cameras, remains uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require such an imaging system to have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we create two different focus depths for the horizontal and vertical features of the pattern, thereby encoding depth. By designing the aberrated projected pattern, we are able to exploit this differential focus in post-processing tailored to the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present our information regarding the construction, calibration, and images produced by this system. The link between projected pattern design and image processing algorithms will be discussed.
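
    A hedged sketch of the wavelet-ratio depth cue: with an astigmatic projector, horizontal and vertical pattern features blur differently with distance, so the ratio of energies in the two detail subbands varies with depth. The calibration mapping from ratio to distance is assumed to exist; pywt is the PyWavelets package:

      # Ratio of horizontal- to vertical-detail wavelet energy in a patch.
      import numpy as np
      import pywt

      def hv_wavelet_ratio(patch):
          """patch: 2-D grayscale region containing the projected pattern."""
          _, (cH, cV, _) = pywt.dwt2(patch.astype(np.float64), 'db2')
          eh, ev = np.sum(cH**2), np.sum(cV**2)
          return eh / (ev + 1e-12)  # varies with which axis is in sharper focus

      # depth = calibration_curve(hv_wavelet_ratio(patch))  # learned per setup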

  12. An Example-Based Super-Resolution Algorithm for Selfie Images

    PubMed Central

    William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep

    2016-01-01

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are inevitably lost. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and effective, as it preserves sharp details without introducing counterfeit fine details. PMID:27064500
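
    A minimal sketch of learning a patch-to-patch regression operator in closed form (ridge regression); note the paper's optimal MVR formulation avoids patch vectorization, which this simplified sketch does not attempt to reproduce:

      # Closed-form linear operator mapping LR patches to HR patches.
      import numpy as np

      def learn_regression_operator(lr_patches, hr_patches, lam=1e-3):
          """lr_patches: (d_lr, N); hr_patches: (d_hr, N); columns are patches
          extracted from rear-camera exemplar images. Returns W: (d_hr, d_lr)."""
          X, Y = lr_patches, hr_patches
          d = X.shape[0]
          return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

      # Super-resolving a patch: hr_patch_estimate = W @ lr_patch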

  13. Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System

    NASA Astrophysics Data System (ADS)

    Stebner, K.; Wieden, A.

    2014-03-01

    Dynamic camera systems with moving parts are difficult to handle in the photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Even minimal changes of the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical-viewing and a tumbling oblique camera was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique data set ranges from 1.5 pixels, when using all images of the image block, to 0.3 pixels when using only adjacent images.

  14. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera-to-infrared-diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way of measuring depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter for a model that estimates camera-to-emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model relating these magnitudes was derived and calibrated using images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, assuming that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information. PMID:22778608

  15. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    PubMed

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera-to-infrared-diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way of measuring depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter for a model that estimates camera-to-emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model relating these magnitudes was derived and calibrated using images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, assuming that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information.

  16. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.

  17. Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.

    2017-01-01

    Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that the cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm throughout the whole field of view.

  18. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera

    NASA Astrophysics Data System (ADS)

    Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin

    2014-12-01

    The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered by a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are specific to these two kinds of cameras, and there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficients. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) compared with the uncorrected images, the average color difference of TCAM is 4.30, a reduction of 62.1%; (2) the average color differences of the left and right cameras in PCAM are 4.14 and 4.16, reductions of 68.3% and 67.6%, respectively.
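
    A minimal sketch of the kind of linear color-correction step such a calibration yields: a 3 × 3 matrix fitted by least squares so that it maps camera RGB values of calibration patches to their reference values (e.g., CIE-based targets). The patch arrays are placeholders, and the mission's actual model may be more elaborate:

      # Fit a 3x3 color-correction matrix from calibration patches.
      import numpy as np

      def fit_color_matrix(camera_rgb, reference_rgb):
          """camera_rgb, reference_rgb: (N, 3) patch measurements. Returns a
          (3, 3) matrix M minimizing ||camera_rgb @ M.T - reference_rgb||."""
          M_T, *_ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)
          return M_T.T

      # Applying it: corrected = image_rgb.reshape(-1, 3) @ M.T, reshaped back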

  19. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
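
    A minimal sketch of the SAD-correlation depth pipeline described above, using OpenCV's block matcher (which minimizes the sum of absolute differences) and the standard disparity-to-depth relation; the camera parameters are placeholders, not the wheelchair system's calibration:

      # Depth map from rectified 8-bit grayscale stereo pair via SAD matching.
      import cv2
      import numpy as np

      focal_px, baseline_m = 400.0, 0.12  # assumed calibration values

      def depth_map(left_gray, right_gray):
          matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
          disparity = matcher.compute(left_gray, right_gray)
          disparity = disparity.astype(np.float32) / 16.0  # fixed-point scale
          with np.errstate(divide='ignore'):
              depth = focal_px * baseline_m / disparity  # metres
          depth[disparity <= 0] = np.inf  # mark invalid matches
          return depth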

  20. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the plane-template-based camera calibration algorithm is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved, so there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the image before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.
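
    The point-correction step can be illustrated with OpenCV: undistort matched points with the calibrated model, then compare displacement fields before and after. The camera matrix and distortion coefficients below are placeholders, not values from the paper.

      import numpy as np
      import cv2

      # placeholder intrinsics and distortion coefficients (k1 k2 p1 p2 k3)
      K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])
      dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])

      pts = (np.random.rand(100, 1, 2) * [1280, 960]).astype(np.float32)
      undist = cv2.undistortPoints(pts, K, dist, P=K)   # P=K keeps pixel units
      shift = np.linalg.norm(undist - pts, axis=2)
      print(shift.mean())   # average displacement attributed to lens distortion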

  1. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera's requirements.

  2. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (<100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit the energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and a new system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered, counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.

  3. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras and an electron-beam-tube-based video camera. Measurements were made of signal and noise, and consequently of signal-to-noise per pixel, as a function of exposure. All systems have a linear response with respect to exposure and, with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio higher than that observed with either CCD camera or with the electron-beam-tube-based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2) respectively, while the electron-beam-tube-based video camera permits detection of only a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD and generate salt-and-pepper noise, which interferes severely with attempts to determine accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).
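
    Rank-order filtering in its simplest form is a median filter; the hedged sketch below shows how isolated direct-hit outliers can be suppressed before estimating image noise, using synthetic data.

      import numpy as np
      from scipy.ndimage import median_filter

      img = np.random.normal(1000, 30, (256, 256))  # synthetic portal image
      hits = np.random.rand(*img.shape) < 0.001     # sparse direct hits
      img[hits] += 5000                             # large outlier responses

      cleaned = median_filter(img, size=3)          # 3x3 rank-order (median) filter
      print(img.std(), cleaned.std())               # outliers no longer inflate noise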

  4. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to an accuracy of approximately 0.2 voxels in the lateral directions and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
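
    The MART update at the heart of such reconstructions can be sketched on a toy linear system; in the real method the weight matrix is built from the plenoptic camera's ray geometry, so everything below is illustrative only.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.random((50, 100))        # projection weights (rays x voxels), toy values
      f_true = rng.random(100)         # "true" voxel intensities
      g = W @ f_true                   # recorded pixel values

      f = np.ones(100)                 # positive initial guess
      mu = 0.5                         # relaxation factor
      for _ in range(100):             # MART sweeps
          for i in range(len(g)):
              proj = W[i] @ f
              if proj > 0:
                  f *= (g[i] / proj) ** (mu * W[i])   # multiplicative update
      print(np.abs(W @ f - g).max())   # projection residual shrinks toward zero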

  5. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  6. Micro-Imagers for Spaceborne Cell-Growth Experiments

    NASA Technical Reports Server (NTRS)

    Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen

    2006-01-01

    A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.

  7. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products which must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer only the relevant information, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
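
    Flat-field correction, one of the FPGA-implemented steps named above, reduces to a per-pixel offset-and-gain normalization; a hedged sketch with synthetic reference lines:

      import numpy as np

      dark = np.random.normal(100, 2, 2048)     # dark reference scan line
      flat = np.random.normal(3000, 50, 2048)   # white reference scan line
      raw = np.random.normal(1600, 40, 2048)    # acquired scan line

      gain = (flat - dark).mean() / (flat - dark)   # per-pixel gain factor
      corrected = (raw - dark) * gain               # offset- and gain-corrected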

  8. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated: first, the calibration of the distance measurements of the SR-4000 camera, covering the evaluation of the camera warm-up time, the distance measurement error and the influence on distance measurements of the camera orientation with respect to the observed object; second, the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high contrast targets.

  9. Real-time vehicle matching for multi-camera tunnel surveillance

    NASA Astrophysics Data System (ADS)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm, by the camera software itself, and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
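
    The signature idea can be sketched as follows: reduce each vehicle image to its row and column intensity sums (Radon-like projections at 0° and 90°), resample to a fixed length, and match by a simple distance. Data and sizes are illustrative.

      import numpy as np

      def signature(img, length=64):
          """Row/column intensity projections resampled to a fixed length."""
          profiles = [img.sum(axis=1), img.sum(axis=0)]
          sig = np.concatenate([
              np.interp(np.linspace(0, 1, length),
                        np.linspace(0, 1, p.size), p.astype(float))
              for p in profiles])
          return sig / np.linalg.norm(sig)

      a = np.random.rand(60, 120)            # vehicle as seen by camera 1
      b = a[2:, 3:]                          # slightly shifted crop, camera 2
      print(signature(a) @ signature(b))     # near 1.0 for the same vehicle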

  10. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable - low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting.

  11. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.

    PubMed

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-11-17

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
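
    Of the corrections discussed, vignetting removal is easy to illustrate: divide by a radial falloff model whose coefficients would normally be fitted from flat reference images. The polynomial form and coefficients below are assumptions, not the study's values.

      import numpy as np

      def devignette(img, a=-0.3, b=-0.1):
          """Divide by I(r)/I0 = 1 + a*r^2 + b*r^4 with normalized radius r."""
          h, w = img.shape
          y, x = np.mgrid[0:h, 0:w]
          r2 = (((x - w / 2) ** 2 + (y - h / 2) ** 2)
                / ((w / 2) ** 2 + (h / 2) ** 2))
          falloff = 1 + a * r2 + b * r2 ** 2     # relative illumination
          return img / falloff

      img = np.random.rand(480, 640) * 255       # stand-in aerial frame
      corrected = devignette(img)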

  12. iPhone 4s and iPhone 5s Imaging of the Eye.

    PubMed

    Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L

    2017-01-01

    To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. Compared to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.

  13. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tubeless streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 µm.

  14. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of microlenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the microlenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  15. Multiplane and Spectrally-Resolved Single Molecule Localization Microscopy with Industrial Grade CMOS cameras.

    PubMed

    Babcock, Hazen P

    2018-01-29

    This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.

  16. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of ballistic missile defense systems, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the atmosphere; it then simulates the infrared imaging of such targets from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.

  17. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views for measuring distances to objects. In another version, motion of camera is controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser," which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera are sent to logic circuits and processed into corrections for motion along and across line of sight.

  18. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data must be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, an elevator hall, a room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  19. Fluorescent image tracking velocimeter

    DOEpatents

    Shaffer, Franklin D.

    1994-01-01

    A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.

  20. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

    A multispectral intensified CCD imager combined with a ring-laser-gyroscope-based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six-position filter wheel, a third-generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], and a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and the technical trade-offs involved in camera optimization, and presents imagery observed during the shuttle mission.

  1. Investigation into the use of photoanthropometry in facial image comparison.

    PubMed

    Moreton, Reuben; Morley, Johanna

    2011-10-10

    Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high-resolution photographs, taken at different horizontal and vertical camera angles in a controlled environment. Results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. The results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from the camera and image resolution in poor quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required.

  2. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from the low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the processing in our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.

  3. A novel super-resolution camera model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction for single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this function we put a driving device, such as a piezoelectric ceramic, in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images can be obtained and stored in real time, reflecting the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences contain different redundant information and particular prior information, thus making it possible to restore a super-resolution image faithfully and effectively. A sampling analysis is used to derive the reconstruction principle of super-resolution and the possible improvement of resolution in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; it models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
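
    The simplest reconstruction consistent with this idea is shift-and-add: place low-resolution frames with known sub-pixel shifts onto a finer grid and average. The paper's learning-based and variational Bayesian methods are far more elaborate; this sketch only shows why controlled displacements help.

      import numpy as np

      def shift_and_add(frames, shifts, scale=2):
          """frames: HxW arrays; shifts: (dy, dx) in LR pixels, sub-pixel allowed."""
          h, w = frames[0].shape
          acc = np.zeros((h * scale, w * scale))
          cnt = np.zeros_like(acc)
          for f, (dy, dx) in zip(frames, shifts):
              ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
              xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
              acc[ys, xs] += f
              cnt[ys, xs] += 1
          return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

      hr = np.random.rand(64, 64)                       # reference scene
      lo = [hr[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
      sh = [(dy / 2, dx / 2) for dy in (0, 1) for dx in (0, 1)]
      print(np.abs(shift_and_add(lo, sh) - hr).max())   # ~0: shifts tile the HR grid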

  4. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

    focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256×256 pixels, a pixel pitch of 38 µm, a focal length of 1.8 m, and a FOV of 5.4 × 5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  5. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1-megapixel (MP) digital Canon point-and-shoot autofocusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  6. Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.

    ERIC Educational Resources Information Center

    Mills, David A.; Kelley, Kevin; Jones, Michael

    2001-01-01

    Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)

  7. Lincoln Penny on Mars in Camera Calibration Target

    NASA Image and Video Library

    2012-09-10

    The penny in this image is part of a camera calibration target on NASA's Mars rover Curiosity. The MAHLI camera on the rover took this image of the MAHLI calibration target during the 34th Martian day of Curiosity's work on Mars, Sept. 9, 2012.

  8. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640×512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  9. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of an X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  10. An image-tube camera for cometary spectrography

    NASA Astrophysics Data System (ADS)

    Mamadov, O.

    The paper discusses the mounting of an image-tube camera. The photocathode is of antimony, sodium, potassium, and cesium. The parts used for mounting are of acrylic plastic and a fabric-based laminate. A mounting design that does not include cooling is presented. The aperture ratio of the camera is 1:27. Also discussed is the way the camera is joined to the spectrograph.

  11. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions including MSER, Harris Affine, and Hessian Affine in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used because the viewpoint can change a lot between consecutive frames.
Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes the feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low-frame-rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling as in RANSAC but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
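
    A generic RANSAC skeleton of the kind described above is shown here on a toy 2D line-fitting problem rather than the 5-point relative orientation problem; ordered sampling and kernel voting are omitted, and all data are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.uniform(0, 10, 200)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 200)   # inlier line y = 2x + 1
      y[:80] = rng.uniform(0, 25, 80)                # 40% gross mismatches

      best_count, best_model = 0, None
      for _ in range(500):
          i, j = rng.choice(200, 2, replace=False)   # minimal sample (2 points)
          a = (y[j] - y[i]) / (x[j] - x[i] + 1e-12)
          b = y[i] - a * x[i]
          inliers = np.abs(y - (a * x + b)) < 0.2    # threshold epsilon
          if inliers.sum() > best_count:
              best_count, best_model = int(inliers.sum()), (a, b)
      print(best_model, best_count)                  # close to (2.0, 1.0)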

  12. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  13. Photometric redshift estimation via deep learning. Generalized and pre-classification-less, image based, fully probabilistic redshifts

    NASA Astrophysics Data System (ADS)

    D'Isanto, A.; Polsterer, K. L.

    2018-01-01

    Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. Thereby the prediction performance is better than that of both presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.

  14. Design and realization of photoelectric instrument binocular optical axis parallelism calibration system

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun

    2016-10-01

    Misalignment of the binocular optical axes of a photoelectric instrument directly degrades the quality of observation. A digital calibration system for binocular optical axis parallelism is designed. On the basis of the calibration principle for the optical axes of binocular photoelectric instruments, the system scheme is designed and the digital calibration system is realized, comprising four modules: a multiband parallel light tube, optical axis translation, an image acquisition system and a software system. According to the different characteristics of thermal infrared imagers and low-light-level night viewers, different algorithms are used to localize the center of the cross reticle. Binocular optical axis parallelism calibration is thus realized for both low-light-level night viewers and thermal infrared imagers.

  15. FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition

    NASA Astrophysics Data System (ADS)

    Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.

    2015-10-01

    In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with a Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent each multispectral palmprint image, which is then modeled by kernel associative memories. For recognition, a Bayesian classifier is used. Finally, the recognition scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database. The experimental results exhibit the robustness of the proposed system under different wavelengths of palm images.

  16. Bio-Inspired Sensing and Imaging of Polarization Information in Nature

    DTIC Science & Technology

    2008-05-04

    L. B. Wolff, "Polarization camera for computer vision with a beam splitter," J. Opt. Soc. Am. A 11, 2935-2945 (1994); L. B. Wolff and A. G. Andreou, "Polarization camera sensors," Image Vis. Comput.; "...polarization imaging," Appl. Opt. 36, 150-155 (1997). In our group we have been developing various man-made, non-invasive imaging methodologies, sensing schemes, camera systems, and visualization and display

  17. Fast Dynamic 3D MRSI with Compressed Sensing and Multiband Excitation Pulses for Hyperpolarized 13C Studies

    PubMed Central

    Larson, Peder E. Z.; Hu, Simon; Lustig, Michael; Kerr, Adam B.; Nelson, Sarah J.; Kurhanewicz, John; Pauly, John M.; Vigneron, Daniel B.

    2010-01-01

    Hyperpolarized 13C MRSI can detect not only the uptake of the pre-polarized molecule but also its metabolic products in vivo, thus providing a powerful new method to study cellular metabolism. Imaging the dynamic perfusion and conversion of these metabolites provides additional tissue information but requires methods for efficient hyperpolarization usage and rapid acquisitions. In this work, we have developed a time-resolved 3D MRSI method for acquiring hyperpolarized 13C data by combining compressed sensing methods for acceleration and multiband excitation pulses to efficiently use the magnetization. This method achieved a 2 sec temporal resolution with full volumetric coverage of a mouse, and metabolites were observed for up to 60 sec following injection of hyperpolarized [1-13C]-pyruvate. The compressed sensing acquisition used random phase encode gradient blips to create a novel random undersampling pattern tailored to dynamic MRSI with sampling incoherency in four (time, frequency and two spatial) dimensions. The reconstruction was also tailored to dynamic MRSI by applying a temporal wavelet sparsifying transform in order to exploit the inherent temporal sparsity. Customized multiband excitation pulses were designed with a lower flip angle for the [1-13C]-pyruvate substrate given its higher concentration than its metabolic products ([1-13C]-lactate and [1-13C]-alanine), thus using less hyperpolarization per excitation. This approach has enabled the monitoring of perfusion and uptake of the pyruvate, and the conversion dynamics to lactate and alanine throughout a volume with high spatial and temporal resolution. PMID:20939089

  18. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements, due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders-of-magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T ≈ -40 °C, p ≈ 0.1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The changes include the replacement of electrical components with MIL-SPEC or industrial-grade components and various system optimizations: a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera that generates all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressures equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  19. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.

  20. Kepler Ground-Based Photometry Proof-of-Concept

    NASA Technical Reports Server (NTRS)

    Brown, Timothy M.; Latham, D.; Howell, S.; Everett, M.

    2004-01-01

    We report on our efforts to evaluate the feasibility of using the 4-Shooter CCD camera on the 48-inch reflector at the Whipple Observatory to carry out a multi-band photometric survey of the Kepler target region. We also include recommendations for future work. We were assigned 36 nights with the 4-Shooter during 2003 for this feasibility study. Most of the time during the first two dozen nights was dedicated to the development of procedures, test exposures, and a reconnaissance across the Kepler field. The final 12 nights in September and October 2003 were used for "production" observing in the middle of the Kepler field using the full complement of seven filters (SDSS u, g, r, i, z, plus our special Gred and D51 intermediate-band filters). Nine of these 12 nights were clear and photometric, and production observations were obtained at 109 pointings, corresponding to 14.6 square degrees.

  1. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

    Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. For the sake of HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image can then be obtained by fusing the two original images with a simple algorithm. In our experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the cameras and scenes.
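
    The abstract does not spell out the fusion step, but a common simple scheme is to bring both exposures onto a common radiance scale and blend them with a saturation-aware weight. A minimal sketch under those assumptions (the exposure ratio and saturation threshold are illustrative parameters, not values from the paper):

    ```python
    import numpy as np

    def fuse_two_exposures(short_img, long_img, exposure_ratio, sat=0.95):
        """Minimal two-exposure HDR fusion sketch (not the paper's exact
        algorithm). Images are float arrays scaled to [0, 1];
        exposure_ratio = t_long / t_short."""
        # Bring both images to a common radiance scale.
        rad_short = short_img * exposure_ratio
        rad_long = long_img.astype(float)
        # Trust the long exposure except where it approaches saturation.
        w_long = np.clip((sat - long_img) / sat, 0.0, 1.0)
        return w_long * rad_long + (1.0 - w_long) * rad_short
    ```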

  2. Super-resolved refocusing with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu

    2011-03-01

    This paper presents an approach to enhance the resolution of refocused images by super resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as an equivalent camera array. We implemented simulations to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super resolution algorithms. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also built an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing method and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as peak signal-to-noise ratio (PSNR).
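
    The key idea is that each angular view samples the scene on a grid displaced by a known sub-pixel amount, so the views can be interleaved onto a finer grid. The sketch below shows the simplest such scheme, plain shift-and-add, rather than the MAP estimator the paper actually uses; the names and the rounding of shifts are our simplifications:

    ```python
    import numpy as np

    def shift_and_add(lowres_views, shifts, scale):
        """Illustrative shift-and-add super-resolution from sub-pixel
        shifted angular views (the paper uses a MAP estimator; this only
        illustrates how the sub-pixel shifts are exploited).

        lowres_views: list of (h, w) angular images.
        shifts:       per-view (dy, dx) sub-pixel shifts, in LR pixels.
        scale:        integer upsampling factor."""
        h, w = lowres_views[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for img, (dy, dx) in zip(lowres_views, shifts):
            ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
            xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
            acc[np.ix_(ys, xs)] += img       # place each view on the fine grid
            cnt[np.ix_(ys, xs)] += 1
        return acc / np.maximum(cnt, 1)      # average where views overlap
    ```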

  3. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by feeding both the visible light and the FIR camera images into the CNN as input. This, however, takes longer to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects the more appropriate candidate between the two pedestrian images from the visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.

  4. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and its patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host medium via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for a stored image/data record by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  5. Fabrication and characterization of multiband solar cells based on highly mismatched alloys

    NASA Astrophysics Data System (ADS)

    López, N.; Braña, A. F.; García Núñez, C.; Hernández, M. J.; Cervera, M.; Martínez, M.; Yu, K. M.; Walukiewicz, W.; García, B. J.

    2015-10-01

    Multiband solar cells are one type of third-generation photovoltaic devices in which an increase of the power conversion efficiency is achieved through the absorption of low-energy photons while preserving a large band gap that determines the open circuit voltage. The ability to absorb photons from different parts of the solar spectrum originates from the presence of an intermediate energy band located within the band gap of the material. This intermediate band, acting as a stepping stone, allows the absorption of low-energy photons to transfer electrons from the valence band to the conduction band by a sequential two-photon absorption process. It has been demonstrated that highly mismatched alloys have the potential to be used as a model material system for the practical realization of multiband solar cells. Dilute nitride GaAs1-xNx highly mismatched alloy with a low mole fraction of N is a prototypical multiband semiconductor with a well-defined intermediate band. Currently, we are using chemical beam epitaxy to synthesize dilute nitride highly mismatched alloys. The materials are characterized by a variety of structural and optical methods to optimize their properties for multiband photovoltaic devices.

  6. A TYPE Ia SUPERNOVA AT REDSHIFT 1.55 IN HUBBLE SPACE TELESCOPE INFRARED OBSERVATIONS FROM CANDELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodney, Steven A.; Riess, Adam G.; Jones, David O.

    2012-02-10

    We report the discovery of a Type Ia supernova (SN Ia) at redshift z = 1.55 with the infrared detector of the Wide Field Camera 3 (WFC3-IR) on the Hubble Space Telescope (HST). This object was discovered in CANDELS imaging data of the Hubble Ultra Deep Field and followed as part of the CANDELS+CLASH Supernova project, comprising the SN search components from those two HST multi-cycle treasury programs. This is the highest redshift SN Ia with direct spectroscopic evidence for classification. It is also the first SN Ia at z > 1 found and followed in the infrared, providing a full light curve in rest-frame optical bands. The classification and redshift are securely defined from a combination of multi-band and multi-epoch photometry of the SN, ground-based spectroscopy of the host galaxy, and WFC3-IR grism spectroscopy of both the SN and host. This object is the first of a projected sample at z > 1.5 that will be discovered by the CANDELS and CLASH programs. The full CANDELS+CLASH SN Ia sample will enable unique tests for evolutionary effects that could arise due to differences in SN Ia progenitor systems as a function of redshift. This high-z sample will also allow measurement of the SN Ia rate out to z ≈ 2, providing a complementary constraint on SN Ia progenitor models.

  7. Morphology and Structure of High-redshift Massive Galaxies in the CANDELS Fields

    NASA Astrophysics Data System (ADS)

    Guan-wen, Fang; Ze-sen, Lin; Xu, Kong

    2018-01-01

    Using the multi-band photometric data of all five CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) fields and the near-infrared (F125W and F160W) high-resolution images of HST WFC3 (Hubble Space Telescope Wide Field Camera 3), a quantitative study of the morphology and structure of mass-selected galaxies is presented. The sample includes 8002 galaxies with a redshift 1 < z < 3 and stellar mass M* > 10^10 M⊙. Based on the Convolutional Neural Network (ConvNet) criteria, we classify the sample galaxies into SPHeroids (SPH), Early-Type Disks (ETD), Late-Type Disks (LTD), and IRRegulars (IRR) in different redshift bins. The findings indicate that galaxy morphology and structure evolve with redshift up to z ∼ 3, from irregular galaxies in the high-redshift universe to the formation of the Hubble sequence dominated by disks and spheroids. For the same redshift interval, the median effective radii (re) of the different morphological types are in descending order IRR, LTD, ETD, and SPH, while for the Sérsic index (n) the order is reversed (SPH, ETD, LTD, and IRR). Meanwhile, the evolution of galaxy size (re) with redshift is explored for galaxies of different morphological types, confirming that their sizes grow with time. However, no such trend is found in the relations between redshift (1 < z < 3) and the mean axis ratio (b/a) or the Sérsic index (n).

  8. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.

  9. A digital ISO expansion technique for digital cameras

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong

    2010-01-01

    Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market is now a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that are usually introduced after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images which are noisy, but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach is performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal-to-Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
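
    At its core, the gain comes from averaging several short, dark frames instead of one long one: averaging N aligned frames raises SNR by up to sqrt(N) without motion blur. A deliberately simplified sketch of such multi-frame fusion follows (the paper's actual pipeline works on the Bayer CFA and adds chromatic-noise removal and a two-scale decomposition; the motion-rejection threshold here is purely illustrative):

    ```python
    import numpy as np

    def fuse_short_exposures(frames, ref_idx=0, thresh=0.1):
        """Sketch of multi-frame fusion for digital ISO expansion.

        frames: (N, H, W) stack of short, equally exposed frames in [0, 1].
        thresh: illustrative motion-rejection parameter (not from the paper).
        """
        ref = frames[ref_idx]
        # Reject pixels that differ too much from the reference frame,
        # to avoid ghosting from hand shake or object motion.
        weights = (np.abs(frames - ref) < thresh).astype(float)
        fused = (weights * frames).sum(0) / np.maximum(weights.sum(0), 1.0)
        # Averaging N frames improves SNR by up to sqrt(N).
        return fused
    ```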

  10. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  11. Setup for testing cameras for image guided surgery using a controlled NIR fluorescence mimicking light source and tissue phantom

    NASA Astrophysics Data System (ADS)

    Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.

    2017-02-01

    In the development of new near-infrared (NIR) fluorescence dyes for image guided surgery, there is a need for new NIR-sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to the present clinical systems that are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom and to measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate, creating a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue with controlled thickness could be placed on the spot to mimic a fluorescent 'cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced CCD night-vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.

  12. Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics

    NASA Astrophysics Data System (ADS)

    Furxhi, Orges; Frascati, Joe; Driggers, Ronald

    2018-04-01

    Panoramic imaging is inherently wide field of view. High-sensitivity uncooled Long Wave Infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back working distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial-off-the-shelf (COTS) low F-number lens. This approach results in low component cost and effortless integration with pre-calibrated, commercially available cameras and lenses.

  13. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. Uniform R, G and B color patterns were shown on the display under study and imaged with a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
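
    For context, a 2D NPS is typically estimated by tiling a uniform image into ROIs, removing each ROI's mean, and averaging the squared magnitude of the 2D Fourier transforms. A minimal sketch of that standard recipe (ROI size and pixel pitch are generic parameters, not the paper's values):

    ```python
    import numpy as np

    def nps_2d(image, roi=128, px_pitch=1.0):
        """Illustrative 2D noise power spectrum estimate from a uniform
        (flat) image, tiled into non-overlapping ROIs.
        px_pitch: pixel pitch in mm."""
        h, w = image.shape
        spectra = []
        for y in range(0, h - roi + 1, roi):
            for x in range(0, w - roi + 1, roi):
                patch = image[y:y+roi, x:x+roi].astype(float)
                patch -= patch.mean()            # remove the uniform level
                spectra.append(np.abs(np.fft.fft2(patch))**2)
        # Scale so the result has units of (signal^2) * mm^2.
        return np.mean(spectra, axis=0) * (px_pitch**2) / (roi * roi)
    ```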

  14. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  15. Clinical evaluation of pixellated NaI:Tl and continuous LaBr3:Ce, compact scintillation cameras for breast tumors imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.

    2007-02-01

    The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers sized <1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a lanthanum tribromide cerium-doped crystal (LaBr3:Ce), which has demonstrated superior imaging performance with respect to the dedicated scintillation γ-camera previously developed. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal was chosen to increase the crystal detection efficiency. In this paper, we propose a comparison and evaluation between the lanthanum γ-camera and a multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the Italian "IMI" project for technological transfer of INFN. A phantom study was carried out to test both cameras before introducing them into clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast, with more detailed imaging of the uptake area, than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera, the Signal-to-Noise Ratio (SNR) was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.

  16. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    ...area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. 1.1.1 Camera Calibration: Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) ... can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the ...

  17. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio is proposed, in which an orientation tracker and telemeter are used to improve a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. A formula is derived to compute the foreground and background frame buffer images of the virtual scene, whose boundary is based on the depth information of the target point of the real BETACAM pickup camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene in spatial position, perspective relationship and image object masking. The experimental results show that the proposed scheme for constructing a virtual studio is feasible, and is more applicable and effective than the existing technology of building a virtual studio based on color keying and image synthesis with the background using non-linear video editing.

  18. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier for institutes with limited funding and therefore hampering progress. An assessment is made of whether a low-cost compact camera with image-stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus-stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera's image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, within its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038

  19. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated "dark side" of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  20. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2017-12-08

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated "dark side" of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  1. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image with two horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements in the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
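
    The geometry behind the retinal-disparity cue is the standard pinhole stereo relation, which also underlies any depth-transfer analysis: depth is inversely proportional to disparity. A small illustrative helper (symbols and example values are ours, not from the paper):

    ```python
    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Standard pinhole stereo relation (background to the paper, not
        its depth-transfer-curve method): Z = f * B / d, where f is the
        focal length in pixels, B the camera baseline in metres, and d
        the horizontal disparity in pixels."""
        if disparity_px <= 0:
            raise ValueError("non-positive disparity: invalid/infinite depth")
        return focal_px * baseline_m / disparity_px

    # Example: f = 1400 px, B = 65 mm, d = 10 px gives
    # Z = 1400 * 0.065 / 10 = 9.1 m.
    print(depth_from_disparity(10, 1400, 0.065))
    ```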

  2. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image, and a table of features matched pair-wise between frames. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.

  3. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical derivation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  4. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was made in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space, and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
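
    For readers unfamiliar with the "simple version" of mean shift: the search window is repeatedly moved to the centroid of a per-pixel weight image until it stops moving. A minimal sketch (the weight image, e.g. a hand-likelihood map derived from the range data, and all parameter names are assumptions on our part):

    ```python
    import numpy as np

    def mean_shift(weight_img, window, max_iter=20, eps=0.5):
        """Plain mean-shift iteration: shift a rectangular window to the
        centroid of the weights inside it until convergence.
        window: (x, y, w, h) initial window."""
        x, y, w, h = window
        H, W = weight_img.shape
        for _ in range(max_iter):
            roi = weight_img[y:y+h, x:x+w]
            m = roi.sum()
            if m == 0:
                break                               # nothing to track here
            ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
            dx = (xs * roi).sum() / m - (roi.shape[1] - 1) / 2
            dy = (ys * roi).sum() / m - (roi.shape[0] - 1) / 2
            x = int(np.clip(round(x + dx), 0, W - w))   # keep window in bounds
            y = int(np.clip(round(y + dy), 0, H - h))
            if dx * dx + dy * dy < eps * eps:           # converged
                break
        return (x, y, w, h)
    ```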

  5. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  6. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to produce super-resolution all-refocused images with a plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in a single shot. A sequence of digitally refocused images, focused at different depths, can be produced by processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the micro-lens array, so a limited number of micro-lenses results in poor, low-resolution refocused images lacking detail. Such lost details, which are often high-frequency information, are important for the in-focus part of a refocused image, so we choose to super-resolve these in-focus parts. The result of an image segmentation method based on random walks, operating on the depth map produced from the 4D light field data, is used to separate the foreground and background in each refocused image, and a focus evaluation function determines which refocused image has the clearest foreground and which has the clearest background. Subsequently, we employ a single-image super-resolution method based on sparse signal representation to process the in-focus parts of these selected refocused images. Eventually, we obtain the super-resolved all-focus image by digitally merging the in-focus background and foreground parts, preserving more spatial detail in the output images. Our method enhances the resolution of the refocused image, and only the refocused images with the clearest foreground and background need to be super-resolved.

  7. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in a museum. PMID:23112656

  8. Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging

    NASA Astrophysics Data System (ADS)

    Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.

    2006-12-01

    In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances ranging between 4 and 28 km, with crisp detection up to approximately 16 km. Camera set-up time in the field ranges from 5-10 minutes, and images can be recorded at intervals as short as 10 seconds. Variable in-plume concentrations can be observed, and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume through sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a Correlation Spectrometer (COSPEC) agreed within 25 percent. Experiments at the other sites were equally encouraging and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring due to its rapid deployment and data processing capabilities, relatively low cost, and the improved interpretation afforded by synoptic plume coverage from a range of distances.
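
    The quantification step implied by "imaging calibration cells of known SO2 concentrations" is usually a Beer-Lambert-style conversion: apparent UV absorbance per pixel, mapped to column density through a fit to the calibration cells. A minimal sketch under those assumptions (function and parameter names are ours; the abstract does not give the actual processing chain):

    ```python
    import numpy as np

    def so2_column_map(plume_img, clear_img, cell_columns, cell_absorbances):
        """Generic UV SO2 camera calibration sketch.

        Apparent absorbance A = -log10(I_plume / I_clear) is converted
        to an SO2 column via a linear fit to calibration cells of known
        column density (e.g. in ppm*m)."""
        A = -np.log10(np.clip(plume_img / clear_img, 1e-6, None))
        # Linear fit column = k * A + b from the calibration cells.
        k, b = np.polyfit(cell_absorbances, cell_columns, 1)
        return k * A + b
    ```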

  9. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in a museum.

  10. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

    We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system, together with the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables (the dark image, the base correction image, and the reference level) on the quality of the correction, as well as the range of application of the correction, using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using as the base correction image an image whose mean digital level lies in the linear response range of the camera, and by taking the mean digital level of that image as the reference digital level. After the optimized algorithm has been applied, the response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
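
    In outline, such a linear correction rescales each pixel by a per-pixel gain derived from the dark and base images. A minimal sketch consistent with the description above (the exact formula is our assumption, not quoted from the paper):

    ```python
    import numpy as np

    def correct_nonuniformity(raw, dark, base, ref_level=None):
        """Linear flat-field sketch along the lines of the abstract.

        raw:  image to correct.
        dark: dark image (nonzero, per the paper's findings).
        base: base correction image, captured under a uniform radiance
              field with its mean level in the camera's linear range.
        ref_level: reference digital level; defaults to the mean of the
              dark-subtracted base image, as recommended."""
        flat = base.astype(float) - dark
        if ref_level is None:
            ref_level = flat.mean()
        gain = ref_level / np.clip(flat, 1e-6, None)   # per-pixel gain map
        return (raw.astype(float) - dark) * gain
    ```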

  11. Performance evaluation of low-cost airglow cameras for mesospheric gravity wave measurements

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Shiokawa, K.

    2016-12-01

    Atmospheric gravity waves contribute significantly to the wind/thermal balance in the mesosphere and lower thermosphere (MLT) through their vertical transport of horizontal momentum. It has been reported that the gravity wave momentum flux depends preferentially on the scale of the waves; the momentum fluxes of waves with horizontal scales of 10-100 km are particularly significant. Airglow imaging is a useful technique for observing the two-dimensional structure of small-scale (<100 km) gravity waves in the MLT region and has been used to investigate the global behaviour of the waves. Recent studies with simultaneous/multiple airglow cameras have derived the spatial extent of the MLT waves. Such network imaging observations are advantageous for an ever better understanding of the coupling between the lower and upper atmosphere via gravity waves. In this study, we developed new low-cost airglow cameras to enlarge the airglow imaging network. Each camera has a fish-eye lens with a 185-deg field of view and is equipped with a CCD video camera (WATEC WAT-910HX); the camera is small (W35.5 x H36.0 x D63.5 mm) and much less expensive than the airglow cameras used for the existing ground-based network (Optical Mesosphere Thermosphere Imagers (OMTI), operated by the Solar-Terrestrial Environment Laboratory, Nagoya University), and has a 768 x 494 pixel CCD sensor that is sensitive enough to detect mesospheric OH airglow emission perturbations. In this presentation, we report results of the performance evaluation of this camera made at Shigaraki (35-deg N, 136-deg E), Japan, which is one of the OMTI stations. By summing 15 images (i.e., a 1-min composite), we recognised clear gravity wave patterns with quality comparable to the OMTI images. Outreach and educational activities based on this research will also be reported.

  12. Digital Camera Control for Faster Inspection

    NASA Technical Reports Server (NTRS)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  13. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, by fitting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data back-projected to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
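
    Generating the synthetic image amounts to projecting each 3D point through an ideal pinhole camera placed at the synthetic projection centre, keeping the nearest point per pixel. A sketch of that step (the interface and the simple z-buffering are our assumptions, not the paper's exact implementation):

    ```python
    import numpy as np

    def render_synthetic_image(points, intensities, R, t, f_px, w, h):
        """Project a 3D point cloud to a synthetic pinhole image.

        points: (N, 3) world coordinates; R, t: world-to-camera rotation
        and translation; f_px: focal length in pixels; (w, h): image size."""
        cam = points @ R.T + t                  # transform into camera frame
        in_front = cam[:, 2] > 0                # keep points ahead of the camera
        cam, vals = cam[in_front], intensities[in_front]
        u = (f_px * cam[:, 0] / cam[:, 2] + w / 2).astype(int)
        v = (f_px * cam[:, 1] / cam[:, 2] + h / 2).astype(int)
        img = np.zeros((h, w))
        zbuf = np.full((h, w), np.inf)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        for ui, vi, zi, gi in zip(u[ok], v[ok], cam[:, 2][ok], vals[ok]):
            if zi < zbuf[vi, ui]:               # keep the nearest point only
                zbuf[vi, ui] = zi
                img[vi, ui] = gi
        return img
    ```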

  14. Dense Region of Impact Craters

    NASA Image and Video Library

    2011-09-23

    NASA's Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14, 2011. This image was taken through the camera's clear filter. The image has a resolution of about 260 meters per pixel.

  15. Low-cost printing of computerised tomography (CT) images where there is no dedicated CT camera.

    PubMed

    Tabari, Abdulkadir M

    2007-01-01

    Many developing countries still rely on conventional hard copy images to transfer information among physicians. We have developed a low-cost alternative method of printing computerised tomography (CT) scan images where there is no dedicated camera. A digital camera is used to photograph images from the CT scan screen monitor. The images are then transferred to a PC via a USB port, before being printed on glossy paper using an inkjet printer. The method can be applied to other imaging modalities like ultrasound and MRI and appears worthy of emulation elsewhere in the developing world where resources and technical expertise are scarce.

  16. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.

  17. Sensor noise camera identification: countering counter-forensics

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica; Chen, Mo

    2010-01-01

    In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case when none of the images that the attacker used to create the fake fingerprint are available to the victim, but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
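
    As background to the test, fingerprint presence is typically established by correlating the image's noise residual with the fingerprint modulated by the image content, since the PRNU term is multiplicative in scene intensity. A compact sketch of that detection statistic (the denoiser is left abstract; this is a generic formulation, not the paper's exact detector):

    ```python
    import numpy as np

    def fingerprint_correlation(image, fingerprint, denoise):
        """Normalized correlation between an image's noise residual and
        an estimated sensor fingerprint. `denoise` is any denoising
        filter, e.g. a wavelet denoiser (its choice is an assumption)."""
        I = image.astype(float)
        W = I - denoise(I)                      # noise residual of the image
        S = fingerprint * I                     # expected multiplicative term
        W, S = W - W.mean(), S - S.mean()
        return (W * S).sum() / np.sqrt((W**2).sum() * (S**2).sum())
    ```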

  18. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.

  19. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  20. Brandaris 128 ultra-high-speed imaging facility: 10 years of operation, updates, and enhanced features

    NASA Astrophysics Data System (ADS)

    Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel

    2012-10-01

    The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.

  1. Flame Imaging System

    NASA Technical Reports Server (NTRS)

    Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)

    1998-01-01

    A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses an 800 nm long-pass filter, which during overcast conditions blocks sufficient background light so that the hydrogen flame is brighter than the background, and the second CCD camera uses a 1100 nm long-pass filter, which blocks the solar background in full sunshine so that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signal from the cameras into a visible image. The operator can select the appropriate filtered camera depending on the current light conditions. In addition, a narrow band-pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if the sensor detects a flame, providing additional flame detection so the operator does not overlook a small flame.

  2. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extreme high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  3. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
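
    As a rough illustration of the control loop described above, the sketch below applies a proportional correction computed from the extracted skater region; the PTZ interface (pan_relative, tilt_relative, zoom_relative) is hypothetical and stands in for whatever vendor protocol an actual installation exposes.

        def control_step(camera, skater_bbox, frame_w, frame_h,
                         gain=0.1, target_fill=0.4):
            """One pan/tilt/zoom update from the extracted skater region."""
            x, y, w, h = skater_bbox
            cx, cy = x + w / 2.0, y + h / 2.0
            # Pan/tilt in proportion to the skater's offset from image center.
            camera.pan_relative(gain * (cx - frame_w / 2.0) / frame_w)
            camera.tilt_relative(gain * (cy - frame_h / 2.0) / frame_h)
            # Zoom so the skater fills roughly `target_fill` of frame height.
            camera.zoom_relative(gain * (target_fill - h / float(frame_h)))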

  4. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had been previously cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto-tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.

  5. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  6. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution.
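
    The authors state only that their software is built on OpenCV; a typical OpenCV chessboard self-calibration of the kind such software performs is sketched below. The pattern size and file paths are placeholders, and strongly wide-angle lenses such as this one may require the cv2.fisheye model instead of the standard one.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)  # inner-corner count of the chessboard (placeholder)
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points = [], []
        for path in glob.glob("calib_frames/*.jpg"):  # placeholder path
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # Estimate intrinsics and distortion, then undistort a frame.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        undistorted = cv2.undistort(cv2.imread(path), K, dist)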

  7. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  8. Multiband optical variability of the blazar OJ 287 during its outbursts in 2015-2016

    NASA Astrophysics Data System (ADS)

    Gupta, Alok C.; Agarwal, Aditi; Mishra, Alka; Gaur, H.; Wiita, P. J.; Gu, M. F.; Kurtanidze, O. M.; Damljanovic, G.; Uemura, M.; Semkov, E.; Strigachev, A.; Bachev, R.; Vince, O.; Zhang, Z.; Villarroel, B.; Kushwaha, P.; Pandey, A.; Abe, T.; Chanishvili, R.; Chigladze, R. A.; Fan, J. H.; Hirochi, J.; Itoh, R.; Kanda, Y.; Kawabata, M.; Kimeridze, G. N.; Kurtanidze, S. O.; Latev, G.; Dimitrova, R. V. Muñoz; Nakaoka, T.; Nikolashvili, M. G.; Shiki, K.; Sigua, L. A.; Spassov, B.

    2017-03-01

    We present recent optical photometric observations of the blazar OJ 287 taken during 2015 September-2016 May. Our intense observations of the blazar started in 2015 November and continued until 2016 May and included detection of the large optical outburst in 2015 December that was predicted using the binary black hole model for OJ 287. For our observing campaign, we used a total of nine ground-based optical telescopes of which one is in Japan, one is in India, three are in Bulgaria, one is in Serbia, one is in Georgia, and two are in the USA. These observations were carried out in 102 nights with a total of ∼1000 image frames in BVRI bands, though the majority were in the R band. We detected a second comparably strong flare in 2016 March. In addition, we investigated multiband flux variations, colour variations, and spectral changes in the blazar on diverse time-scales as they are useful in understanding the emission mechanisms. We briefly discuss the possible physical mechanisms most likely responsible for the observed flux, colour, and spectral variability.

  9. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  10. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    PubMed Central

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-01-01

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that (1) the use of unprocessed image data did not improve the results of image analyses; (2) vignetting had a significant effect, especially for the modified camera; and (3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
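
    The two corrections at the heart of the protocol can be sketched as below; the flat-field frame (an image of a uniformly lit target used to model vignetting) and the band assignment are assumptions made for illustration.

        import numpy as np

        def correct_vignetting(img, flat):
            """Flat-field division: `flat` images a uniformly lit target."""
            return img / (flat / flat.mean())

        def normalized_index(nir, red):
            """NDVI-style index; the ratio cancels scene illumination changes."""
            return (nir - red) / (nir + red + 1e-8)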

  11. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  12. Earth on the Horizon

    NASA Image and Video Library

    2004-03-13

    This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. Earth is the tiny white dot in the center. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. http://photojournal.jpl.nasa.gov/catalog/PIA05560

  13. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200; only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For the NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays fall off more steeply than the vertical MTFs, indicating that the horizontal MTFs are poorer. However, the modulation at the Nyquist frequency seems lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure; temporal noise seems to be significantly lower than spatial noise.
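
    The MTF step can be summarized in a few lines, sketched below under simplifying assumptions: the color-matrix pre-processing is omitted, and the displayed line is vertical so that row-averaging yields the horizontal line spread function.

        import numpy as np

        def mtf_from_line_image(img):
            lsf = img.mean(axis=0)              # average along the line
            lsf = lsf - lsf.min()               # remove the baseline
            spectrum = np.abs(np.fft.rfft(lsf))
            return spectrum / spectrum[0]       # normalize so MTF(0) = 1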

  14. PubMed Central

    Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.

    2017-01-01

    Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888

  15. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually for the rectification of epipolar plane images and quantitatively with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative for sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.
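
    The geometric relation the method exploits is the standard pinhole disparity equation, sketched here for a fronto-parallel pair: for focal length f (in pixels), camera spacing B, and scene depth Z, the disparity is d = f * B / Z, so dense correspondences constrain the unknown spacings between snapshots along the line.

        def disparity(f_pixels, baseline, depth):
            """d = f * B / Z for a fronto-parallel pinhole camera pair."""
            return f_pixels * baseline / depth

        def baseline_from_disparity(f_pixels, d, depth):
            """Invert the relation to recover the camera spacing B."""
            return d * depth / f_pixels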

  16. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics for images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of the iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for the dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  17. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    PubMed

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence of a particular sequence of images containing a fence crossing event. This resulted in a reduction of 54.8% of images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
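
    A toy version of the background-subtraction rule might look like the sketch below; the thresholds are illustrative placeholders, not the tuned values used in the study.

        import numpy as np

        def crossing_confidence(frame, background, diff_thresh=25,
                                min_frac=0.01, max_frac=0.5):
            """Coarsely categorize a frame by how much of it changed."""
            diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
            changed = (diff > diff_thresh).mean()
            if changed >= max_frac:
                return "uncertain"   # global change: lighting, snow, wind
            if changed > min_frac:
                return "likely"      # mid-sized foreground blob: possible animal
            return "unlikely"        # essentially identical to background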

  18. iPhone 4s and iPhone 5s Imaging of the Eye

    PubMed Central

    Jalil, Maaz; Ferenczy, Sandor R.; Shields, Carol L.

    2017-01-01

    Background/Aims To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. Methods A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. Results In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through oculars. Both iPhones achieved fundus imaging using standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. Conclusions iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable. PMID:28275604

  19. SPITZER SEARCH FOR DUST DISKS AROUND CENTRAL STARS OF PLANETARY NEBULAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilikova, Jana; Chu Youhua; Gruendl, Robert A.

    2012-05-01

    Two types of dust disks have been discovered around white dwarfs (WDs): small dust disks within the Roche limits of their WDs and large dust disks around hot WDs extending to radial distances of 10-100 AU. The majority of the latter WDs are central stars of planetary nebulae (CSPNs). We have therefore used archival Spitzer Infrared Array Camera (IRAC) and Multiband Imaging Photometer for Spitzer (MIPS) observations of PNs to search for CSPNs with IR excesses and to make a comparative investigation of dust disks around stars at different evolutionary stages. We have examined available images of 72 resolved PNs in the Spitzer archive and found 56 of them large enough for the CSPN to be resolved from the PN. Among these, only 42 CSPNs are visible in IRAC and/or MIPS images and selected for photometric measurements. From the spectral energy distributions (SEDs) of these CSPNs, we find 19 cases with clear IR excess. Of these, seven are [WC]-type stars, two have apparent visual companions that account for the observed excess emission, two are symbiotic CSPNs, and in eight cases the IR excess originates from an extended emitter, likely a dust disk. For some of these CSPNs, we have acquired follow-up Spitzer MIPS images, Infrared Spectrograph spectra, and Gemini NIRI and Michelle spectroscopic observations. The SEDs and spectra show a great diversity in the emission characteristics of the IR excesses, which may imply different mechanisms responsible for the excess emission. For CSPNs whose IR excesses originate from dust continuum, the most likely dust production mechanisms are (1) breakup of bodies in planetesimal belts through collisions and (2) formation of circumstellar dust disks through binary interactions. A better understanding of post-asymptotic giant branch binary evolution as well as debris disk evolution along with its parent star is needed to distinguish between these different origins. Future observations to better establish the physical parameters of the dust disks and the presence of companions are needed for models to discern between the possible dust production mechanisms.

  20. Portable, low-priced retinal imager for eye disease screening

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

    2014-02-01

    The objective of this project was to develop and demonstrate a portable, low-priced, easy to use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras as well as those under development. Our techniques include: (1) exclusive use of off-the-shelf components; (2) integration of the retinal imaging device into a low-cost, high utility camera mount and chin rest; (3) a unique optical and illumination design for a small form factor; (4) exploitation of the autofocus technology built into present digital SLR recreational cameras; and (5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

  1. Optical Transient Monitor (OTM) for BOOTES Project

    NASA Astrophysics Data System (ADS)

    Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.

    2003-04-01

    The Optical Transient Monitor (OTM) is software for controlling the three wide- and ultra-wide-field cameras of the BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is PC-based and is a powerful tool for taking images from two SBIG CCD cameras at the same time or from one camera only. The control program for the BOOTES cameras runs under Windows 98 or MS-DOS; a version for Windows 2000 is now in preparation. There are five main supported modes of operation. The OTM program can control the cameras and evaluate image data without human interaction.

  2. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    NASA Astrophysics Data System (ADS)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  3. Recognizable-image selection for fingerprint recognition with a mobile-device camera.

    PubMed

    Lee, Dongjae; Choi, Kyoungtaek; Choi, Heeseung; Kim, Jaihie

    2008-02-01

    This paper proposes a recognizable-image selection algorithm for fingerprint-verification systems that use a camera embedded in a mobile device. A recognizable image is defined as a fingerprint image which includes characteristics that sufficiently discriminate an individual from other people. While general camera systems obtain focused images by using various gradient measures to estimate high-frequency components, mobile cameras cannot acquire recognizable images in the same way because the obtained images may not be adequate for fingerprint recognition, even if they are properly focused. A recognizable image has to meet the following two conditions: First, the valid region in a recognizable image should be large enough compared with those of nonrecognizable images. Here, a valid region is a well-focused part, and ridges in the region are clearly distinguishable from valleys. In order to select valid regions, this paper proposes a new focus-measurement algorithm using the secondary partial derivatives and a quality estimation utilizing the coherence and symmetry of the gradient distribution. Second, the rolling and pitching degrees of a finger measured from the camera plane should be within some limit for a recognizable image. The position of a core point and the contour of a finger are used to estimate the degrees of rolling and pitching. Experimental results show that our proposed method selects valid regions and estimates the degrees of rolling and pitching properly. In addition, fingerprint-verification performance is improved by detecting the recognizable images.
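
    A second-derivative focus measure in the spirit of the proposed one can be sketched as below (a modified-Laplacian-style sum); the paper's additional coherence and symmetry terms for the gradient distribution are omitted here.

        import numpy as np
        from scipy.ndimage import convolve

        def focus_measure(img):
            """Higher values indicate sharper ridge/valley detail."""
            img = img.astype(np.float64)
            kx = np.array([[0.0, 0.0, 0.0],
                           [1.0, -2.0, 1.0],
                           [0.0, 0.0, 0.0]])
            lap_x = np.abs(convolve(img, kx))    # second derivative in x
            lap_y = np.abs(convolve(img, kx.T))  # second derivative in y
            return (lap_x + lap_y).mean()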

  4. NPS assessment of color medical displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-02-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. Uniform R, G and B color patterns were shown on the display under study and the images were taken using a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed as the weighted sum of the R, G, B and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
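
    Assuming the synthetic image is formed as I = w_R*R + w_G*G + w_B*B - dark, the subsequent NPS analysis follows the standard recipe sketched below; the ROI size and pixel pitch are placeholders.

        import numpy as np

        def nps_2d(image, roi=128, pixel_pitch_mm=0.01):
            """Average |FFT|^2 over detrended ROIs, normalized by ROI area."""
            h, w = image.shape
            acc, n = np.zeros((roi, roi)), 0
            for i in range(0, h - roi + 1, roi):
                for j in range(0, w - roi + 1, roi):
                    patch = image[i:i + roi, j:j + roi]
                    patch = patch - patch.mean()        # remove the DC term
                    acc += np.abs(np.fft.fft2(patch)) ** 2
                    n += 1
            # NPS = <|FFT|^2> * (dx * dy) / (Nx * Ny)
            return acc / n * (pixel_pitch_mm ** 2) / (roi * roi)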

  5. Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask

    NASA Astrophysics Data System (ADS)

    Morel, Sébastien

    2004-09-01

    A new photon-counting camera concept for fast, low-light-level imaging applications is introduced. The possible spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons; the improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or photomultiplier tubes, alternatively) downstream of the mask. After a detailed explanation of this camera concept that we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions to build such a camera.

  6. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.

  7. A position and attitude vision measurement system for wind tunnel slender model

    NASA Astrophysics Data System (ADS)

    Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi

    2014-11-01

    A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one is placed to the side of the model and the other is placed where it can look up at the model. Simple symbols are set on the model. The main idea of the system is image matching between projections of the 3D digital model and the images captured by the cameras. First, the pitch angles, roll angles and centroid position of the model are evaluated by recognizing the symbols in the images captured by the side camera. Then, based on the evaluated attitude information, a series of projection images of the 3D digital model is generated for a series of candidate yaw angles. Finally, these projection images are matched against the image captured by the upward-looking camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments are conducted and the results show that the maximal error of attitude measurement is less than 0.05°, which can meet the demands of tests in the wind tunnel.

  8. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cell-phone camera is set to 'auto' mode. Since the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project starting with the definition of the individual attributes, up to their quantification in JNDs of quality, as required by the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction in the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, therefore a radial mapping/modeling cannot be used in this case.

  9. Star Formation in Henize 206

    NASA Image and Video Library

    2004-03-08

    This image from NASA's Spitzer Space Telescope shows the wispy filamentary structure of the emission nebula Henize 206 in the Large Magellanic Cloud (LMC). The LMC is a small satellite galaxy gravitationally bound to our own Milky Way. Yet the gravitational effects are tearing the companion to shreds in a long-playing drama of 'intergalactic cannibalism.' These disruptions lead to a recurring cycle of star birth and star death. Astronomers are particularly interested in the LMC because its fractional content of heavy metals is two to five times lower than is seen in our solar neighborhood. [In this context, 'heavy elements' refer to those elements not present in the primordial universe. Such elements as carbon, oxygen and others are produced by nucleosynthesis and are ejected into the interstellar medium via mass loss by stars, including supernova explosions.] As such, the LMC provides a nearby cosmic laboratory that may resemble the distant universe in its chemical composition. The primary Spitzer image, showing the wispy filamentary structure of Henize 206, is a four-color composite mosaic created by combining data from an infrared array camera (IRAC) at near-infrared wavelengths and the mid-infrared data from a multiband imaging photometer (MIPS). Blue represents invisible infrared light at wavelengths of 3.6 and 4.5 microns. Note that most of the stars in the field of view radiate primarily at these short infrared wavelengths. Cyan denotes emission at 5.8 microns, green depicts the 8.0 micron light, and red is used to trace the thermal emission from dust at 24 microns. The separate instrument images are included as insets to the main composite. An inclined ring of emission dominates the central and upper regions of the image. This delineates a bubble of hot, x-ray emitting gas that was blown into space when a massive star died in a supernova explosion millions of years ago. The shock waves from that explosion impacted a cloud of nearby hydrogen gas, compressed it, and started a new generation of star formation. The death of one star led to the birth of many new stars. This is particularly evident in the MIPS inset, where the 24-micron emission peaks correspond to newly formed stars. The ultraviolet and visible-light photons from the new stars are absorbed by surrounding dust and re-radiated at longer infrared wavelengths, where it is detected by Spitzer. This emission nebula was cataloged by Karl Henize (HEN-eyes) while spending 1948-1951 in South Africa doing research for his Ph.D. dissertation at the University of Michigan. Henize later became a NASA astronaut and, at age 59, became the oldest rookie to fly on the Space Shuttle during an eight-day flight of the Challenger in 1985. He died just short of his 67th birthday in 1993 while attempting to climb the north face of Mount Everest, the world's highest peak. http://photojournal.jpl.nasa.gov/catalog/PIA05517

  10. Embedded processor extensions for image processing

    NASA Astrophysics Data System (ADS)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold speed-up is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  11. A new compact, high sensitivity neutron imaging system

    NASA Astrophysics Data System (ADS)

    Caillaud, T.; Landoas, O.; Briat, M.; Rossé, B.; Thfoin, I.; Philippe, F.; Casner, A.; Bourgade, J. L.; Disdier, L.; Glebov, V. Yu.; Marshall, F. J.; Sangster, T. C.; Park, H. S.; Robey, H. F.; Amendt, P.

    2012-10-01

    We have developed a new small neutron imaging system (SNIS) diagnostic for the OMEGA laser facility. The SNIS uses a penumbral coded aperture and has been designed to record images from low yield (10^9-10^10 neutrons) implosions such as those using deuterium as the fuel. This camera was tested at OMEGA in 2009 on a rugby hohlraum energetics experiment where it recorded an image at a yield of 1.4 × 10^10. The resolution of this image was 54 μm and the camera was located only 4 meters from target chamber centre. We recently improved the instrument by adding a cooled CCD camera. The sensitivity of the new camera has been fully characterized using a linear accelerator and a 60Co γ-ray source. The calibration showed that the signal-to-noise ratio could be improved by using raw binning detection.

  12. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving image quality in low-light shooting is one of users' key needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
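
    A much-simplified sketch of mono-guided fusion is shown below: the paper's BJND-based reliability analysis is reduced to a plain luminance-dissimilarity threshold, the image pair is assumed to be registered, and the guided filter comes from opencv-contrib (cv2.ximgproc).

        import cv2
        import numpy as np

        def fuse(color_bgr, mono, radius=8, eps=100.0, sim_thresh=30):
            """Denoise color using the mono frame as guide, where reliable."""
            mono3 = cv2.cvtColor(mono, cv2.COLOR_GRAY2BGR)
            filtered = cv2.ximgproc.guidedFilter(mono3, color_bgr, radius, eps)
            # Trust fusion only where the registered pair roughly agrees.
            luma = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY).astype(np.int32)
            reliable = np.abs(luma - mono.astype(np.int32)) < sim_thresh
            out = color_bgr.copy()
            out[reliable] = filtered[reliable]
            return out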

  13. Novel Robotic Tools for Piping Inspection and Repair, Phase 1

    DTIC Science & Technology

    2014-02-13

    [Extraction residue from the report's list of figures; the recoverable captions include: Accowle ODVS cross section and reflective path; Leopard Imaging HD camera mounted to iPhone; Kogeto mounted to Leopard Imaging HD; Leopard Imaging HD camera pipe test (letters).]

  14. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
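
    In time-lapse mode, the core of such a system reduces to a short loop like the sketch below (using the picamera library); the storage path and interval are placeholders, and the GPS time-sync, telemetry, and upload logic of the real deployment are omitted.

        import time
        from picamera import PiCamera  # Raspberry Pi camera module

        camera = PiCamera(resolution=(2592, 1944))  # 5-megapixel sensor
        INTERVAL_S = 600                            # one frame per 10 minutes

        while True:
            stamp = time.strftime("%Y%m%d_%H%M%S", time.gmtime())
            camera.capture("/data/images/kilauea_%s.jpg" % stamp)  # placeholder path
            time.sleep(INTERVAL_S)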

  15. Two-Camera Acquisition and Tracking of a Flying Target

    NASA Technical Reports Server (NTRS)

    Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter

    2008-01-01

    A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.

  16. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  17. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for an aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output, so that the image is better suited to viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
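
    The paper does not publish its control law, but a generic AEC/AGC step in the same spirit might look like the sketch below: exposure is capped by a motion-blur budget derived from the platform speed, the residual brightness error is absorbed by analog gain, and a gamma curve is applied on output. All thresholds and parameter names are illustrative assumptions.

      import numpy as np

      def aec_agc_step(frame, exposure_us, gain, ground_speed_mps,
                       target_mean=118.0, allowed_motion_m=0.05,
                       max_exposure_us=2000.0, max_gain=16.0):
          """One step of a simple auto-exposure/auto-gain loop: cap exposure
          so platform motion during the exposure stays below an allowed
          ground displacement, then absorb the remaining brightness error
          with gain. A stand-in, not the paper's algorithm."""
          error = target_mean / max(float(np.mean(frame)), 1.0)
          blur_cap_us = min(max_exposure_us,
                            1e6 * allowed_motion_m / max(ground_speed_mps, 0.1))
          new_exposure = float(np.clip(exposure_us * error, 10.0, blur_cap_us))
          new_gain = float(np.clip(gain * error * exposure_us / new_exposure,
                                   1.0, max_gain))
          return new_exposure, new_gain

      def gamma_correct(frame, gamma=2.2):
          """Output gamma correction so the image suits human viewing."""
          scaled = np.asarray(frame, dtype=float) / 255.0
          return (255.0 * scaled ** (1.0 / gamma)).astype(np.uint8)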

  18. NGEE Arctic Zero Power Warming PhenoCamera Images, Barrow, Alaska, 2016

    DOE Data Explorer

    Shawn Serbin; Andrew McMahon; Keith Lewin; Kim Ely; Alistair Rogers

    2016-11-14

    StarDot NetCam SC pheno camera images collected from the top of the Barrow BEO Sled Shed. The camera was installed to monitor the BNL TEST group's prototype ZPW (Zero Power Warming) chambers during the growing season of 2016 (including early spring and late fall). Images were uploaded to the BNL FTP server every 10 minutes and renamed with the date and time of the image. See associated data "Zero Power Warming (ZPW) Chamber Prototype Measurements, Barrow, Alaska, 2016" http://dx.doi.org/10.5440/1343066.

  19. Low-cost laser speckle contrast imaging of blood flow using a webcam.

    PubMed

    Richards, Lisa M; Kazmi, S M Shams; Davis, Janel L; Olin, Katherine E; Dunn, Andrew K

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion.
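
    Speckle contrast images of this kind are typically computed as the ratio of the local standard deviation to the local mean of the raw speckle image over a small sliding window (K = sigma/mean); lower contrast indicates faster flow. The snippet below is a generic sketch of that computation, not the authors' code.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def speckle_contrast(raw, window=7):
          """Spatial laser speckle contrast K = sigma / mean over a sliding
          window of `window` x `window` pixels."""
          raw = raw.astype(np.float64)
          mean = uniform_filter(raw, window)
          mean_sq = uniform_filter(raw * raw, window)
          var = np.maximum(mean_sq - mean * mean, 0.0)
          return np.sqrt(var) / np.maximum(mean, 1e-9)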

  20. Low-cost laser speckle contrast imaging of blood flow using a webcam

    PubMed Central

    Richards, Lisa M.; Kazmi, S. M. Shams; Davis, Janel L.; Olin, Katherine E.; Dunn, Andrew K.

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion. PMID:24156082

  1. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10^-8 seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  2. Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.

    PubMed

    Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue

    2015-01-01

    A high-NA imaging system with high dynamic range is presented, based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by a factor of 2.41. We built a prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.

  3. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes, where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  4. Developments in mercuric iodide gamma ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.E.; Beyerle, A.G.; Dolin, R.C.

    A mercuric iodide gamma-ray imaging array and camera system previously described has been characterized for spatial and energy resolution. Based on these data, a new camera is being developed to more fully exploit the potential of the array. Characterization results and design criteria for the new camera will be presented. 2 refs., 7 figs.

  5. Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision

    ERIC Educational Resources Information Center

    Prull, Matthew W.; Banks, William P.

    2005-01-01

    We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…

  6. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  7. Completely optical orientation determination for an unstabilized aerial three-line camera

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2010-10-01

    Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision requires considerable effort unless extensive camera stabilization is used, and stabilization in turn entails high cost, weight, and power consumption. This contribution shows that it is possible to derive the complete absolute exterior orientation of an unstabilized line camera from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be reliably determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with the DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.

  8. VizieR Online Data Catalog: Spitzer obs. of warm dust in 83 debris disks (Ballering+, 2017)

    NASA Astrophysics Data System (ADS)

    Ballering, N. P.; Rieke, G. H.; Su, K. Y. L.; Gaspar, A.

    2018-04-01

    For our sample, we used the systems with a warm component found by Ballering+ (2013, J/ApJ/775/55), where "warm" was defined as warmer than 130K. All of these systems have data available from the Multiband Imaging Photometer for Spitzer (MIPS) at 24 and 70um and from the Spitzer Infrared Spectrograph (IRS). The selected 83 targets used for our analysis are listed in Table 1. (5 data files).

  9. Flow Interactions and Control

    DTIC Science & Technology

    2012-03-08

    [Fragment of presentation slides: an easy-to-use 3-D camera for measurements in turbulent flow fields (B. Thurow, Auburn), contrasting conventional imaging (a trade-off between depth of field and blur, where a reduced aperture restricts angular information and lowers signal levels) with light-field imaging, in which a plenoptic camera records the light field.]

  10. Tenth Anniversary Image from Camera on NASA Mars Orbiter

    NASA Image and Video Library

    2012-02-29

    NASA's Mars Odyssey spacecraft captured this image on Feb. 19, 2012, 10 years to the day after the camera recorded its first view of Mars. This image covers an area in the Nepenthes Mensae region north of the Martian equator.

  11. Full-Frame Reference for Test Photo of Moon

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images.

    Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

    The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, which provides it to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.

  12. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.

  13. Plume propagation direction determination with SO2 cameras

    NASA Astrophysics Data System (ADS)

    Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich

    2017-03-01

    SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary results of SO2 camera measurements are time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects, while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50%. This is a source of error which is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known locations of the SO2 source (i.e. volcanic vent) and camera position, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes. In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
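
    To see why the distance matters so much, note that both the physical length of the flux transect and the plume speed derived from pixel motion scale linearly with the assumed camera-plume distance, so under this simplifying assumption the retrieved flux scales with the square of that distance. The numbers below are purely illustrative:

      # Back-of-the-envelope: transect length and pixel-derived plume speed
      # both scale linearly with the assumed camera-plume distance d, so the
      # retrieved flux scales roughly as d**2 (a simplifying assumption; the
      # paper treats the full geometry).
      true_d = 8.0        # km, actual camera-plume distance
      assumed_d = 10.0    # km, distance implied by a wrong wind direction
      flux_error = (assumed_d / true_d) ** 2 - 1.0
      print(f"flux overestimated by {100 * flux_error:.0f}%")  # ~56%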

  14. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  15. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    PubMed

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

    A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, separated regions up to 12, with a region size of 160 × 160 pixels, were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
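
    As an illustration of the tracking step, the sketch below matches AKAZE (accelerated KAZE) features between consecutive frames of the instrument-mounted camera with OpenCV and returns the median image-plane shift; this is a crude stand-in for the full pose estimation described in the paper.

      import cv2
      import numpy as np

      akaze = cv2.AKAZE_create()
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

      def median_shift(prev_gray, curr_gray):
          """Median (dx, dy) pixel shift between two grayscale frames,
          estimated from matched AKAZE features."""
          kp1, des1 = akaze.detectAndCompute(prev_gray, None)
          kp2, des2 = akaze.detectAndCompute(curr_gray, None)
          if des1 is None or des2 is None:
              return None
          matches = matcher.match(des1, des2)
          if not matches:
              return None
          shifts = np.array([[kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0],
                              kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1]]
                             for m in matches])
          return np.median(shifts, axis=0)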

  16. Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras

    NASA Technical Reports Server (NTRS)

    Amer, Tahani R.; Goad, William K.

    2005-01-01

    Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer. Written using Microsoft Visual C++ and the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.

  17. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, like image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards, and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  18. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in the construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high speed acquisition. © 2015 Wiley Periodicals, Inc.
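
    For reference, the phasor approach reduces a stack of N phase-stepped modulated images to per-pixel coordinates (g, s) built from the first Fourier harmonic; a generic numpy sketch (without the per-pixel calibration the paper emphasizes) follows:

      import numpy as np

      def phasor_from_stack(stack, f_mod_hz):
          """Per-pixel phasor (g, s) and phase lifetime from a stack of N
          phase-stepped modulated images of shape (N, H, W). Calibration
          against a reference of known lifetime is omitted."""
          n = stack.shape[0]
          phases = 2 * np.pi * np.arange(n) / n
          dc = stack.sum(axis=0)
          re = np.tensordot(np.cos(phases), stack, axes=1)
          im = np.tensordot(np.sin(phases), stack, axes=1)
          g = 2 * re / np.maximum(dc, 1e-12)
          s = 2 * im / np.maximum(dc, 1e-12)
          omega = 2 * np.pi * f_mod_hz
          tau_phase = np.arctan2(s, g) / omega   # phase lifetime, seconds
          return g, s, tau_phase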

  19. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round-looking camera, being suitable for automatic analysis and judgment of the carrier's ambient environment by image-recognition algorithms, is usually applied to the opto-electronic radar of robots and smart cars. To ensure the stability and consistency of image-processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method and the electronic adjustment mode of manually inputting offsets both rely on human eyes, are inefficient, and have a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image formed by the 360-degree all-round-looking camera is ring-shaped, consisting of two concentric circles: the center of the image is the smaller circle and the outside is the bigger circle. The technique exploits exactly these characteristics. Recognizing the two circles with the Hough transform and calculating the center position yields the accurate image center, that is, the deviation between the optical axis and the center of the image sensor. The program then configures the image-sensor chip over the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it raises productivity and guarantees consistent product quality.
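
    A minimal version of the circle-finding step can be written with OpenCV's Hough circle transform; the parameters below are illustrative, and the I2C write-back to the sensor is not shown:

      import cv2

      def find_ring_center(gray):
          """Locate the ring image's center via the Hough circle transform
          and return its offset from the sensor's geometric center; the
          offset would then be written to the sensor over I2C (not shown)."""
          blurred = cv2.medianBlur(gray, 5)
          circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                                     minDist=gray.shape[0] // 4,
                                     param1=120, param2=60,
                                     minRadius=40, maxRadius=0)
          if circles is None:
              return None
          cx, cy, _ = circles[0, 0]           # first detected circle
          h, w = gray.shape
          return cx - w / 2.0, cy - h / 2.0   # (dx, dy) offset in pixels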

  20. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality followed by experimentally determined results. Obtained video framerates were doubled compared to state of the art systems, resulting in a framerate from 22 Hz for a 32×32 resolution to 0.75 Hz for a 128×128 resolution image. Additionally, the two detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.

  1. Multiband electronic transport in α-Yb1-xSrxAlB4 [x = 0, 0.19(3)] single crystals

    DOE PAGES

    Ryu, Hyejin; Abeykoon, Milinda; Bozin, Emil; ...

    2016-08-19

    Here we report evidence for multiband electronic transport in α-YbAlB4 and α-Yb0.81(2)Sr0.19(3)AlB4. Multiband transport reveals itself below 10 K in both compounds via Hall effect measurements, whereas an anisotropic magnetic ground state sets in below 3 K in α-Yb0.81(2)Sr0.19(3)AlB4. Our results show that Sr2+ substitution enhances conductivity but does not change the quasiparticle mass of the bands induced by heavy-fermion hybridization.

  2. Use of a color CMOS camera as a colorimeter

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.

  3. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the CMOS sensor image data are used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode is successfully demonstrated, using Walsh-design CAOS pixel codes up to 4096 bits long at a maximum 10 KHz code bit rate, giving a 0.4096-second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high-dynamic-range (DR) imaging is needed for un-attenuated, un-spoiled, bright, spectrally diverse targets.
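
    The CDMA principle itself is easy to demonstrate: give each spatial pixel its own ±1 Walsh code, sum the time-modulated contributions at a single point detector, and recover each pixel by correlation. The toy example below (not the CAOS hardware pipeline) shows exact recovery thanks to the orthogonality of Walsh-Hadamard codes:

      import numpy as np
      from scipy.linalg import hadamard

      def cdma_demo(pixels):
          """Toy CDMA illustration: each 'CAOS pixel' is time-modulated with
          its own Walsh code, a single detector sums them, and correlation
          with each code recovers the pixel values."""
          n = len(pixels)                       # must be a power of 2
          codes = hadamard(n)                   # +/-1 Walsh-Hadamard codes
          detector = codes.T @ pixels           # detector time series
          recovered = codes @ detector / n      # correlation receiver
          return recovered

      rng = np.random.default_rng(0)
      px = rng.uniform(0, 1, 64)
      assert np.allclose(px, cdma_demo(px))     # exact recovery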

  4. Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.

    PubMed

    Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S

    2001-04-01

    Gamma cameras with higher count rates than currently available are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design, coupled with conventional detector electronics and the traditional Anger-positioning algorithm, hinders higher count-rate imaging because of the pileup of gamma-ray signals in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, the recovery of pileup events can significantly increase the count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileups. We introduce a new electronic design called high-yield-pileup-event-recovery (HYPER) electronics for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we have developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 x 10 x 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged. The phantoms were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rate. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, and dead-time loss. At these counting rates, multiple pileup events (> or = 3 events piling together) were the predominant occurrences, and the HYPER circuit functioned well to resolve and recover these events. The full width at half maximum of the line-spread function at 3,000,000 counts per second was 1.6 times that at 60,000 counts per second. This feasibility study showed that the HYPER electronic concept works; it can significantly increase the count-rate capability and dose efficiency of gamma cameras. In a larger clinical camera, multiple HYPER-Anger circuits may be implemented to multiply the imaging count rates shown here. This technology would facilitate the use of gamma cameras for radionuclide therapy dosimetry imaging, cardiac first-pass imaging, and positron coincidence imaging, as well as the simultaneous acquisition of transmission and emission data using different isotopes with less cross-contamination between transmission and emission data.

  5. High-resolution ophthalmic imaging system

    DOEpatents

    Olivier, Scot S.; Carrano, Carmen J.

    2007-12-04

    A system for providing an improved resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved resolution retina image. The system comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved resolution retina image.

  6. Blood pulsation measurement using cameras operating in visible light: limitations.

    PubMed

    Koprowski, Robert

    2016-10-03

    The paper presents an automatic method for analyzing and processing images from a camera operating in visible light. The analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing comprises three stages: (1) image pre-processing, providing image filtration and stabilization (object location tracking); (2) main image processing, providing segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (Fast Fourier Transform) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate has the following advantages: (1) it allows for non-contact and non-invasive measurement; (2) it can be carried out using almost any camera, including webcams; (3) it can track the subject in the scene, which allows for measuring the heart rate while the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
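
    The third stage lends itself to a compact sketch: given the mean brightness of the segmented skin region over time, the pulse rate is the dominant spectral peak within a physiological band. The snippet below is a generic illustration of that step, not the paper's Matlab code:

      import numpy as np

      def pulse_from_brightness(signal, fps):
          """Estimate pulse rate from a mean-brightness time series taken
          over segmented skin pixels: detrend, window, FFT, then pick the
          spectral peak within a plausible heart-rate band."""
          x = np.asarray(signal, dtype=float)
          x -= x.mean()
          spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          band = (freqs >= 0.7) & (freqs <= 3.5)   # 42-210 beats per minute
          peak = freqs[band][np.argmax(spectrum[band])]
          return 60.0 * peak                        # beats per minute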

  7. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    NASA Astrophysics Data System (ADS)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time that is also variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  8. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.

  9. Thermographic measurements of high-speed metal cutting

    NASA Astrophysics Data System (ADS)

    Mueller, Bernhard; Renz, Ulrich

    2002-03-01

    Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To obtain images without motion blur, the integration times were reduced to a few microseconds. Since high tool wear influences the measured temperatures, a set-up was realized that enables small cutting lengths. Only single images were recorded, because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera that was used. To expose the camera when the rotating tool is in the middle of the camera image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns was realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, high spatial resolution was also necessary; it was obtained with a special close-up lens allowing a resolution of approximately 45 microns. The experimental set-up will be described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.

  10. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  11. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.

  12. In vitro near-infrared imaging of occlusal dental caries using a germanium-enhanced CMOS camera

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel

    2010-02-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  13. In vitro near-infrared imaging of occlusal dental caries using germanium enhanced CMOS camera.

    PubMed

    Lee, Chulsung; Darling, Cynthia L; Fried, Daniel

    2010-03-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  14. An optimal algorithm for reconstructing images from binary measurements

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin

    2010-01-01

    We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex. Therefore, the optimal solution can be found using convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient and the product of a vector with the Hessian matrix of the negative log-likelihood function. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
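
    The scalar case conveys the flavor of the reconstruction: with threshold T = 1, each binary pixel fires with probability 1 - exp(-lambda), so for a locally constant intensity the ML estimate has the closed form shown below (the full image problem in the paper is solved by convex optimization instead):

      import numpy as np

      def mle_light_intensity(binary_block):
          """Closed-form ML estimate of a locally constant light intensity
          lambda from an oversampled block of binary pixels with threshold
          T=1: each pixel fires with probability 1 - exp(-lambda), so
          lambda_hat = -log(1 - fraction_on)."""
          frac_on = np.clip(np.mean(binary_block), 0.0, 1.0 - 1e-9)
          return -np.log1p(-frac_on)

      # Simulate a gigavision-style block: Poisson photons, threshold at 1
      rng = np.random.default_rng(1)
      true_lambda = 0.8
      block = rng.poisson(true_lambda, size=4096) >= 1
      print(mle_light_intensity(block))  # close to 0.8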

  15. Recent technology and usage of plastic lenses in image taking objectives

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko

    2005-09-01

    Recently, plastic lenses produced by injection molding have become widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras because of their suitability for volume production and the ease of incorporating aspherical surfaces. For digital camera and camcorder objectives, it is desirable that the image point not vary with temperature changes, even though several plastic lenses are employed. At the same time, the shrinking pixel size of solid-state image sensors now requires lenses to be assembled with high accuracy. To satisfy these requirements, we have developed a compact 16x zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially; for mobile phone cameras, therefore, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with macro function, exploiting the fact that a plastic lens can be molded with mechanically functional shapes on its outer flange. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by highly precise optical elements. This camera module is therefore manufactured without optical adjustment on an automatic assembly line, achieving both high productivity and high performance. Reported here are the constructions and technical topics of the image-taking objectives described above.

  16. Curiosity ChemCam Removes Dust

    NASA Image and Video Library

    2013-04-08

    This pair of images, taken a few minutes apart, shows how laser firing by NASA's Mars rover Curiosity removes dust from the surface of a rock. The images were taken by the remote micro-imager in the laser-firing Chemistry and Camera (ChemCam) instrument.

  17. What Is an Image?

    ERIC Educational Resources Information Center

    Zetie, K. P.

    2017-01-01

    In basic physics, often in their first year of study of the subject, students meet the concept of an image, for example when using pinhole cameras and finding the position of an image in a mirror. They are also familiar with the term in photography and design, through software which allows image manipulation, even "in-camera" on most…

  18. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  19. Capturing the plenoptic function in a swipe

    NASA Astrophysics Data System (ADS)

    Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi

    2016-09-01

    Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.

  20. Lymphoscintigraphy

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...
