Automatic visibility retrieval from thermal camera images
NASA Astrophysics Data System (ADS)
Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan
2017-10-01
This study presents an automatic visibility retrieval from a FLIR A320 Stationary Thermal Imager installed on a measurement tower on the Lagern mountain in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected in the thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal contrast to their surroundings, are used to derive the maximum visibility distance that is detectable in the image. To allow stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24,000 thermal images of the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (a Vaisala FD12 operating in the NIR), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from the European Centre for Medium-Range Weather Forecasts (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrievals from systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, the thermal contrast of the monitored landscape, and the defined target size.
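A minimal sketch of the edge-based idea in Python, assuming hypothetical target boxes with surveyed distances; the coordinates, thresholds, and function name are illustrative, and the authors' noise removal and alignment steps are omitted.

```python
import numpy as np
import cv2  # OpenCV

# Hypothetical target regions with surveyed distances (km), near to far.
# The real system uses mountain silhouettes and buildings as targets.
TARGETS = [((100, 150, 40, 30), 5.0),    # ((x, y, w, h), distance_km)
           ((220, 90, 50, 25), 12.0),
           ((300, 60, 60, 20), 25.0)]

def visibility_from_thermal(img_gray, edge_density_threshold=0.02):
    """Return the distance of the farthest target with detectable edges.

    img_gray: 8-bit grayscale thermal frame, already denoised and aligned.
    """
    edges = cv2.Canny(img_gray, 50, 150)               # binary edge map
    visibility_km = 0.0
    for (x, y, w, h), dist_km in TARGETS:
        roi = edges[y:y + h, x:x + w]
        density = np.count_nonzero(roi) / roi.size
        if density >= edge_density_threshold:          # target still resolvable
            visibility_km = max(visibility_km, dist_km)
    return visibility_km
```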
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika
2015-09-01
In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative approach using visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using a smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. Many studies have addressed these difficulties with CNN-based pedestrian detection using far-infrared (FIR) light cameras (i.e., thermal cameras). However, when solar radiation increases and the background temperature reaches the same level as body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between pedestrian and non-pedestrian features within the images. Researchers have tried to solve this issue by feeding both the visible light and FIR camera images into the CNN. This, however, takes longer to process and makes the system structure more complex, as the CNN needs to process both camera images. This research instead adaptively selects the more appropriate candidate between the two pedestrian images from the visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.
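A toy sketch of the modality-selection step, assuming two illustrative features (visible brightness and thermal contrast) and a single hand-picked fuzzy rule; the paper's actual FIS rule base and CNN verifier are not reproduced here.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def select_candidate(vis_patch, fir_patch):
    """Pick the candidate patch from the more trustworthy modality.

    Inputs are grayscale numpy arrays scaled to [0, 1]. The features and
    the rule below are illustrative stand-ins for the paper's FIS.
    """
    brightness = vis_patch.mean()          # low at night
    thermal_contrast = fir_patch.std()     # low when background is body-warm
    day = triangular(brightness, 0.2, 0.6, 1.0)            # "daytime scene"
    contrast_ok = triangular(thermal_contrast, 0.05, 0.2, 0.5)
    score = day - contrast_ok              # > 0 favors the visible candidate
    return ("visible", vis_patch) if score > 0 else ("fir", fir_patch)
```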
NASA Astrophysics Data System (ADS)
Kadosh, Itai; Sarusi, Gabby
2017-10-01
The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night-vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. To maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization, fitted mechanically to the visible objective design but with different lenses, in order to maintain commonality and serve as a proof of concept. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lens sizes and mechanical housing, so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROICs) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive in the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images, which are then recorded by a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of a photon-counting camera for fast, low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (a photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera, which was designed for visible photons; the improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
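The Gray-code readout can be made concrete with a short decoder, assuming the photodiode signals have already been thresholded into bits; the function name and the 4-bit example are illustrative, not part of the DIAMICON design.

```python
def gray_to_index(bits):
    """Decode a Gray-code bit sequence (MSB first) into an integer position.

    In a Gray-code mask readout, each detector behind the mask contributes
    one bit; decoding yields the photo-event coordinate. Thresholding the
    analog photodiode signals into bits is omitted here.
    """
    value = 0
    acc = 0
    for bit in bits:
        acc ^= bit                 # Gray -> binary: b_i = g_i XOR b_(i-1)
        value = (value << 1) | acc
    return value

# Example: the 4-bit Gray code 0110 corresponds to position 4.
assert gray_to_index([0, 1, 1, 0]) == 4
```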
Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera
NASA Astrophysics Data System (ADS)
Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.
2016-08-01
Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog, or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands in the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search and rescue or similar applications which require high-resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques, which effectively improves the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G, and near-infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Data processed using our proposed method show significant visibility improvements compared with other existing solutions.
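A toy sketch of one way NIR data can be blended in to cut through smoke; the blend rule, weights, and function name are assumptions for illustration only, not the authors' processing chain.

```python
import numpy as np

def smoke_penetration_blend(r, g, nir, haze_weight=0.7):
    """Blend the NIR channel into a gray rendition to cut through smoke.

    r, g, nir: float arrays in [0, 1] from the modified camera (the blue
    channel is replaced by NIR after the filter modification). The fixed
    blend below is a toy stand-in for the paper's method.
    """
    gray_visible = 0.5 * (r + g)
    # Smoke scatters short wavelengths more strongly, so favor NIR where
    # the visible estimate looks bright and washed out relative to NIR.
    haze = np.clip(gray_visible - nir, 0.0, 1.0)   # crude haze proxy
    w = haze_weight * haze
    return (1.0 - w) * gray_visible + w * nir
```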
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us reduce the effects of noise, background, and variation in the appearance of the human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient at enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
Automatic fog detection for public safety by using camera images
NASA Astrophysics Data System (ADS)
Pagani, Giuliano Andrea; Roth, Martin; Wauben, Wiel
2017-04-01
Fog and reduced visibility have considerable impact on the performance of road, maritime, and aeronautical transportation networks. The impact ranges from minor delays to more serious congestion or unavailability of the infrastructure, and can even lead to damage or loss of lives. Visibility is traditionally measured manually by meteorological observers using landmarks at known distances in the vicinity of the observation site. Nowadays, distributed cameras facilitate inspection of more locations from one remote monitoring center; the main idea is, however, still deriving the visibility or presence of fog from an operator judging the scenery and the presence of landmarks. Visibility sensors are also used, but they are rather costly and require regular maintenance. Moreover, observers, and in particular sensors, give only visibility information that is representative of a limited area. Hence the current density of visibility observations is insufficient to give detailed information on the presence of fog. Cameras are increasingly deployed for surveillance and security in cities and for monitoring traffic along main transportation ways. In addition to this primary use, we consider cameras as potential sensors to automatically identify low-visibility conditions. Our approach uses machine learning techniques to determine the presence of fog and/or to estimate the visibility. For that purpose, a set of features is extracted from the camera images, such as the number of edges, brightness, transmission of the image dark channel, and fractal dimension. In addition to these image features, we also consider meteorological variables such as wind speed, temperature, relative humidity, and dew point as further inputs to the machine learning model. Decision tree methods trained and evaluated on 10-minute sampled images from two KNMI locations over a period of 1.5 years classify dense fog conditions (i.e., visibility below 250 m) with promising results in terms of accuracy and type I and II errors. We are currently extending the approach to images obtained with traffic-monitoring cameras along highways. This is a first step toward a solution that is closer to an operational artificial intelligence application for automatic fog-alarm signaling for public safety.
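A minimal sketch of the feature-plus-decision-tree pipeline, assuming scikit-learn and OpenCV; the feature set shown is a simplified subset, and the actual KNMI feature definitions and training data are not reproduced.

```python
import numpy as np
import cv2
from sklearn.tree import DecisionTreeClassifier

def image_features(img_bgr):
    """Extract the kinds of per-image features mentioned above."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    dark_channel = img_bgr.min(axis=2)             # per-pixel min over B, G, R
    return [np.count_nonzero(edges) / edges.size,  # edge density
            gray.mean() / 255.0,                   # brightness
            dark_channel.mean() / 255.0]           # dark-channel proxy

# X: feature rows (optionally extended with wind speed, temperature,
# humidity, dew point); y: 1 for dense fog (visibility < 250 m), else 0.
# clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
```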
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches within the image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
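A global-step sketch using scikit-learn's CCA, assuming training pairs of flattened thermal and visible images; the patch-based second step of the two-step method is omitted, and array names are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_cca_mapping(X_thermal, Y_visible, n_components=32):
    """Fit CCA on paired training images (one flattened image per row).

    n_components must not exceed min(n_samples, n_features) of either view.
    """
    cca = CCA(n_components=n_components)
    cca.fit(X_thermal, Y_visible)
    return cca

def reconstruct_visible(cca, x_thermal):
    """Predict the visible-spectrum image for a new thermal image via the
    correlated subspace (global step only)."""
    return cca.predict(x_thermal.reshape(1, -1)).reshape(-1)
```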
Advanced imaging research and development at DARPA
NASA Astrophysics Data System (ADS)
Dhar, Nibir K.; Dat, Ravi
2012-06-01
Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPAs), microelectronics, and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology; CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slower due to market volume and many technological barriers in detector materials and optics, as well as fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology for both visible and infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, and broadband and multiband detectors and focal plane arrays.
Design of a Remote Infrared Images and Other Data Acquisition Station for outdoor applications
NASA Astrophysics Data System (ADS)
Béland, M.-A.; Djupkep, F. B. D.; Bendada, A.; Maldague, X.; Ferrarini, G.; Bison, P.; Grinzato, E.
2013-05-01
The Infrared Images and Other Data Acquisition Station enables a user located inside a laboratory to acquire visible and infrared images and distances in an outdoor environment with the help of an Internet connection. This station can acquire data using an infrared camera, a visible camera, and a rangefinder. The system can be used through a web page or through Python functions.
The development of large-aperture test system of infrared camera and visible CCD camera
NASA Astrophysics Data System (ADS)
Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying
2015-10-01
Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time for installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving the image quality over the collimator's large field of view as well as the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, and it has good market potential.
Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras
NASA Astrophysics Data System (ADS)
Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi
1997-04-01
(Pb0.91La0.09)(Zr0.65Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. The shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. Applied as the diaphragm of a video camera, it could protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolving power. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10³. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 μs. The MTF reduction when placing the PLZT shutter in front of the visible video-camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, which is the sensor resolution of the video camera. Moreover, we captured visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was totally shut out in the 'OFF' state. From these tests, the PLZT shutter has been found useful as the diaphragm of a visible video camera. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.
NASA Astrophysics Data System (ADS)
Jylhä, Juha; Marjanen, Kalle; Rantala, Mikko; Metsäpuro, Petri; Visa, Ari
2006-09-01
Surveillance camera automation and camera network development are growing areas of interest. This paper proposes a competent approach to enhancing camera surveillance with Geographic Information Systems (GIS) when the camera is located at a height of 10-1000 m. A digital elevation model (DEM), a terrain class model, and a flight obstacle register serve as auxiliary information. The approach takes into account the spherical shape of the Earth and realistic terrain slopes and, also considering forests, determines visible and shadowed regions. Its efficiency arises from reduced dimensionality in the visibility computation. Image processing is aided by predicting certain features of the visible terrain in advance. The features include distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background fits well, and the potential of such knowledge-aiding for various purposes becomes apparent.
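A line-of-sight sketch over a DEM profile with the spherical-Earth correction mentioned above, assuming equally spaced samples along the view ray; atmospheric refraction and the forest-canopy handling are omitted.

```python
import numpy as np

EARTH_RADIUS_M = 6.371e6

def visibility_mask(profile_heights_m, spacing_m, camera_alt_m):
    """Mark DEM samples along a view ray as visible (True) or shadowed.

    profile_heights_m: terrain elevations sampled every spacing_m along
    the ray, starting one step from the camera. The d**2 / (2R) term is
    the spherical-Earth drop of distant terrain below the local horizontal.
    """
    d = np.arange(1, len(profile_heights_m) + 1) * spacing_m
    effective = np.asarray(profile_heights_m, dtype=float) - d**2 / (2 * EARTH_RADIUS_M)
    angle = (effective - camera_alt_m) / d          # small-angle elevation
    # A sample is visible if no nearer sample subtends a larger angle.
    prev_max = np.concatenate(([-np.inf], np.maximum.accumulate(angle)[:-1]))
    return angle >= prev_max
```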
Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera
NASA Astrophysics Data System (ADS)
Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu
2016-09-01
We performed an achromatic imaging experiment with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under visible and infrared illumination. As a result, the removal of chromatic aberration by the WFC system was successfully demonstrated. Moreover, we fabricated a demonstration setup assuming the use of a night-vision camera in an automobile and showed the effect of the WFC system.
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark that can be read by a broad range of cell phone cameras is a difficult problem, owing to the inherently low resolution and high noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. A low-resolution watermark is therefore required that can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image that is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images are presented with very low watermark visibility that can nevertheless be easily read by a typical cell phone camera.
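A minimal sketch of chrominance embedding in the Cr channel via OpenCV, assuming a small binary payload upsampled to survive low-resolution phone captures; the modulation scheme of the actual commercial watermark is more elaborate, and all names here are illustrative.

```python
import numpy as np
import cv2

def embed_chroma_watermark(img_bgr, bits, strength=3):
    """Embed a low-resolution watermark in the Cr chrominance channel.

    bits: 2-D array of {0, 1}, much smaller than the image; it is
    upsampled with nearest-neighbor interpolation so each mark cell spans
    many pixels. Embedding in chrominance exploits the eye's low chroma
    sensitivity, as described above.
    """
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.int16)
    mark = cv2.resize(bits.astype(np.uint8),
                      (img_bgr.shape[1], img_bgr.shape[0]),
                      interpolation=cv2.INTER_NEAREST)
    ycrcb[:, :, 1] += strength * (2 * mark.astype(np.int16) - 1)  # +/- strength
    ycrcb = np.clip(ycrcb, 0, 255).astype(np.uint8)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```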
Stargazing at 'Husband Hill Observatory' on Mars
NASA Technical Reports Server (NTRS)
2005-01-01
NASA's Mars Exploration Rover Spirit continues to take advantage of extra solar energy by occasionally turning its cameras upward for night sky observations. Most recently, Spirit made a series of observations of bright star fields from the summit of 'Husband Hill' in Gusev Crater on Mars. Scientists use the images to assess the cameras' sensitivity and to search for evidence of nighttime clouds or haze. The image on the left is a computer simulation of the stars in the constellation Orion. The next three images are actual views of Orion captured with Spirit's panoramic camera during exposures of 10, 30, and 60 seconds. Because Spirit is in the southern hemisphere of Mars, Orion appears upside down compared to how it would appear to viewers in the Northern Hemisphere of Earth. 'Star trails' in the longer exposures are a result of the planet's rotation. The faintest stars visible in the 60-second exposure are about as bright as the faintest stars visible with the naked eye from Earth (about magnitude 6 in astronomical terms). The Orion Nebula, famous as a nursery of newly forming stars, is also visible in these images. Bright streaks in some parts of the images aren't stars or meteors or unidentified flying objects, but are caused by solar and galactic cosmic rays striking the camera's detector. Spirit acquired these images with the panoramic camera on Martian day, or sol, 632 (Oct. 13, 2005) at around 45 minutes past midnight local time, using the camera's broadband filter (wavelengths of 739 nanometers plus or minus 338 nanometers).
In-vessel visible inspection system on KSTAR
NASA Astrophysics Data System (ADS)
Chung, Jinil; Seo, D. C.
2008-08-01
To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korea Superconducting Tokamak Advanced Research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004 × 1004 resolution at 48 frames/s or 640 × 480 resolution at 210 frames/s are used to capture images. To provide vessel inspection capability under any operating condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation of the visible inspection system, with images of the KSTAR Ohmic discharges from the first plasma campaign.
Characterization of a thinned back illuminated MIMOSA V sensor as a visible light camera
NASA Astrophysics Data System (ADS)
Bulgheroni, Antonio; Bianda, Michele; Caccia, Massimo; Cappellini, Chiara; Mozzanica, Aldo; Ramelli, Renzo; Risigo, Fabio
2006-09-01
This paper reports measurements performed both in the Silicon Detector Laboratory at the University of Insubria (Como, Italy) and at the Istituto Ricerche Solari Locarno (IRSOL) to characterize a CMOS pixel particle detector as a visible light camera. The CMOS sensor has been studied in terms of quantum efficiency in the visible spectrum, image blooming, and reset inefficiency under saturation conditions. The main goal of these measurements is to prove that this kind of particle detector can also be used as an ultra-fast, 100% fill-factor visible light camera in solar physics experiments.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 × 256 pixel image by using one quadrant of a 512 × 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 × 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
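The digital reformatting step can be illustrated as a simple reassembly of the four tap outputs, assuming a particular port geometry; real multiport CCDs may also require mirror flips for ports read out in opposite directions, and the names here are illustrative.

```python
import numpy as np

def reformat_quadrant(taps, tap_width=64):
    """Reassemble four contiguous 256 x 64 tap outputs into a 256 x 256 image.

    taps: list of four 1-D or 2-D arrays as read out in parallel, one per
    output port, ordered left to right across the quadrant.
    """
    cols = [np.asarray(t).reshape(256, tap_width) for t in taps]
    return np.hstack(cols)   # corrected 256 x 256 image
```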
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.
2005-10-01
As thermal imaging technology matures and ownership costs decrease, there is a trend to equip a greater proportion of the airborne surveillance vehicles used by security and defence forces with both visible-band and thermal infrared cameras. These cameras are used for tracking vehicles on the ground, to aid in the pursuit of villains in vehicles and on foot, while also assisting in the direction and co-ordination of emergency service vehicles as the occasion arises. These functions rely on unambiguous identification of police and other emergency service vehicles. In the visible band this is achieved by dark markings on high-contrast (light) backgrounds on the roofs of vehicles. When there is no ambient lighting, for example at night, thermal imaging is used to track both vehicles and people. In the thermal IR, the visible markings are not obvious: at the wavelengths where thermal imagers operate, either 3-5 microns or 8-12 microns, dark and light coloured materials have similarly low reflectivity. To maximise the usefulness of IR airborne surveillance, a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared is needed. In this paper we discuss the design, application, and operation of some vehicle and personnel marking materials, and show airborne IR and visible imagery of the materials in use.
The use of near-infrared photography to image fired bullets and cartridge cases.
Stein, Darrell; Yu, Jorn Chi Chung
2013-09-01
An imaging technique that is capable of reducing glare, reflection, and shadows can greatly assist the process of toolmarks comparison. In this work, a camera with near-infrared (near-IR) photographic capabilities was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. With visual comparisons, near-IR images and visible light images were comparable. The use of near-IR photography did not reveal more details and could not effectively eliminate reflections and glare associated with visible light photography. Near-IR photography showed little advantages in manual examination of fired evidence when it was compared with visible light (regular) photography. © 2013 American Academy of Forensic Sciences.
A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roquemore, A; Maingi, R; Lasnier, C
2007-06-19
In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of edge localized modes (ELMs) [1], dust migration, impurity behavior, and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was recently installed on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilization of the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet UVTV system, which has a view similar to the view developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at long distances is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators have limitations in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras are still costly, which makes it difficult to install and use them in a variety of places. For these reasons, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at short distances in indoor environments, or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
The Visible Imaging System (VIS) for the Polar Spacecraft
NASA Technical Reports Server (NTRS)
Frank, L. A.; Sigwarth, J. B.; Craven, J. D.; Cravens, J. P.; Dolan, J. S.; Dvorsky, M. R.; Hardebeck, P. K.; Harvey, J. D.; Muller, D. W.
1995-01-01
The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program, which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to the sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions, about 10 and 20 km from a spacecraft altitude of 8 R_E. The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.
Mitigation of Atmospheric Effects on Imaging Systems
2004-03-31
… focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera … sensitive in the 9-micron region. The Amber QWIP infrared camera had 256 × 256 pixels, a pixel pitch of 38 µm, a focal length of 1.8 m, and a FOV of 5.4 × 5.4 mrad … each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted …
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL) to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full-frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps); however, the frame rate depends strongly on the size of the "region of interest" that is sampled. The maximum ROI corresponds to the full detector area of ~1000 × 1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for "true-color" imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by U.S. D.O.E. contract DE-AC05-00OR22725.
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show that the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so that both cameras image the same region of the plasma discharge. The camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by U.S. D.O.E. contract DE-AC05-00OR22725.
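A sketch of the ratio computation in Python, assuming pre-registered frames and the white-light flats described above; array and function names are illustrative, not the authors' scripts.

```python
import numpy as np

def balmer_ratios(red, blue, mono, flat_red, flat_blue, flat_mono):
    """Compute D_alpha/D_beta and D_beta/D_gamma ratio images.

    red/blue: the color camera's Bayer channels (D_alpha, D_beta);
    mono: the 434 nm filtered D_gamma frame; flat_*: white-light frames
    used for the pixel-to-pixel relative correction. Frames are assumed
    already registered to 1 pixel.
    """
    eps = 1e-6                                  # avoid division by zero
    d_alpha = red / (flat_red + eps)
    d_beta = blue / (flat_blue + eps)
    d_gamma = mono / (flat_mono + eps)
    return d_alpha / (d_beta + eps), d_beta / (d_gamma + eps)
```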
Calibration Target for Curiosity Arm Camera
2012-09-10
This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty to combine semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
Phase Curves of Nix and Hydra from the New Horizons Imaging Cameras
NASA Astrophysics Data System (ADS)
Verbiscer, Anne J.; Porter, Simon B.; Buratti, Bonnie J.; Weaver, Harold A.; Spencer, John R.; Showalter, Mark R.; Buie, Marc W.; Hofgartner, Jason D.; Hicks, Michael D.; Ennico-Smith, Kimberly; Olkin, Catherine B.; Stern, S. Alan; Young, Leslie A.; Cheng, Andrew; (The New Horizons Team)
2018-01-01
NASA’s New Horizons spacecraft’s voyage through the Pluto system centered on 2015 July 14 provided images of Pluto’s small satellites Nix and Hydra at viewing angles unattainable from Earth. Here, we present solar phase curves of the two largest of Pluto’s small moons, Nix and Hydra, observed by the New Horizons LOng Range Reconnaissance Imager and Multi-spectral Visible Imaging Camera, which reveal the scattering properties of their icy surfaces in visible light. Construction of these solar phase curves enables comparisons between the photometric properties of Pluto’s small moons and those of other icy satellites in the outer solar system. Nix and Hydra have higher visible albedos than those of other resonant Kuiper Belt objects and irregular satellites of the giant planets, but not as high as small satellites of Saturn interior to Titan. Both Nix and Hydra appear to scatter visible light preferentially in the forward direction, unlike most icy satellites in the outer solar system, which are typically backscattering.
A high resolution IR/visible imaging system for the W7-X limiter
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.
2016-11-01
A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (∼1-4.5 MW/m²) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure the equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow-band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.
Use of cameras for monitoring visibility impairment
NASA Astrophysics Data System (ADS)
Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie
2018-02-01
Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
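A sketch of a resolution-robust contrast index with temporal averaging, assuming fixed target and sky regions in a stabilized view; the regression from the averaged index to extinction is site-specific and not shown, and all names are illustrative.

```python
import numpy as np

def contrast_index(img_gray, target_slice, sky_slice):
    """Sky-to-target contrast of a fixed scene element.

    Unlike gradient operators, this index is largely insensitive to pixel
    density, as noted above. Slices are (row, col) slice tuples.
    """
    target = img_gray[target_slice].mean()
    sky = img_gray[sky_slice].mean()
    return (sky - target) / (sky + 1e-6)

def averaged_index(images, target_slice, sky_slice):
    """Average the index over frames from a restricted time-of-day window
    to suppress cloud and lighting variability before relating it to
    atmospheric extinction."""
    return np.mean([contrast_index(im, target_slice, sky_slice)
                    for im in images])
```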
Two Perspectives on Forest Fire
NASA Technical Reports Server (NTRS)
2002-01-01
Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multiplexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394-fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering the wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 µm, with the dispersed light being imaged in each channel by an f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...
NASA Astrophysics Data System (ADS)
Huang, Hua-Wei; Zhang, Yang
2008-08-01
An attempt has been made to characterize the colour spectrum of methane flames under various burning conditions using RGB and HSV colour models instead of resolving the real physical spectrum. The results demonstrate that each type of flame has its own characteristic distribution in both the RGB and HSV space. It has also been observed that the averaged B and G values in the RGB model represent well the CH* and C2* emissions of a methane premixed flame. These features may be utilized for flame measurement and monitoring. The great advantage of using a conventional camera for monitoring flame properties based on the colour spectrum is that it is readily available, easy to interface with a computer, cost effective, and has a certain spatial resolution. Furthermore, it has been demonstrated that a conventional digital camera is able to image flames not only in the visible spectrum but also in the infrared. This feature is useful in avoiding the problem of image saturation typically encountered in capturing very bright sooty flames. As a result, further digital image processing and quantitative information extraction are possible. It has been identified that an infrared image also has its own distribution in both the RGB and HSV colour space in comparison with a flame image in the visible spectrum.
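As a rough illustration of this kind of colour-model analysis, the sketch below computes per-channel RGB and HSV statistics over a flame region. The file name and the simple brightness mask are assumptions for the example, not the authors' procedure.

    import cv2

    bgr = cv2.imread("flame.jpg")                      # OpenCV loads as B, G, R
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    mask = bgr.max(axis=2) > 40                        # crude flame-region mask
    b_mean, g_mean, r_mean = (bgr[..., i][mask].mean() for i in range(3))
    h_mean, s_mean, v_mean = (hsv[..., i][mask].mean() for i in range(3))

    # Per the paper, averaged B and G track CH* and C2* emission respectively,
    # so (b_mean, g_mean) can serve as a cheap flame-chemistry indicator.
    print(f"RGB means: {r_mean:.1f} {g_mean:.1f} {b_mean:.1f}")
    print(f"HSV means: {h_mean:.1f} {s_mean:.1f} {v_mean:.1f}")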
Blood pulsation measurement using cameras operating in visible light: limitations.
Koprowski, Robert
2016-10-03
The paper presents an automatic method for analysis and processing of images from a camera operating in visible light. The analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing involves three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (fast Fourier transform) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate have the following advantages: (1) they allow for non-contact and non-invasive measurement; (2) the measurement can be carried out using almost any camera, including webcams; (3) the method tracks the subject in the scene, which allows for heart rate measurement while the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
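Stage (3) reduces to spectral peak-picking on the brightness signal. The following minimal sketch uses a synthetic signal; the frame rate and physiological band limits are assumptions for illustration only.

    import numpy as np

    fps = 30.0
    t = np.arange(0, 20, 1 / fps)                 # 20 s of frames
    # Synthetic mean skin brightness with a 1.2 Hz (72 bpm) pulse component.
    brightness = (0.05 * np.sin(2 * np.pi * 1.2 * t)
                  + np.random.normal(0, 0.02, t.size))

    sig = brightness - brightness.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1 / fps)

    band = (freqs > 0.7) & (freqs < 4.0)          # 42-240 bpm physiological band
    pulse_hz = freqs[band][np.argmax(spectrum[band])]
    print(f"Estimated pulse: {60 * pulse_hz:.0f} beats per minute")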
New Horizons Tracks an Asteroid
2007-04-02
The two spots in this image are a composite of two images of asteroid 2002 JF56 taken on June 11 and June 12, 2006, with the Multispectral Visible Imaging Camera component of the New Horizons Ralph imager.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
Optimal design of an earth observation optical system with dual spectral and high resolution
NASA Astrophysics Data System (ADS)
Yan, Pei-pei; Jiang, Kai; Liu, Kai; Duan, Jing; Shan, Qiusha
2017-02-01
With the increasing demand for high-resolution remote sensing images from military and civilian users, countries around the world are optimistic about the prospects of higher-resolution remote sensing imagery. Moreover, designing a visible/infrared integrated optical system has important value for earth observation: because a visible-light system cannot identify camouflage or operate at night, the visible camera should be paired with an infrared camera. An earth observation optical system with dual spectral bands and high resolution is designed. The paper mainly researches the integrated design of the visible and infrared optical system, which makes the system lighter and smaller and achieves one satellite with two uses. The working waveband of the system covers the visible and the middle infrared (3-5 um). Clear imaging in both wavebands is achieved with a dispersive RC system. The focal length of the visible system is 3056 mm with an F/# of 10.91, and the focal length of the middle-infrared system is 1120 mm with an F/# of 4. In order to suppress middle-infrared thermal radiation and stray light, a second imaging stage is used and the narcissus phenomenon is analyzed. The system is characterized by its simple structure, and the special requirements on Modulation Transfer Function (MTF), spot size, energy concentration, distortion, etc. are all satisfied.
Tan, Tai Ho; Williams, Arthur H.
1985-01-01
An optical fiber-coupled detector visible streak camera plasma diagnostic apparatus. Arrays of optical fiber-coupled detectors are placed on the film plane of several types of particle, x-ray and visible spectrometers or directly in the path of the emissions to be measured and the output is imaged by a visible streak camera. Time and spatial dependence of the emission from plasmas generated from a single pulse of electromagnetic radiation or from a single particle beam burst can be recorded.
The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory
NASA Technical Reports Server (NTRS)
Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.
2005-01-01
Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera's requirements.
LIFTING THE VEIL OF DUST TO REVEAL THE SECRETS OF SPIRAL GALAXIES
NASA Technical Reports Server (NTRS)
2002-01-01
Astronomers have combined information from the NASA Hubble Space Telescope's visible- and infrared-light cameras to show the hearts of four spiral galaxies peppered with ancient populations of stars. The top row of pictures, taken by a ground-based telescope, represents complete views of each galaxy. The blue boxes outline the regions observed by the Hubble telescope. The bottom row represents composite pictures from Hubble's visible- and infrared-light cameras, the Wide Field and Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Astronomers combined views from both cameras to obtain the true ages of the stars surrounding each galaxy's bulge. The Hubble telescope's sharper resolution allows astronomers to study the intricate structure of a galaxy's core. The galaxies are ordered by the size of their bulges. NGC 5838, an 'S0' galaxy, is dominated by a large bulge and has no visible spiral arms; NGC 7537, an 'Sbc' galaxy, has a small bulge and loosely wound spiral arms. Astronomers think that the structure of NGC 7537 is very similar to our Milky Way. The galaxy images are composites made from WFPC2 images taken with blue (4445 Angstroms) and red (8269 Angstroms) filters, and NICMOS images taken in the infrared (16,000 Angstroms). They were taken in June, July, and August of 1997. Credits for the ground-based images: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for WFPC2 and NICMOS composites: NASA, ESA, and Reynier Peletier (University of Nottingham, United Kingdom)
Thermal-to-visible transducer (TVT) for thermal-IR imaging
NASA Astrophysics Data System (ADS)
Flusberg, Allen; Swartz, Stephen; Huff, Michael; Gross, Steven
2008-04-01
We have been developing a novel thermal-to-visible transducer (TVT), an uncooled thermal-IR imager that is based on a Fabry-Perot Interferometer (FPI). The FPI-based IR imager can convert a thermal-IR image to a video electronic image. IR radiation that is emitted by an object in the scene is imaged onto an IR-absorbing material that is located within an FPI. Temperature variations generated by the spatial variations in the IR image intensity cause variations in optical thickness, modulating the reflectivity seen by a probe laser beam. The reflected probe is imaged onto a visible array, producing a visible image of the IR scene. This technology can provide low-cost IR cameras with excellent sensitivity, low power consumption, and the potential for self-registered fusion of thermal-IR and visible images. We will describe characteristics of requisite pixelated arrays that we have fabricated.
CubeSat Nighttime Earth Observations
NASA Astrophysics Data System (ADS)
Pack, D. W.; Hardy, B. S.; Longcore, T.
2017-12-01
Satellite monitoring of visible emissions at night has been established as a useful capability for environmental monitoring and mapping the global human footprint. Pioneering work using Defense Meteorological Satellite Program (DMSP) sensors has been followed by new work using the more capable Visible Infrared Imaging Radiometer Suite (VIIRS). Beginning in 2014, we have been investigating the ability of small visible light cameras on CubeSats to contribute to nighttime Earth science studies via point-and-stare imaging. This paper summarizes our recent research using a common suite of simple visible cameras on several AeroCube satellites to carry out nighttime observations of urban areas and natural gas flares, nighttime weather (including lightning), and fishing fleet lights. Example results include: urban image examples, the utility of color imagery, urban lighting change detection, and multi-frame sequences imaging nighttime weather and large ocean areas with extensive fishing vessel lights. Our results show the potential for CubeSat sensors to improve monitoring of urban growth, light pollution, energy usage, the urban-wildland interface, the improvement of electrical power grids in developing countries, light-induced fisheries, and oil industry flare activity. In addition to orbital results, the nighttime imaging capabilities of new CubeSat sensors scheduled for launch in October 2017 are discussed.
NASA Astrophysics Data System (ADS)
Gouverneur, B.; Verstockt, S.; Pauwels, E.; Han, J.; de Zeeuw, P. M.; Vermeiren, J.
2012-10-01
Various visible and infrared cameras have been tested for the early detection of wildfires to protect archeological treasures. This analysis was made possible by the EU Firesense project (FP7-244088). Although visible cameras are low cost and give good results for daytime smoke detection, they fall short under bad visibility conditions. In order to improve the fire detection probability and reduce false alarms, several infrared bands were tested, ranging from the NIR to the LWIR. The SWIR and LWIR bands are helpful for locating the fire through smoke if there is a direct line of sight. Emphasis is also put on physical and electro-optical system modeling for forest fire detection at short and longer ranges. Fusion of the three bands (visible, SWIR, LWIR) is discussed at the pixel level for image enhancement and for fire detection.
2001 Mars Odyssey Images Earth (Visible and Infrared)
NASA Technical Reports Server (NTRS)
2001-01-01
2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) acquired these images of the Earth using its visible and infrared cameras as it left the Earth. The visible image shows the thin crescent viewed from Odyssey's perspective. The infrared image was acquired at exactly the same time, but shows the entire Earth using the infrared camera's 'night-vision' capability. In visible light the instrument sees only reflected sunlight and therefore sees nothing on the night side of the planet. In infrared light the camera observes the light emitted by all regions of the Earth. The coldest ground temperatures seen correspond to the nighttime regions of Antarctica; the warmest temperatures occur in Australia. The low temperature in Antarctica is minus 50 degrees Celsius (minus 58 degrees Fahrenheit); the high temperature at night in Australia is 9 degrees Celsius (48.2 degrees Fahrenheit). These temperatures agree remarkably well with observed temperatures of minus 63 degrees Celsius at Vostok Station in Antarctica, and 10 degrees Celsius in Australia. The images were taken at a distance of 3,563,735 kilometers (more than 2 million miles) on April 19, 2001 as the Odyssey spacecraft left Earth.
High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
MS Walheim poses with a Hasselblad camera on the flight deck of Atlantis during STS-110
2002-04-08
STS110-E-5017 (8 April 2002) --- Astronaut Rex J. Walheim, STS-110 mission specialist, holds a camera on the aft flight deck of the Space Shuttle Atlantis. A blue and white Earth is visible through the overhead windows of the orbiter. The image was taken with a digital still camera.
Kang, Han Gyu; Lee, Ho-Young; Kim, Kyeong Min; Song, Seong-Hyun; Hong, Gun Chul; Hong, Seong Jong
2017-01-01
The aim of this study is to integrate NIR, gamma, and visible imaging tools into a single endoscopic system to overcome the limitation of NIR using gamma imaging and to demonstrate the feasibility of endoscopic NIR/gamma/visible fusion imaging for sentinel lymph node (SLN) mapping with a small animal. The endoscopic NIR/gamma/visible imaging system consists of a tungsten pinhole collimator, a plastic focusing lens, a BGO crystal (11 × 11 × 2 mm³), a fiber-optic taper (front = 11 × 11 mm², end = 4 × 4 mm²), a 122-cm long endoscopic fiber bundle, an NIR emission filter, a relay lens, and a CCD camera. A custom-made Derenzo-like phantom filled with a mixture of 99mTc and indocyanine green (ICG) was used to assess the spatial resolution of the NIR and gamma images. The ICG fluorophore was excited using a light-emitting diode (LED) with an excitation filter (723-758 nm), and the emitted fluorescence photons were detected with an emission filter (780-820 nm) for a duration of 100 ms. Subsequently, the 99mTc distribution in the phantom was imaged for 3 min. The feasibility of in vivo SLN mapping with a mouse was investigated by injecting a mixture of 99mTc-antimony sulfur colloid (12 MBq) and ICG (0.1 mL) into the right paw of the mouse (C57/B6) subcutaneously. After one hour, NIR, gamma, and visible images were acquired sequentially. Subsequently, the dissected SLN was imaged in the same way as the in vivo SLN mapping. The NIR, gamma, and visible images of the Derenzo-like phantom can be obtained with the proposed endoscopic imaging system. The NIR/gamma/visible fusion image of the SLN showed a good correlation among the NIR, gamma, and visible images both for the in vivo and ex vivo imaging. We demonstrated the feasibility of the integrated NIR/gamma/visible imaging system using a single endoscopic fiber bundle. In the future, we plan to investigate miniaturization of the endoscope head and simultaneous NIR/gamma/visible imaging with dichroic mirrors and three CCD cameras. © 2016 American Association of Physicists in Medicine.
Staking out Curiosity Landing Site
2012-08-09
The geological context for the landing site of NASA's Curiosity rover is visible in this image mosaic obtained by the High-Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
Pattern recognition applied to infrared images for early alerts in fog
NASA Astrophysics Data System (ADS)
Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien
2014-09-01
Fog conditions cause severe car accidents in western countries because of the poor visibility they induce. Fog onset and intensity are still very difficult for weather services to forecast. Infrared cameras allow objects in fog to be detected and identified when visibility is too low for detection by eye. Over the past years, the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. On the other hand, pattern recognition algorithms based on Canny filters and the Hough transform are a common tool applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema has been developed to study the benefit of infrared images obtained in a fog tunnel during the fog's natural dissipation. Pattern recognition algorithms have been applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangular for an alert, …). It has been shown that road signs were detected in infrared images earlier than in visible-spectrum images, early enough to trigger useful alerts for Advanced Driver Assistance Systems.
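A sketch of such a Canny-plus-Hough pipeline on a single infrared frame, looking for circular (speed-limit-like) signs; all parameter values are illustrative guesses rather than the study's settings.

    import cv2
    import numpy as np

    frame = cv2.imread("fog_ir_frame.png", cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)     # suppress sensor noise

    # Explicit edge map; it could feed a contour/polygon test for triangular signs.
    edges = cv2.Canny(blurred, 50, 150)

    # HoughCircles runs Canny internally (param1 is its upper threshold).
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=150, param2=30, minRadius=5, maxRadius=60)
    if circles is not None:
        for cx, cy, r in np.round(circles[0]).astype(int):
            cv2.circle(frame, (cx, cy), r, 255, 2)     # mark candidate sign
    cv2.imwrite("detections.png", frame)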
Global Ultraviolet Imaging Processing for the GGS Polar Visible Imaging System (VIS)
NASA Technical Reports Server (NTRS)
Frank, L. A.
1997-01-01
The Visible Imaging System (VIS) on the NASA Goddard Space Flight Center's Polar spacecraft was launched into orbit about Earth on February 24, 1996. Since shortly after launch, the Earth Camera subsystem of the VIS has been operated nearly continuously to acquire far-ultraviolet, global images of Earth and its northern and southern auroral ovals. The only exceptions to this continuous imaging occurred for approximately 10 days at the times of the Polar spacecraft re-orientation maneuvers in October 1996 and April 1997. Since launch, approximately 525,000 images have been acquired with the VIS Earth Camera. The operational health of the VIS instrument continues to be excellent. Since launch, all systems have operated nominally, with all voltages, currents, and temperatures remaining at nominal values. In addition, the sensitivity of the Earth Camera to ultraviolet light has remained constant throughout the operation period. Revised flight software was uploaded to the VIS in order to compensate for the spacecraft wobble. This is accomplished by electronic shuttering of the sensor in synchronization with the 6-second period of the wobble, thus recovering the original spatial resolution obtainable with the VIS Earth Camera. In addition, software patches were uploaded to make the VIS immune to signal dropouts that occur in the sliprings of the despun platform mechanism. These changes have worked very well. The VIS, and in particular the VIS Earth Camera, is fully operational and will continue to acquire global auroral images as the sun progresses toward solar maximum conditions after the turn of the century.
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a three-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, the use of matched filtering with spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter where ordinary spatially matched filters are used before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
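A generic spectral matched filter of the kind layered here can be sketched as follows; the cube, target signature, and regularization are synthetic placeholders, not NV-IPM code.

    import numpy as np

    H, W, B = 64, 64, 85                     # cube size mimicking a SWIR camera
    rng = np.random.default_rng(1)
    cube = rng.normal(0.0, 1.0, (H, W, B))   # synthetic background clutter
    s = np.linspace(0.2, 1.0, B)             # assumed target spectral signature
    cube[30, 30] += 3.0 * s                  # implant one target pixel

    x = cube.reshape(-1, B)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(B)   # regularized covariance

    d = s - mu
    cinv_d = np.linalg.solve(cov, d)
    # Per-pixel matched-filter score, normalized so the output is SNR-like.
    scores = ((x - mu) @ cinv_d / np.sqrt(d @ cinv_d)).reshape(H, W)
    print(np.unravel_index(scores.argmax(), scores.shape))  # -> (30, 30)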
GETTING TO THE HEART OF A GALAXY
NASA Technical Reports Server (NTRS)
2002-01-01
This collage of images in visible and infrared light reveals how the barred spiral galaxy NGC 1365 is feeding material into its central region, igniting massive star birth and probably causing its bulge of stars to grow. The material also is fueling a black hole in the galaxy's core. A galaxy's bulge is a central, football-shaped structure composed of stars, gas, and dust. The black-and-white image in the center, taken by a ground-based telescope, displays the entire galaxy. But the telescope's resolution is not powerful enough to reveal the flurry of activity in the galaxy's hub. The blue box in the galaxy's central region outlines the area observed by the NASA Hubble Space Telescope's visible-light camera, the Wide Field and Planetary Camera 2 (WFPC2). The red box pinpoints a narrower view taken by the Hubble telescope's infrared camera, the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). A barred spiral is characterized by a lane of stars, gas, and dust slashing across a galaxy's central region. It has a small bulge that is dominated by a disk of material. The spiral arms begin at both ends of the bar. The bar is funneling material into the hub, which triggers star formation and feeds the bulge. The visible-light picture at upper left is a close-up view of the galaxy's hub. The bright yellow orb is the nucleus. The dark material surrounding the orb is gas and dust that is being funneled into the central region by the bar. The blue regions pinpoint young star clusters. In the infrared image at lower right, the Hubble telescope penetrates the dust seen in the WFPC2 picture to reveal more clusters of young stars. The bright blue dots represent young star clusters; the brightest of the red dots are young star clusters enshrouded in dust and visible only in the infrared image. The fainter red dots are older star clusters. The WFPC2 image is a composite of three filters: near-ultraviolet (3327 Angstroms), visible (5552 Angstroms), and near-infrared (8269 Angstroms). The NICMOS image, taken at a wavelength of 16,000 Angstroms, was combined with the visible and near-infrared wavelengths taken by WFPC2. The WFPC2 image was taken in January 1996; the NICMOS data were taken in April 1998. Credits for the ground-based image: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for the WFPC2 image: NASA and John Trauger (Jet Propulsion Laboratory) Credits for the NICMOS image: NASA, ESA, and C. Marcella Carollo (Columbia University)
Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.
Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung
2018-03-19
We demonstrate a non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary metal-oxide-semiconductor (CMOS) camera. Experimental bit-error rate (BER) measurements under different camera ISO values and different transmission distances are evaluated. Here, we also experimentally show that a rolling-shutter-effect (RSE) based VLC system cannot work at long transmission distances, while an under-sampled-modulation (USM) based VLC system is a good choice.
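For context, the rolling-shutter readout that RSE-based VLC exploits can be sketched in a few lines: each sensor row samples the LED at a slightly different instant, so the row means of a single frame trace the on-off keying waveform. The file name and threshold below are assumptions.

    import cv2
    import numpy as np

    frame = cv2.imread("vlc_frame.png", cv2.IMREAD_GRAYSCALE)
    waveform = frame.mean(axis=1)                    # one sample per sensor row
    bits = (waveform > waveform.mean()).astype(int)  # crude on/off binarization

    # Run lengths (rows per symbol) depend on the row readout time relative
    # to the LED modulation period; at long range the stripes wash out,
    # which is why the paper moves to under-sampled modulation instead.
    runs = np.diff(np.flatnonzero(np.diff(bits)))
    print(bits[:20], runs[:10])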
NASA Technical Reports Server (NTRS)
Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)
1998-01-01
A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses an 800 nm long-pass filter, which during overcast conditions blocks enough background light that the hydrogen flame is brighter than the background; the second CCD camera uses an 1100 nm long-pass filter, which blocks the solar background in full-sunshine conditions such that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signals from the cameras into visible images. The operator can select the appropriately filtered camera depending on the current light conditions. In addition, a narrow-bandpass-filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if the sensor detects a flame, providing additional flame detection so the operator does not overlook a small flame.
Lunar Reconnaissance Orbiter Camera (LROC) instrument overview
Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.
2010-01-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.
Vacuum compatible miniature CCD camera head
Conder, Alan D.
2000-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
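The following sketch shows one way to obtain the two optical flow fields such a method works from, using OpenCV's Farneback estimator as a stand-in for the paper's own flow computation; the file names are placeholders.

    import cv2

    def flow_field(path_t0, path_t1):
        a = cv2.imread(path_t0, cv2.IMREAD_GRAYSCALE)
        b = cv2.imread(path_t1, cv2.IMREAD_GRAYSCALE)
        return cv2.calcOpticalFlowFarneback(
            a, b, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # One flow field per modality; correspondences are then sought between
    # the two motion fields rather than between raw intensities, which are
    # not comparable across a visible/infrared pair.
    flow_vis = flow_field("vis_000.png", "vis_001.png")
    flow_ir = flow_field("ir_000.png", "ir_001.png")
    print(flow_vis.shape, flow_ir.shape)             # (H, W, 2) each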
Low-cost panoramic infrared surveillance system
NASA Astrophysics Data System (ADS)
Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George
2017-05-01
A nighttime surveillance concept consisting of a single surface omnidirectional mirror assembly and an uncooled Vanadium Oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offers significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. A description of the method by which these images are analyzed is given, along with a presentation of these results side-by-side with the output of the 50 mm LWIR imager and a panoramic visible light imager. Finally, a discussion of the concept and its future development is given.
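The coordinate-unwrapping step described above can be sketched with OpenCV's polar remap; the center and radius values below are placeholders for a real mirror calibration, and the 360 x 110 output size simply mirrors the stated field of view.

    import cv2

    raw = cv2.imread("lwir_mirror_frame.png", cv2.IMREAD_GRAYSCALE)
    center = (320, 256)      # assumed optical center of the mirror image (pixels)
    max_radius = 250         # assumed outer radius of the mirror in pixels

    # warpPolar output: x = radius (elevation), y = angle (azimuth).
    pano = cv2.warpPolar(raw, (110, 360), center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    pano = cv2.rotate(pano, cv2.ROTATE_90_COUNTERCLOCKWISE)  # azimuth along x
    cv2.imwrite("panorama.png", pano)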
NASA Astrophysics Data System (ADS)
Gonzaga, S.; et al.
2011-03-01
ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.
2012-09-06
Tracks from the first drives of NASA's Curiosity rover are visible in this image captured by the High-Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The rover is seen where the tracks end.
Fluorescent image tracking velocimeter
Shaffer, Franklin D.
1994-01-01
A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging
NASA Astrophysics Data System (ADS)
Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.
2010-04-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third "alignment" camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
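A minimal sketch of the machine-vision front end described above, using background subtraction and contour centroids as stand-ins for the system's vehicle detection and position estimation; the video source and all thresholds are assumptions.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("roadway.mp4")   # placeholder traffic video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=32)
    kernel = np.ones((3, 3), np.uint8)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 2000:   # ignore blobs too small to be vehicles
                continue
            m = cv2.moments(c)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # (cx, cy) is the per-frame position that would be handed to the
            # gamma-ray imager so a vehicle's signature can be integrated
            # while it remains in the field of view.
            print(cx, cy)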
Augmented reality in laser laboratories
NASA Astrophysics Data System (ADS)
Quercioli, Franco
2018-05-01
Laser safety glasses block visibility of the laser light. This is a big nuisance when a clear view of the beam path is required. A headset made up of a smartphone and a viewer can overcome this problem. The user looks at the image of the real world on the cellphone display, captured by its rear camera. An unimpeded and safe sight of the laser beam is then achieved. If the infrared-blocking filter of the smartphone camera is removed, the spectral sensitivity of the CMOS image sensor extends in the near infrared region up to 1100 nm. This substantial improvement widens the usability of the device to the many industrial and medical laser systems that operate in this spectral region. The paper describes this modification of a phone camera to extend its sensitivity beyond the visible and make a true augmented-reality laser viewer.
Method and apparatus for calibrating a display using an array of cameras
NASA Technical Reports Server (NTRS)
Johnson, Michael J. (Inventor); Chen, Chung-Jen (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
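A minimal sketch of the score-vector classification step, with synthetic genuine/impostor scores and scikit-learn's logistic regression standing in for the BLR classifier:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    genuine = rng.normal(0.7, 0.1, (500, 3))    # one score per algorithm
    impostor = rng.normal(0.4, 0.1, (500, 3))
    X = np.vstack([genuine, impostor])
    y = np.concatenate([np.ones(500), np.zeros(500)])

    clf = LogisticRegression()                  # stand-in for the BLR classifier
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"10-fold accuracy: {acc:.3f}")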
NASA Astrophysics Data System (ADS)
Delaney, John K.; Zeibel, Jason G.; Thoury, Mathieu; Littleton, Roy; Morales, Kathryn M.; Palmer, Michael; de la Rie, E. René
2009-07-01
Reflectance imaging spectroscopy, the collection of images in narrow spectral bands, has been developed for remote sensing of the Earth. In this paper we present findings on the use of imaging spectroscopy to identify and map artist pigments as well as to improve the visualization of preparatory sketches. Two novel hyperspectral cameras, one operating from the visible to near-infrared (VNIR) and the other in the shortwave infrared (SWIR), have been used to collect diffuse reflectance spectral image cubes on a variety of paintings. The resulting image cubes (VNIR 417 to 973 nm, 240 bands, and SWIR 970 to 1650 nm, 85 bands) were calibrated to reflectance and the resulting spectra compared with results from a fiber optics reflectance spectrometer (350 to 2500 nm). The results show good agreement between the spectra acquired with the hyperspectral cameras and those from the fiber reflectance spectrometer. For example, the primary blue pigments and their distribution in Picasso's Harlequin Musician (1924) are identified from the reflectance spectra and agree with results from X-ray fluorescence data and dispersed sample analysis. False color infrared reflectograms, obtained from the SWIR hyperspectral images, of extensively reworked paintings such as Picasso's The Tragedy (1903) are found to give improved visualization of changes made by the artist. These results show that including the NIR and SWIR spectral regions along with the visible provides for a more robust identification and mapping of artist pigments than using visible imaging spectroscopy alone.
NASA Astrophysics Data System (ADS)
Lee, Kyuhang; Ko, Jinseok; Wi, Hanmin; Chung, Jinil; Seo, Hyeonjin; Jo, Jae Heung
2018-06-01
The visible TV system used in the Korea Superconducting Tokamak Advanced Research device has been equipped with a periscope to minimize the damage to its CCD pixels from neutron radiation. The periscope, more than 2.3 m in overall length, has been designed for the visible camera system with a semi-diagonal field of view as wide as 30° and an effective focal length as short as 5.57 mm. The design performance of the periscope includes a modulation transfer function greater than 0.25 at 68 cycles/mm with low distortion. The installed periscope system has delivered image quality as designed and comparable to that of its predecessor, but with a far lower probability of neutron damage to the camera.
NASA Astrophysics Data System (ADS)
Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas
2011-03-01
This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging rough images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.
A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA MONTAGE
NASA Technical Reports Server (NTRS)
2002-01-01
This picture, taken in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2), represents a sweeping view of the 30 Doradus Nebula. But Hubble's infrared camera - the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) - has probed deeper into smaller regions of this nebula to unveil the stormy birth of massive stars. The montages of images in the upper left and upper right represent this deeper view. Each square in the montages is 15.5 light-years (19 arcseconds) across. The brilliant cluster R136, containing dozens of very massive stars, is at the center of this image. The infrared and visible-light views reveal several dust pillars that point toward R136, some with bright stars at their tips. One of them, at left in the visible-light image, resembles a fist with an extended index finger pointing directly at R136. The energetic radiation and high-speed material emitted by the massive stars in R136 are responsible for shaping the pillars and causing the heads of some of them to collapse, forming new stars. The infrared montage at upper left is enlarged in an accompanying image. Credits for NICMOS montages: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barba' (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)
Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and a 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793000 NAC and 207000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
2009-11-03
Bright sunlight on Rhea shows off the cratered surface of Saturn's second-largest moon in this image captured by NASA's Cassini orbiter. The image was taken in visible light with the Cassini spacecraft's narrow-angle camera on Sept. 21, 2009.
Phoenix Lander Amid Disappearing Spring Ice
2010-01-11
NASA's Phoenix Mars Lander, with its backshell and heatshield, is visible within this enhanced-color image of the Phoenix landing site, taken on Jan. 6, 2010, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
Stephey, L; Wurden, G A; Schmitz, O; Frerichs, H; Effenberg, F; Biedermann, C; Harris, J; König, R; Kornejew, P; Krychowiak, M; Unterberg, E A
2016-11-01
A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first-wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Both systems together provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements at several points on the W7-X vacuum vessel to yield wall recycling fluxes. The resulting photon flux from both the visible camera and the filterscopes can then be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, both of which will ultimately be used to infer the complete particle balance and the particle confinement time τ_P.
Printed circuit board for a CCD camera head
Conder, Alan D.
2002-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (e.g., 0.04 in.) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
NASA Astrophysics Data System (ADS)
Saari, H.; Akujärvi, A.; Holmlund, C.; Ojanen, H.; Kaivosoja, J.; Nissinen, A.; Niemeläinen, O.
2017-10-01
The accurate determination of the quality parameters of crops requires a spectral range from 400 nm to 2500 nm (Kawamura et al., 2010; Thenkabail et al., 2002). Presently, the hyperspectral imaging systems that cover this wavelength range consist of several separate hyperspectral imagers, and the system weight is from 5 to 15 kg. In addition, the cost of Short Wave Infrared (SWIR) cameras is high (about 50 k€). VTT has previously developed compact hyperspectral imagers for drones and CubeSats in the Visible and Very Near Infrared (VNIR) spectral ranges (Saari et al., 2013; Mannila et al., 2013; Näsilä et al., 2016). Recently, VTT has started to develop a hyperspectral imaging system that enables simultaneous imaging in the Visible, VNIR, and SWIR spectral bands. The system can be operated from a drone, on a camera stand, or attached to a tractor. The targeted main applications of the DroneKnowledge hyperspectral system are grass, peas, and cereals. In this paper the characteristics of the built system are briefly described. The system was used for spectral measurements of wheat, several grass species, and pea plants fixed to the camera mount in test fields in Southern Finland and in the greenhouse. The wheat, grass, and pea field measurements were also carried out with the system mounted on the tractor. The work is part of the Finnish nationally funded project DroneKnowledge - Towards knowledge based export of small UAS remote sensing technology.
Li, Junfeng; Wan, Xiaoxia
2018-01-15
To enrich the contents of digital archives and to guide the copying and restoration of colored relics, non-invasive methods for the extraction of painting boundaries and the identification of pigment composition are proposed in this study based on visible spectral images of colored relics. The superpixel concept is applied for the first time to the oversegmentation of visible spectral images and is implemented on visible spectral images of colored relics to extract their painting boundaries. Since each pigment is characterized by its own spectrum and the same pigment exhibits a similar geometric spectral profile, an automatic identification method is established by comparing the proximity between the geometric profile of the unknown spectrum from each superpixel and the pre-known spectra from a deliberately prepared database. The methods are validated using visible spectral images of the ancient wall paintings in the Mogao Grottoes. The visible spectral images were captured by a multispectral imaging system consisting of two broadband filters and an RGB camera with high spatial resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
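The study's proximity measure between geometric spectral profiles is not spelled out above; as a rough illustration of the idea, the sketch below compares spectra using the spectral angle, which ignores overall brightness scaling. The function names and the toy pigment database are assumptions for illustration, not from the paper.

```python
# Hypothetical sketch: match a superpixel's spectrum to a reference database
# by spectral angle (smaller angle = more similar shape). Toy data throughout.
import numpy as np

def spectral_angle(s, r):
    # Angle between spectra, insensitive to uniform brightness scaling.
    c = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(c, -1.0, 1.0))

def identify_pigment(spectrum, database):
    # Return the database entry whose reference profile is closest in angle.
    return min(database, key=lambda name: spectral_angle(spectrum, database[name]))

db = {"cinnabar": np.array([0.10, 0.20, 0.60, 0.90]),   # invented 4-band spectra
      "azurite":  np.array([0.70, 0.50, 0.20, 0.10])}
print(identify_pigment(np.array([0.12, 0.22, 0.55, 0.85]), db))  # -> cinnabar
```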
Continuous All-Sky Cloud Measurements: Cloud Fraction Analysis Based on a Newly Developed Instrument
NASA Astrophysics Data System (ADS)
Aebi, C.; Groebner, J.; Kaempfer, N.; Vuilleumier, L.
2017-12-01
Clouds play an important role in the climate system and are also a crucial parameter for the Earth's surface energy budget. Ground-based measurements of clouds provide data at high temporal resolution in order to quantify their influence on radiation. The newly developed all-sky cloud camera at PMOD/WRC in Davos (Switzerland), the infrared cloud camera (IRCCAM), is a microbolometer sensitive in the 8-14 μm wavelength range. To obtain all-sky coverage, the camera is mounted on top of a frame, looking down onto a spherical gold-plated mirror. The IRCCAM has been measuring continuously (day and night) with a time resolution of one minute in Davos since September 2015. To assess the performance of the IRCCAM, two different visible all-sky cameras (Mobotix Q24M and Schreder VIS-J1006), which can only operate during daytime, are installed in Davos. All three camera systems use different software for calculating fractional cloud coverage from images. Our study mainly analyzes the fractional cloud coverage of the IRCCAM and compares it with the fractional cloud coverage calculated from the two visible cameras. Preliminary results of the measurement accuracy of the IRCCAM compared to the visible cameras indicate that 78% of the data are within ±1 octa and 93% within ±2 octas. An uncertainty of 1-2 octas corresponds to the measurement uncertainty of human observers. Therefore, the IRCCAM shows similar performance in the detection of cloud coverage as the visible cameras and human observers, with the advantage that continuous measurements at high temporal resolution are possible.
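As a minimal illustration of the agreement statistic quoted above, the snippet below computes the fraction of paired cloud-fraction samples (in octas, 0-8) agreeing within ±1 and ±2 octas; the arrays are toy values, not IRCCAM data.

```python
import numpy as np

irccam  = np.array([3, 8, 0, 5, 7, 2])  # toy IRCCAM cloud fractions (octas)
visible = np.array([4, 8, 1, 3, 7, 2])  # toy visible-camera cloud fractions

diff = np.abs(irccam - visible)
print("within 1 octa: ", np.mean(diff <= 1))   # 0.833 for these toy values
print("within 2 octas:", np.mean(diff <= 2))   # 1.0
```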
Covering Jupiter from Earth and Space
2011-08-03
Ground-based astronomers will play a vital role in NASA's Juno mission. Images from the amateur astronomy community are needed to help the JunoCam instrument team predict what features will be visible when the camera images are taken.
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near infrared (NIR) bands (peak QE approaching 90%), as well as projected low readout noise (<2 e-). Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing
2015-01-01
This paper describes an airborne high-resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method is proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in scenes lacking manmade objects, we present an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high-quality multispectral images, with a band-to-band alignment error of the composed multispectral images of less than 2.5 pixels. PMID:26205264
An electrically tunable plenoptic camera using a liquid crystal microlens array.
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited, restricting their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and its focusing performance is experimentally characterized. The fabricated LCMLA is directly integrated with an image sensor to construct a prototype LCMLA-based plenoptic camera for acquiring raw target radiation. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently by electrically tuning the LCMLA, which is equivalent to an extension of the DOF.
Spectral survey of helium lines in a linear plasma device for use in HELIOS imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, H. B., E-mail: rayhb@ornl.gov; Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831; Biewer, T. M.
2016-11-15
Fast visible cameras and a filterscope are used to examine the visible light emission from Oak Ridge National Laboratory's Proto-MPEX. The filterscope has been configured to perform helium line ratio measurements using emission lines at 667.9, 728.1, and 706.5 nm. The measured lines should be mathematically inverted and the ratios compared to a collisional radiative model (CRM) to determine T_e and n_e. Increasing the number of measurement chords through the plasma improves the inversion calculation and the subsequent T_e and n_e localization. For the filterscope, one spatial chord measurement requires three photomultiplier tubes (PMTs) connected to pellicle beam splitters. Multiple fast visible cameras with narrowband filters are an alternative technique for performing these measurements with superior spatial resolution. Each camera contains millions of pixels, and each pixel is analogous to one filterscope PMT. The data can then be inverted and the ratios compared to the CRM to determine 2-dimensional "images" of T_e and n_e in the plasma. An assessment is made in this paper of the candidate He I emission lines for an imaging technique.
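A minimal sketch of the line-ratio lookup described above, assuming a precomputed CRM table: form the measured ratios and pick the (T_e, n_e) grid point whose modeled ratios are closest. The grid values and function names below are placeholders, not a real collisional radiative model.

```python
import numpy as np

def lookup_te_ne(r1_meas, r2_meas, te_grid, ne_grid, r1_model, r2_model):
    # Nearest CRM grid point to the measured line ratios (least squares in ratio space).
    err = (r1_model - r1_meas) ** 2 + (r2_model - r2_meas) ** 2
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return te_grid[i], ne_grid[j]

# Placeholder 3x3 "CRM" table over (Te, ne); real tables come from the model.
te = np.array([5.0, 10.0, 20.0])                                     # eV
ne = np.array([1e18, 5e18, 1e19])                                    # m^-3
r1 = np.array([[0.8, 0.9, 1.0], [1.1, 1.2, 1.3], [1.4, 1.5, 1.6]])  # 728.1/706.5
r2 = np.array([[0.3, 0.4, 0.5], [0.6, 0.7, 0.8], [0.9, 1.0, 1.1]])  # 667.9/728.1
print(lookup_te_ne(1.25, 0.72, te, ne, r1, r2))                      # -> (10.0, 5e+18)
```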
Making 3D movies of Northern Lights
NASA Astrophysics Data System (ADS)
Hivon, Eric; Mouette, Jean; Legault, Thierry
2017-10-01
We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d
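One ingredient of that astrometric calibration can be sketched with a toy fit: for an ideal equidistant fisheye, a star's radial position in the image scales with its angle from the optical axis, r = f·θ. The numbers below are invented, and the real pipeline fits a far richer distortion model.

```python
import numpy as np

theta = np.array([0.1, 0.4, 0.8, 1.2])            # star angles from axis (rad), toy
r_pix = np.array([120.5, 481.0, 963.2, 1441.8])   # measured radial positions (px)

# Least-squares slope of a line through the origin: f = sum(r*theta)/sum(theta^2).
f = np.sum(r_pix * theta) / np.sum(theta * theta)
print("fitted focal length: %.1f px/rad" % f)
residuals = r_pix - f * theta                     # large residuals flag distortion
```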
Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.
2006-01-01
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
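A minimal sketch of the registration and fusion stages named above, assuming the affine parameters have already been estimated; the flight system runs equivalent operations in real time on the DSP, whereas this uses SciPy on synthetic frames.

```python
import numpy as np
from scipy import ndimage

def register_affine(img, matrix, offset):
    # Warp img with a 2x2 affine matrix plus translation to align it to a reference.
    return ndimage.affine_transform(img, matrix, offset=offset, order=1)

def fuse(img_a, img_b, w=0.5):
    # Weighted-sum fusion of two co-registered sensor images.
    return w * img_a + (1.0 - w) * img_b

ir_a = np.random.rand(240, 320)                       # synthetic sensor frames
ir_b = register_affine(np.random.rand(240, 320), np.eye(2), offset=(2.0, -1.5))
fused = fuse(ir_a, ir_b, w=0.6)
```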
Time-of-Flight Microwave Camera.
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-05
Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz to 12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
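The quoted depth figures follow from the standard FMCW relations; a quick numerical check, taking only the speed of light and the 8-12 GHz sweep bandwidth as inputs:

```python
# Back-of-envelope check of the depth numbers quoted above.
c = 3.0e8                                   # speed of light, m/s
B = 4.0e9                                   # sweep bandwidth across X band, Hz
print("FMCW range resolution: %.1f cm" % (100 * c / (2 * B)))  # ~3.8 cm
print("path covered in 200 ps: %.1f cm" % (100 * c * 200e-12)) # 6.0 cm, as quoted
```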
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e., image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, including its optical system, image sensor, and electronics, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground, and depends only on the camera itself, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, i.e., it removes the imaging degradation imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case where the IMTF is not used. This opens the door to pushing past the limits of the camera itself, enabling high-resolution on-orbit optical imaging.
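The paper's constrained least-squares filter is not reproduced here in detail; the sketch below shows a Wiener-like frequency-domain restoration given an MTF sampled on the image's frequency grid. The toy MTF model and the regularization constant are assumptions.

```python
import numpy as np

def cls_restore(image, mtf, gamma=0.01):
    # Constrained least-squares-style restoration: F = H* G / (|H|^2 + gamma).
    G = np.fft.fft2(image)
    F = np.conj(mtf) * G / (np.abs(mtf) ** 2 + gamma)
    return np.real(np.fft.ifft2(F))

img = np.random.rand(256, 256)                       # stand-in for a raw frame
fy, fx = np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256), indexing="ij")
mtf = np.exp(-30.0 * np.hypot(fx, fy))               # toy monotone MTF model
restored = cls_restore(img, mtf)
```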
Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera
NASA Astrophysics Data System (ADS)
Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.
2017-10-01
Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near infrared parts of the electromagnetic spectrum. Two versions are available, characterized by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for geometric calibration and radiometric correction are presented in the paper.
ERIC Educational Resources Information Center
Fisher, Diane K.; Novati, Alexander
2009-01-01
On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…
Frangioni, John V
2013-06-25
A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or in laparoscopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.
Fast visible imaging of turbulent plasma in TORPEX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iraji, D.; Diallo, A.; Fasoli, A.
2008-10-15
Fast framing cameras constitute an important recent diagnostic development aimed at monitoring light emission from magnetically confined plasmas, and are now commonly used to study turbulence in plasmas. In the TORPEX toroidal device [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], low frequency electrostatic fluctuations associated with drift-interchange waves are routinely measured by means of extensive sets of Langmuir probes. A Photron Ultima APX-RS fast framing camera has recently been acquired to complement the Langmuir probe measurements, allowing comparison of the statistical and spectral properties of visible light and electrostatic fluctuations. A direct imaging system has been developed, which allows viewing the light emitted from microwave-produced plasmas tangentially and perpendicularly to the toroidal direction. The comparison of the probability density function, power spectral density, and autoconditional average of the camera data to those obtained using a multiple-head electrostatic probe covering the plasma cross section shows reasonable agreement in the case of the perpendicular view and in the plasma region where interchange modes dominate.
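As an illustration of the statistical quantities compared above, the snippet below estimates the probability density function and power spectral density of a synthetic fluctuation signal; the sampling rate and signal content are invented, standing in for camera-pixel or probe time series.

```python
import numpy as np
from scipy import signal

fs = 250000.0                                        # toy sampling rate, Hz
t = np.arange(0, 0.1, 1.0 / fs)
x = np.sin(2 * np.pi * 4000 * t) + 0.5 * np.random.randn(t.size)  # toy fluctuation

pdf, edges = np.histogram(x, bins=50, density=True)  # empirical PDF
f, psd = signal.welch(x, fs=fs, nperseg=4096)        # Welch PSD estimate
print("spectral peak near %.0f Hz" % f[np.argmax(psd)])  # ~4000 Hz here
```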
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction: Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods: Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results: Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus, or ovaries. Conclusion: 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
2005-01-17
This Cassini image shows predominantly the impact-scarred leading hemisphere of Saturn's icy moon Rhea (1,528 kilometers, or 949 miles across). The image was taken in visible light with the Cassini spacecraft narrow angle camera on Dec. 12, 2004, at a distance of 2 million kilometers (1.2 million miles) from Rhea and at a Sun-Rhea-spacecraft, or phase, angle of 30 degrees. The image scale is about 12 kilometers (7.5 miles) per pixel. The image has been magnified by a factor of two and contrast enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06564
Iodine filter imaging system for subtraction angiography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.
1993-11-01
A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single-energy broad-bandwidth X-ray beam produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. Synchronized with the movement of the iodine filter into and out of the X-ray beam, the two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube by an oscillating mirror. Both images are read out by electron beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety-two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.
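A minimal sketch of the K-edge subtraction step, assuming the paired frames have been converted to intensities: subtracting the log-images cancels attenuation common to both energies and leaves the iodine contribution. The array names are illustrative.

```python
import numpy as np

def kedge_subtract(img_filtered, img_unfiltered, eps=1e-6):
    # Log-subtraction of the iodine-filtered and unfiltered energy images.
    return np.log(img_unfiltered + eps) - np.log(img_filtered + eps)

frame_unfiltered = np.random.rand(512, 512) + 0.5   # toy frame, filter out of beam
frame_filtered   = np.random.rand(512, 512) + 0.5   # toy frame, filter in beam
iodine_map = kedge_subtract(frame_filtered, frame_unfiltered)
```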
NASA Astrophysics Data System (ADS)
Göhler, Benjamin; Lutzmann, Peter
2016-10-01
In this paper, the potential of short-wavelength infrared laser gated-viewing for penetrating the pyrotechnic effects of smoke and light/heat has been investigated by evaluating data from field trials. The potential of thermal infrared cameras for this purpose has also been considered, and the results have been compared to conventional visible cameras as a benchmark. The application area is soccer stadiums, where pyrotechnics are illegally burned in dense crowds of people, obstructing the view of stadium safety staff and police forces into the affected section of the stadium. Quantitative analyses have been carried out to identify sensor performance. Further, qualitative image comparisons are presented to give an impression of image quality during the disruptive effects of burning pyrotechnics.
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
NASA Astrophysics Data System (ADS)
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution, and RANdom SAmple Consensus (RANSAC) integrating a voting scheme. The approach presented herein improves on existing co-registration approaches in automation, robustness, reliability, and accuracy.
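The RANSAC voting idea can be shown with a generic loop; the sketch below fits a 1-D line for brevity, whereas the paper's hypotheses involve epipolar geometry and collinearity models estimated from A-SIFT matches.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=0.1, seed=0):
    # Fit y = a*x + b robustly: each minimal sample proposes a model,
    # inliers vote, and the highest-voted model wins.
    rng = np.random.default_rng(seed)
    best_votes, best = 0, (0.0, 0.0)
    for _ in range(n_iter):
        i, j = rng.choice(x.size, size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        votes = int(np.sum(np.abs(y - (a * x + b)) < tol))
        if votes > best_votes:
            best_votes, best = votes, (a, b)
    return best, best_votes

x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 0.5
y[::10] += 3.0                      # gross outliers that least squares would absorb
print(ransac_line(x, y))            # -> ((2.0, 0.5), 90)
```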
Lunar UV-visible-IR mapping interferometric spectrometer
NASA Technical Reports Server (NTRS)
Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.
1992-01-01
An ultraviolet-visible-infrared mapping digital array scanned interferometer for lunar compositional surveys was developed. The research has defined a no-moving-parts, low-weight and low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides a new, important ultraviolet spectral mapping capability, a high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, and spectral results in support of the instrument design are provided.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes with a spatial resolution of about 20 km. The optics and filters are emphasized.
Scientific CCD technology at JPL
NASA Technical Reports Server (NTRS)
Janesick, J.; Collins, S. A.; Fossum, E. R.
1991-01-01
Charge-coupled devices (CCD's) were recognized for their potential as an imaging technology almost immediately following their conception in 1970. Twenty years later, they are firmly established as the technology of choice for visible imaging. While consumer applications of CCD's, especially the emerging home video camera market, dominated manufacturing activity, the scientific market for CCD imagers has become significant. Activity of the Jet Propulsion Laboratory and its industrial partners in the area of CCD imagers for space scientific instruments is described. Requirements for scientific imagers are significantly different from those needed for home video cameras, and are described. An imager for an instrument on the CRAF/Cassini mission is described in detail to highlight achieved levels of performance.
MARS PATHFINDER CAMERA TEST IN SAEF-2
NASA Technical Reports Server (NTRS)
1996-01-01
In the Spacecraft Assembly and Encapsulation Facility-2 (SAEF-2), workers from the Jet Propulsion Laboratory (JPL) are conducting a systems test of the imager for the Mars Pathfinder. The imager (the white and metallic cylindrical element close to the hand of the worker at left) is a specially designed camera featuring a stereo-imaging system with color capability provided by a set of selectable filters. It is mounted atop an extendable mast on the Pathfinder lander. Visible to the far left is the small rover which will be deployed from the lander to explore the Martian surface. Transmitting back to Earth images of the trail left by the rover will be one of the mission objectives for the imager. To the left of the worker standing near the imager is the mast for the low-gain antenna; the round high-gain antenna is to the right. Visible in the background is the cruise stage that will carry the Pathfinder on a direct trajectory to Mars. The Mars Pathfinder is one of two Mars-bound spacecraft slated for launch aboard Delta II expendable launch vehicles this year.
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-06-30
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1990-01-01
Improved techniques are provided for alignment of two objects. The present invention is particularly suited for three-dimensional translation and three-dimensional rotational alignment of objects in outer space. A camera 18 is fixedly mounted to one object, such as a remote manipulator arm 10 of the spacecraft, while the planar reflective surface 30 is fixed to the other object, such as a grapple fixture 20. A monitor 50 displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm 10 manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
Improved docking alignment system
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1988-01-01
Improved techniques are provided for the alignment of two objects. The present invention is particularly suited for 3-D translation and 3-D rotational alignment of objects in outer space. A camera is affixed to one object, such as a remote manipulator arm of the spacecraft, while the planar reflective surface is affixed to the other object, such as a grapple fixture. A monitor displays in real-time images from the camera such that the monitor displays both the reflected image of the camera and visible marking on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
Lee, Onseok; Park, Sunup; Kim, Jaeyoung; Oh, Chilhwan
2017-11-01
The visual scoring method has been used as a subjective evaluation of pigmentary skin disorders. The severity of pigmentary skin disease, especially melasma, is evaluated using a visual scoring method, the MASI (Melasma Area and Severity Index). This study differentiates between epidermal and dermal pigmented disease and was undertaken to develop methods to quantitatively measure the severity of pigmentary skin disorders under ultraviolet illumination. The optical imaging system consists of illumination (white LED, UV-A lamp) and image acquisition (DSLR camera, air-cooled CMOS CCD camera). Each camera is equipped with a polarizing filter to remove glare. To analyze the visible and UV light images, the images are divided into the frontal, cheek, and chin regions of melasma patients, and each image then undergoes image processing. To reduce the curvature error in facial contours, a gradient mask is used. The new method of segmentation of frontal and lateral facial images is more objective for face-area measurement than the MASI score. Image analysis of darkness and homogeneity is adequate to quantify the conventional MASI score. Under visible light, active lesion margins appear in both epidermal and dermal melanin, whereas melanin is found in the epidermis under UV light. This study objectively analyzes the severity of melasma and attempts to develop new methods of image analysis with ultraviolet optical imaging equipment. Based on the results of this study, our optical imaging system could be used as a valuable tool to assess the severity of pigmentary skin disease. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated "dark side" of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
Robust Behavior Recognition in Intelligent Surveillance Environments.
Batchuluun, Ganbayar; Kim, Yeong Gon; Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2016-06-30
Intelligent surveillance systems have been studied by many researchers. These systems should operate in both daytime and nighttime, but objects are invisible in images captured by a visible light camera during the night. Therefore, near infrared (NIR) cameras and thermal cameras (based on medium-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) light) have been considered as alternatives for nighttime use. Because of the need for operation during both daytime and nighttime, and the limitation that NIR cameras require an additional NIR illuminator (which must illuminate a wide area over a great distance) during the nighttime, a dual system of visible light and thermal cameras is used in our research, and we propose a new behavior recognition method for intelligent surveillance environments. Twelve datasets were compiled by collecting data in various environments, and they were used to obtain experimental results. The recognition accuracy of our method was found to be 97.6%, thereby confirming the ability of our method to outperform previous methods.
Analysis of edge density fluctuation measured by a trial KSTAR beam emission spectroscopy system
NASA Astrophysics Data System (ADS)
Nam, Y. U.; Zoletnik, S.; Lampert, M.; Kovácsik, Á.
2012-10-01
A beam emission spectroscopy (BES) system based on a direct-imaging avalanche photodiode (APD) camera has been designed for the Korea Superconducting Tokamak Advanced Research (KSTAR) device, and a trial system has been constructed and installed to evaluate the feasibility of the design. The system contains two cameras: an APD camera for the BES measurement and a fast visible camera for position calibration. Two pneumatically actuated mirrors are positioned at the front and rear of the lens optics. The front mirror can switch the measurement between the edge and core regions of the plasma, and the rear mirror can switch between the APD and the visible camera. All systems worked properly, and the measured photon flux was reasonable, as expected from the simulation. While the measurement data from the trial system were limited, they revealed some interesting characteristics of the KSTAR plasma, suggesting future research with the fully installed BES system. The analysis results and the development plan are presented in this paper.
NASA Astrophysics Data System (ADS)
Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.
2016-03-01
Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printing products which must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or of security features demands images taken from various perspectives, with different spectral sensitivities (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer only relevant information, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements the extraction of image features well suited to detect print flaws such as blotches of ink, color smears, splashes, spots, and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat-field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
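Of the on-camera processing steps listed, flat-field correction is the most standard; a plain NumPy sketch of the usual two-reference form is given below (on the camera the equivalent runs per line in the FPGA). The frame contents are synthetic.

```python
import numpy as np

def flat_field(raw, dark, flat):
    # Subtract the dark reference and normalize by the flat (white) reference.
    gain = np.mean(flat - dark) / (flat - dark + 1e-9)
    return (raw - dark) * gain

raw  = np.random.rand(1, 2048) * 200 + 10   # toy line-scan line
dark = np.full_like(raw, 10.0)              # dark reference line
flat = np.random.rand(1, 2048) * 5 + 180    # flat reference line
corrected = flat_field(raw, dark, flat)
```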
Field trials for determining the visible and infrared transmittance of screening smoke
NASA Astrophysics Data System (ADS)
Sánchez Oliveros, Carmen; Santa-María Sánchez, Guillermo; Rosique Pérez, Carlos
2009-09-01
In order to evaluate the concealment capability of smoke, the Countermeasures Laboratory of the Institute of Technology "Marañosa" (ITM) has performed a set of tests measuring the transmittance of multispectral smoke tins in several bands of the electromagnetic spectrum. The smoke composition, based on red phosphorus, was developed and patented by this laboratory as part of a projectile development. The smoke transmittance was measured by means of thermography as well as spectroradiometry. Black bodies and halogen lamps were used as infrared and visible sources of radiation. The measurements were carried out in June 2008 at the Marañosa field (Spain) with two MWIR cameras, two LWIR cameras, one CCD visible camera, one CVF IR spectroradiometer covering the interval from 1.5 to 14 microns, and one silicon-array spectroradiometer for the 0.2 to 1.1 μm range. The transmittance and dimensions of the smoke screen were characterized in the visible band and in the MWIR (3-5 μm) and LWIR (8-12 μm) regions. The screen was about 30 meters wide and 5 meters high. The transmittances were about 0.3 in the IR bands and below 0.1 in the visible band. The screens proved effective over their persistence time in all tests. The results obtained from the imaging and non-imaging systems were in good agreement. Meteorological conditions during the tests, such as wind speed, are decisive for the use of this kind of optical countermeasure.
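A band transmittance follows from such measurements as a background-corrected ratio of the source signal seen through the smoke to the clear-line-of-sight signal. The numbers below are invented, chosen only to land near the quoted IR value of about 0.3.

```python
# Toy transmittance computation; all values are illustrative, in arbitrary units.
signal_clear = 850.0     # source radiance with a clear line of sight
signal_smoke = 310.0     # source radiance through the smoke screen
background   = 60.0      # path/ambient radiance without the source

tau = (signal_smoke - background) / (signal_clear - background)
print("band transmittance: %.2f" % tau)   # -> 0.32
```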
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
NASA Technical Reports Server (NTRS)
2005-01-01
Figure 1 (temperature map): This image composite shows comet Tempel 1 in visible (left) and infrared (right) light. The infrared picture highlights the warm, or sunlit, side of the comet, where NASA's Deep Impact probe later hit. These data were acquired about six minutes before impact. The visible image was taken by the medium-resolution camera on the mission's flyby spacecraft, and the infrared data were acquired by the flyby craft's infrared spectrometer.
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
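The enhancement algorithms in the paper are more advanced than this, but the two stages named, noise suppression and contrast enhancement, can be sketched as a median filter followed by a robust percentile stretch:

```python
import numpy as np
from scipy import ndimage

def enhance_portal(img):
    den = ndimage.median_filter(img, size=3)      # suppress impulsive noise
    lo, hi = np.percentile(den, (1.0, 99.0))      # robust display window
    return np.clip((den - lo) / (hi - lo + 1e-9), 0.0, 1.0)

portal = np.random.rand(480, 640)                 # stand-in for a portal image
display = enhance_portal(portal)
```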
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
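A toy sketch of the single-pixel measurement model: each sample is the inner product of the scene with a binary micro-mirror pattern, y = A x. A simple ridge (l2-regularized) recovery stands in here for the sparse solvers used in practice; the sizes, seed, and scene are arbitrary.

```python
import numpy as np

n, m = 64, 48                              # 64-pixel toy scene, 48 measurements
rng = np.random.default_rng(1)
x = np.zeros(n); x[[5, 20, 41]] = 1.0      # sparse toy scene
A = rng.choice([0.0, 1.0], size=(m, n))    # random binary mirror patterns
y = A @ x                                  # single-pixel detector samples

x_hat = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ y)   # ridge recovery
print(sorted(np.argsort(x_hat)[-3:]))      # typically recovers [5, 20, 41]
```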
Night vision imaging system design, integration and verification in spacecraft vacuum thermal test
NASA Astrophysics Data System (ADS)
Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing
2015-08-01
The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration, and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. As infrared cages and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate under the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To enable fine monitoring of the spacecraft and exhibition of test progress under such ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensifier ICCD camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system could operate under a vacuum thermal environment of 1.33×10⁻³ Pa and a 100 K shroud temperature in the space environment simulator, with its working temperature maintained at 5 °C during a two-day test. The night vision imaging system achieved a video quality of 60 lp/mm resolving power.
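Multi-frame accumulation trades temporal resolution for signal-to-noise ratio: averaging N frames of a static scene reduces zero-mean noise by roughly √N. A minimal sketch of the idea on synthetic data (not the BISEE implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((128, 128), 10.0)          # faint static target scene
n_frames = 16

# Simulate N noisy low-light frames and accumulate (average) them.
frames = scene + rng.normal(0.0, 5.0, size=(n_frames, 128, 128))
accumulated = frames.mean(axis=0)

snr_single = scene.mean() / frames[0].std()
snr_accum = scene.mean() / accumulated.std()
print(f"SNR gain: {snr_accum / snr_single:.2f} "
      f"(expected ~{np.sqrt(n_frames):.1f})")
```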
MISR Images Forest Fires and Hurricane
NASA Technical Reports Server (NTRS)
2000-01-01
These images show forest fires raging in Montana and Hurricane Hector swirling in the Pacific. These two unrelated, large-scale examples of nature's fury were captured by the Multi-angle Imaging SpectroRadiometer (MISR) during a single orbit of NASA's Terra satellite on August 14, 2000.
In the left image, huge smoke plumes rise from devastating wildfires in the Bitterroot Mountain Range near the Montana-Idaho border. Flathead Lake is near the upper left, and the Great Salt Lake is at the bottom right. Smoke accumulating in the canyons and plains is also visible. This image was generated from the MISR camera that looks forward at a steep angle (60 degrees); the instrument has nine different cameras viewing Earth at different angles. The smoke is far more visible when seen at this highly oblique angle than it would be in a conventional, straight-downward (nadir) view. The wide extent of the smoke is evident from comparison with the image on the right, a view of Hurricane Hector acquired from MISR's nadir-viewing camera. Both images show an area of approximately 400 kilometers (250 miles) in width and about 850 kilometers (530 miles) in length. When this image of Hector was taken, the eastern Pacific tropical cyclone was located approximately 1,100 kilometers (680 miles) west of the southern tip of Baja California, Mexico. The eye is faintly visible and measures 25 kilometers (16 miles) in diameter. The storm was beginning to weaken, and 24 hours later the National Weather Service downgraded Hector from a hurricane to a tropical storm. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information: http://www-misr.jpl.nasa.gov
Challenges and solutions for high performance SWIR lens design
NASA Astrophysics Data System (ADS)
Gardner, M. C.; Rogers, P. J.; Wilde, M. F.; Cook, T.; Shipton, A.
2016-10-01
Shortwave infrared (SWIR) cameras are becoming increasingly attractive due to the improving size and resolution and the decreasing prices of InGaAs focal plane arrays (FPAs). The rapid development of competitively priced HD-performance SWIR cameras has not been matched in SWIR imaging lenses, with the result that the lens is now more likely to be the limiting factor in imaging quality than the FPA. Adapting existing visible-region lens designs by re-coating them for SWIR will improve total transmission, but diminished image quality metrics such as MTF, and in particular degraded large-field-angle performance (vignetting, field curvature, and distortion), are serious consequences. To meet this challenge, original SWIR solutions are presented, including a wide-field-of-view fixed-focal-length lens for commercial machine vision (CMV) and a wide-angle, small, lightweight defence lens, and their relevant design considerations are discussed. Issues restricting suitable glass types are examined. The index and dispersion properties at SWIR wavelengths can differ significantly from their visible values, resulting in unusual glass combinations when matching doublet elements. The materials chosen simultaneously allow athermalization of the design and provide matched CTEs within the elements of doublets. Recently, thinned backside-illuminated InGaAs devices have made Vis-SWIR cameras viable. The SWIR band is sufficiently close to the visible that the same constituent materials can be used for AR coatings covering both bands. Keeping the lens short and the mass low can easily result in high incidence angles, which in turn complicate coating design, especially when extended beyond SWIR into the visible band. This paper also explores the potential performance of wideband Vis-SWIR AR coatings.
Remote sensing technologies are a class of instrument and sensor systems that include laser imageries, imaging spectrometers, and visible to thermal infrared cameras. These systems have been successfully used for gas phase chemical compound identification in a variety of field e...
NASA Astrophysics Data System (ADS)
Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert
2017-04-01
Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
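The accuracy assessment described here reduces to a root mean square error between dGPS checkpoint coordinates and the same targets read from the georeferenced mosaic. A minimal sketch of that computation (the array values are illustrative, not data from the study):

```python
import numpy as np

def checkpoint_rmse(dgps_xy, mosaic_xy):
    """RMSE between dGPS-surveyed checkpoints and the coordinates of the
    same targets measured in a georeferenced mosaic.
    Both arguments are (N, 2) arrays of map coordinates in metres."""
    diffs = np.asarray(dgps_xy) - np.asarray(mosaic_xy)
    return np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))

# Illustrative values only:
dgps = np.array([[100.00, 200.00], [150.00, 250.00], [120.00, 230.00]])
mosaic = np.array([[100.03, 199.96], [150.05, 250.02], [119.97, 230.04]])
print(f"RMSE: {checkpoint_rmse(dgps, mosaic):.3f} m")
```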
2012-08-20
With the addition of four high-resolution Navigation Camera (Navcam) images taken on Aug. 18 (Sol 12), Curiosity's 360-degree landing-site panorama now includes the highest point on Mount Sharp visible from the rover.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung
2018-01-01
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in a visible light environment makes iris segmentation challenging because of the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations with visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets. PMID:29748495
Mars Exploration Rover engineering cameras
Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.
2003-01-01
NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ≈4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
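The power study is framed as measuring common fusion workloads across frame rates and resolutions; on a general-purpose processor, a first proxy is simply time per fused frame at each resolution. A minimal sketch of such a benchmark, using a weighted-average fusion stand-in rather than the SDMSI codebase:

```python
import time
import numpy as np

def fuse(lwir, vis, weight=0.5):
    """Toy fusion workload: per-pixel weighted average of a registered
    LWIR frame and a visible frame (stand-in for the real pipeline)."""
    return weight * lwir + (1.0 - weight) * vis

rng = np.random.default_rng(2)
for height, width in [(240, 320), (480, 640), (1080, 1920)]:
    lwir = rng.random((height, width))
    vis = rng.random((height, width))
    t0 = time.perf_counter()
    for _ in range(50):
        fuse(lwir, vis)
    dt = (time.perf_counter() - t0) / 50
    print(f"{width}x{height}: {dt * 1e3:.2f} ms/frame (~{1 / dt:.0f} fps)")
```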
Time-of-Flight Microwave Camera
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-01-01
Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598
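The quoted numbers are internally consistent: an 8–12 GHz sweep gives 4 GHz of bandwidth, so the FMCW depth resolution is c/(2B) ≈ 3.75 cm, and 200 ps of time resolution corresponds to 6 cm of optical path in free space, as stated. A minimal sketch of the arithmetic:

```python
# FMCW / time-of-flight resolution arithmetic from the stated parameters.
c = 3.0e8                      # speed of light, m/s

bandwidth = 12e9 - 8e9         # X-band sweep: 4 GHz
depth_resolution = c / (2 * bandwidth)
print(f"FMCW depth resolution: {depth_resolution * 100:.2f} cm")  # ~3.75 cm

time_resolution = 200e-12      # 200 ps
path = c * time_resolution     # optical path length in free space
print(f"200 ps corresponds to {path * 100:.0f} cm of path")       # 6 cm
```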
Innovative optronics for the new PUMA tank
NASA Astrophysics Data System (ADS)
Fritze, J.; Münzberg, M.; Schlemmer, H.
2010-04-01
The new PUMA tank is equipped with a fully stabilized 360° periscope. The thermal imager in the periscope is identical to the imager in the gunner sight. All optronic images from the cameras can be fed to every electronic display within the tank. The thermal imagers operate with a long-wave 384x288 MCT staring focal plane array. The high quantum efficiency of MCT provides low NETD values at short integration times. The thermal imager achieves an image resolution of 768x576 pixels by means of a micro scanner. The MCT detector operates at high temperatures above 75 K with high stability in noise and correctability, and offers high reliability (MTTF) values for the complete camera in a very compact design. The paper discusses the principle and functionality of the optronic combination of a direct-view optical channel, thermal imager, and visible camera, and discusses in detail the performance of the subcomponents with respect to the demands of new tank applications.
Martian Surface & Pathfinder Airbags
1997-07-05
This image of the Martian surface was taken in the afternoon of Mars Pathfinder's first day on Mars. Taken by the Imager for Mars Pathfinder (IMP camera), the image shows a diversity of rocks strewn in the foreground. A hill is visible in the distance (the notch within the hill is an image artifact). Airbags are seen at the lower right. http://photojournal.jpl.nasa.gov/catalog/PIA00612
Sun, Guanghao; Nakayama, Yosuke; Dagdanpurev, Sumiyakhand; Abe, Shigeto; Nishimura, Hidekazu; Kirimoto, Tetsuo; Matsui, Takemi
2017-02-01
Infrared thermography (IRT) is used to screen febrile passengers at international airports, but it suffers from low sensitivity. This study explored the application of a combined visible and thermal image processing approach that uses a CMOS camera equipped with IRT to remotely sense multiple vital signs and screen patients with suspected infectious diseases. An IRT system that produced visible and thermal images was used for image acquisition. The subjects' respiration rates were measured by monitoring temperature changes around the nasal areas on thermal images; facial skin temperatures were measured simultaneously. Facial blood circulation causes tiny color changes in visible facial images that enable the determination of the heart rate. A logistic regression discriminant function predicted the likelihood of infection within 10 s, based on the measured vital signs. Sixteen patients with an influenza-like illness and 22 control subjects participated in a clinical test at a clinic in Fukushima, Japan. The vital-sign-based IRT screening system had a sensitivity of 87.5% and a negative predictive value of 91.7%; these values are higher than those of conventional fever-based screening approaches. Multiple vital-sign-based screening efficiently detected patients with suspected infectious diseases. It offers a promising alternative to conventional fever-based screening. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Perez-Mendez, V.
1997-01-21
A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.
Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung
2018-05-24
Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO to extract trained features from an input image and predict a marker's location from the drone's visible light camera sensor. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
Hurricane Matthew over Haiti seen by NASA MISR
2016-10-04
On the morning of October 4, 2016, Hurricane Matthew passed over the island nation of Haiti. A Category 4 storm, it made landfall around 7 a.m. local time (5 a.m. PDT/8 a.m. EDT) with sustained winds over 145 mph. This is the strongest hurricane to hit Haiti in over 50 years. On October 4, at 10:30 a.m. local time (8:30 a.m. PDT/11:30 a.m. EDT), the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite passed over Hurricane Matthew. This animation was made from images taken by MISR's downward-pointing (nadir) camera, whose swath is 235 miles (378 kilometers) across; this is much narrower than the massive diameter of Matthew, so only the hurricane's eye and a portion of the storm's right side are visible. Haiti is completely obscured by Matthew's clouds, but part of the Bahamas is visible to the north. Several hot towers are visible within the central part of the storm, and another at the top right of the image. Hot towers are enormous thunderheads that punch through the tropopause (the boundary between the lowest layer of the atmosphere, the troposphere, and the next level, the stratosphere). The rugged topography of Haiti causes uplift within the storm, generating these hot towers and fueling even more rain than Matthew would otherwise dump on the country. MISR has nine cameras fixed at different angles, which capture images of the same point on the ground within about seven minutes. This animation was created by blending images from these nine cameras. The change in angle between the images causes a much larger apparent motion from south to north than actually exists, but the rotation of the storm is real motion. From this animation, you can get an idea of the incredible height of the hot towers, especially the one to the upper right. The counter-clockwise rotation of Matthew around its closed (cloudy) eye is also visible. These data were acquired during Terra orbit 89345. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21070
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked continuously over long distances or across multiple cameras, applicable, for example, in wireless sensor networks for surveillance or navigation.
Confocal retinal imaging using a digital light projector with a near infrared VCSEL source
NASA Astrophysics Data System (ADS)
Muller, Matthew S.; Elsner, Ann E.
2018-02-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.
2013-07-17
These craters on Tharsis are first visible as new dark spots observed by NASA's Mars Reconnaissance Orbiter Context Camera (CTX), which can view much larger areas, and are then imaged by HiRISE for a close-up look.
2000-11-21
This image is one of seven from the narrow-angle camera on NASA's Cassini spacecraft assembled as a brief movie of cloud movements on Jupiter. The smallest features visible are about 500 kilometers (about 300 miles) across.
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled high optical attenuation factor of 200 on the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright light test target scenes, successfully demonstrated is a proof-of-concept visible band CAOS smart camera operating in the CDMA-mode using Walsh design CAOS pixel codes of up to 4096 bits in length with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm per side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright light spectrally diverse targets.
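In CDMA-mode CAOS imaging, each pixel is time-modulated with its own orthogonal Walsh code, all pixels are summed on one point detector, and per-pixel intensities are recovered by correlation. A minimal sketch of that encode/decode cycle at toy scale (not the authors' hardware pipeline):

```python
import numpy as np
from scipy.linalg import hadamard

n_pixels = 16
H = hadamard(n_pixels)              # rows = orthogonal Walsh/Hadamard codes

rng = np.random.default_rng(3)
intensities = rng.uniform(0.0, 1.0, n_pixels)   # unknown pixel irradiances

# Encode: in time bin t, every pixel i is flipped by its code bit H[i, t]
# and the point detector sums all pixels -> a time series of length n_pixels.
detector_signal = H.T @ intensities

# Decode: correlate the detector time series with each pixel's code.
# Since H @ H.T = n_pixels * I, the correlation recovers each pixel exactly.
recovered = (H @ detector_signal) / n_pixels
print(np.allclose(recovered, intensities))      # True
```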
Multispectral THz-VIS passive imaging system for hidden threats visualization
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw
2013-10-01
Terahertz imaging is the latest entry into the crowded field of imaging technologies, and many applications are emerging for this relatively new technology. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, cloth, wood, and ceramics that are usually opaque at optical wavelengths. T-rays also have large potential in the field of hidden-object detection because the radiation is not harmful to humans. The main difficulty in THz imaging systems is low image quality, so it is justified to combine THz images with high-resolution images from a visible camera. An imaging system is usually composed of various subsystems, and many imaging systems use devices working in various spectral ranges. Our goal is to build a system, harmless to humans, for screening and detection of hidden objects using THz and VIS cameras.
HUBBLE FINDS A BARE BLACK HOLE POURING OUT LIGHT
NASA Technical Reports Server (NTRS)
2002-01-01
NASA's Hubble Space Telescope has provided a never-before-seen view of a warped disk flooded with a torrent of ultraviolet light from hot gas trapped around a suspected massive black hole. [Right] This composite image of the core of the galaxy was constructed by combining a visible light image taken with Hubble's Wide Field Planetary Camera 2 (WFPC2), with a separate image taken in ultraviolet light with the Faint Object Camera (FOC). While the visible light image shows a dark dust disk, the ultraviolet image (color-coded blue) shows a bright feature along one side of the disk. Because Hubble sees ultraviolet light reflected from only one side of the disk, astronomers conclude the disk must be warped like the brim of a hat. The bright white spot at the image's center is light from the vicinity of the black hole which is illuminating the disk. [Left] A ground-based telescopic view of the core of the elliptical galaxy NGC 6251. The inset box shows Hubble Space Telescope's field of view. The galaxy is 300 million light-years away in the constellation Ursa Minor. Photo Credit: Philippe Crane (European Southern Observatory), and NASA
Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples
Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.
2014-01-01
Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510
Contrast enhancement for in vivo visible reflectance imaging of tissue oxygenation.
Crane, Nicole J; Schultz, Zachary D; Levin, Ira W
2007-08-01
Results are presented illustrating a straightforward algorithm to be used for real-time monitoring of oxygenation levels in blood cells and tissue based on the visible spectrum of hemoglobin. Absorbance images obtained from the visible reflection of white light through separate red and blue bandpass filters recorded by monochrome charge-coupled devices (CCDs) are combined to create enhanced images that suggest a quantitative correlation between the degree of oxygenated and deoxygenated hemoglobin in red blood cells. The filter bandpass regions are chosen specifically to mimic the color response of commercial 3-CCD cameras, representative of detectors with which the operating room laparoscopic tower systems are equipped. Adaptation of this filter approach is demonstrated for laparoscopic donor nephrectomies in which images are analyzed in terms of real-time in vivo monitoring of tissue oxygenation.
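The algorithm combines absorbance images from red- and blue-filtered reflectance to index oxygenation; in the simplest reading, each channel's absorbance is A = −log10(I/I_ref) and the two bands are then differenced. A minimal sketch under that assumption (the authors' exact combination step is not reproduced here):

```python
import numpy as np

def absorbance(reflect, white_ref, eps=1e-6):
    """Absorbance image A = -log10(I / I_ref) from a reflectance frame
    and a white-reference frame from the same monochrome CCD."""
    return -np.log10(np.clip(reflect / (white_ref + eps), eps, None))

def oxygenation_index(red_img, blue_img, red_ref, blue_ref):
    # Difference of band absorbances as a simple oxy/deoxy-hemoglobin
    # contrast; an assumed stand-in for the paper's enhancement step.
    return absorbance(red_img, red_ref) - absorbance(blue_img, blue_ref)

# Usage with synthetic frames:
rng = np.random.default_rng(4)
red, blue = rng.uniform(0.3, 0.9, (2, 64, 64))
ref = np.ones((64, 64))
index = oxygenation_index(red, blue, ref, ref)
```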
High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina
Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng
2010-01-01
A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743
A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-01-01
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
Yang, Xiaofeng; Wu, Wei; Wang, Guoan
2015-04-01
This paper presents a surgical optical navigation system with non-invasive, real-time positioning capability for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging, combining in vivo fluorescence excitation technology, multi-channel spectral camera technology, and image fusion software. A visible and near-infrared ring LED excitation source, multi-channel band-pass filters, a two-CCD spectral camera sensor, and a computer system were integrated, and, as a result, a new surgical optical navigation system was successfully developed. When a near-infrared fluorescent agent is injected, the system displays anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes, and tumor edges that the surgeon cannot find with the naked eye intra-operatively. Our research will effectively guide the surgeon in removing tumor tissue and significantly improve the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.
2016-11-21
Surface features are visible on Saturn's moon Prometheus in this view from NASA's Cassini spacecraft. Most of Cassini's images of Prometheus are too distant to resolve individual craters, making views like this a rare treat. Saturn's narrow F ring, which makes a diagonal line beginning at top center, appears bright and bold in some Cassini views, but not here. Since the sun is nearly behind Cassini in this image, most of the light hitting the F ring is being scattered away from the camera, making it appear dim. Light-scattering behavior like this is typical of rings comprised of small particles, such as the F ring. This view looks toward the unilluminated side of the rings from about 14 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 24, 2016. The view was acquired at a distance of approximately 226,000 miles (364,000 kilometers) from Prometheus and at a sun-Prometheus-spacecraft, or phase, angle of 51 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20508
Potential for application of an acoustic camera in particle tracking velocimetry.
Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim
2008-11-01
We explored the potential and limitations for applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases reduce with the increase in the mean particle velocity and approach minimum as the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang
1994-01-01
This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.
2005-10-04
During its time in orbit, Cassini has spotted many beautiful cat's eye-shaped patterns like the ones visible here. These patterns occur in places where the winds and the atmospheric density at one latitude are different from those at another latitude. The opposing east-west flowing cloud bands are the dominant patterns seen here and elsewhere in Saturn's atmosphere. Contrast in the image was enhanced to aid the visibility of atmospheric features. The image was taken with the Cassini spacecraft wide-angle camera on Aug. 20, 2005. http://photojournal.jpl.nasa.gov/catalog/PIA07600
Vemmer, T; Steinbüchel, C; Bertram, J; Eschner, W; Kögler, A; Luig, H
1997-03-01
The purpose of this study was to determine whether data acquisition in list mode and iterative tomographic reconstruction would render feasible cardiac phase-synchronized thallium-201 single-photon emission tomography (SPET) of the myocardium under routine conditions, without modifications in tracer dose, acquisition time, or number of steps of the gamma camera. Seventy non-selected patients underwent 201Tl SPET imaging according to a routine protocol (74 MBq/2 mCi 201Tl, 180 degrees rotation of the gamma camera, 32 steps, 30 min). Gamma camera data, ECG, and a time signal were recorded in list mode. The cardiac cycle was divided into eight phases, the end-diastolic phase encompassing the QRS complex, and the end-systolic phase the T wave. Both phase- and non-phase-synchronized tomograms based on the same list mode data were reconstructed iteratively. Phase-synchronized and non-synchronized images were compared. Patients were divided into two groups depending on whether or not coronary artery disease had been definitely diagnosed prior to SPET imaging. The numbers of patients in both groups demonstrating defects visible on the phase-synchronized but not on the non-synchronized images were compared. It was found that both postexercise and redistribution phase tomograms were suited for interpretation. The changes from end-diastolic to end-systolic images allowed a comparative assessment of regional wall motility and tracer uptake. End-diastolic tomograms provided the best definition of defects. Additional defects not apparent on non-synchronized images were visible in 40 patients, six of whom did not show any defect on the non-synchronized images. Of 42 patients in whom coronary artery disease had been definitely diagnosed, 19 had additional defects not visible on the non-synchronized images, in comparison to 21 of 28 in whom coronary artery disease was suspected (P < 0.02; chi 2). It is concluded that cardiac phase-synchronized 201Tl SPET of the myocardium was made feasible by list mode data acquisition and iterative reconstruction. The additional findings on the phase-synchronized tomograms, not visible on the non-synchronized ones, represented genuine defects. Cardiac phase-synchronized 201Tl SPET is advantageous in allowing simultaneous assessment of regional wall motion and tracer uptake, and in visualizing smaller defects.
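List-mode acquisition makes retrospective cardiac gating a pure software step: each recorded event is assigned to one of eight phase bins according to its time offset within the enclosing R-R interval. A minimal sketch of that binning on synthetic timestamps (not the authors' reconstruction chain):

```python
import numpy as np

def phase_bins(event_times, r_peak_times, n_phases=8):
    """Assign list-mode events to cardiac phase bins.
    event_times: event timestamps (s); r_peak_times: ECG R-peak times (s).
    Each event's phase is its fractional position within its R-R interval."""
    e = np.asarray(event_times)
    r = np.asarray(r_peak_times)
    idx = np.searchsorted(r, e, side="right") - 1      # preceding R peak
    valid = (idx >= 0) & (idx < len(r) - 1)            # inside a full interval
    frac = (e[valid] - r[idx[valid]]) / (r[idx[valid] + 1] - r[idx[valid]])
    return valid, np.minimum((frac * n_phases).astype(int), n_phases - 1)

# Synthetic example: ~1 s cardiac cycle, events spread over ~10 s.
rng = np.random.default_rng(5)
r_peaks = np.cumsum(rng.normal(1.0, 0.05, 11))
events = np.sort(rng.uniform(r_peaks[0], r_peaks[-1], 1000))
valid, bins = phase_bins(events, r_peaks)
print(np.bincount(bins, minlength=8))                  # events per phase bin
```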
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require image registration before fusion because they use two separate cameras, and the performance of registration technology has yet to improve. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, the coaxial light entering through a single lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.
NASA Technical Reports Server (NTRS)
2004-01-01
The Mars Exploration Rover Opportunity finished observations of the prominent rock outcrop it has been studying during its 51 martian days, or sols, on Mars, and is currently on the hunt for new discoveries. This image from the rover's navigation camera atop its mast features Opportunity's lander--its temporary home for the six-month cruise to Mars. The rover's soil survey traverse plan involves arcing around its landing site, called the Challenger Memorial Station, and over the trench it made on sol 23. In this image, Opportunity is situated about 6.2 meters (about 20.3 feet) from the lander. Rover tracks zig-zag along the surface. Bounce marks and airbag retraction marks are visible around the lander. The calibration target or sundial, which both rover panoramic cameras use to verify the true colors and brightness of the red planet, is visible on the back end of the rover.
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Development of a 3-D visible limiter imaging system for the HSX stellarator
NASA Astrophysics Data System (ADS)
Buelo, C.; Stephey, L.; Anderson, F. S. B.; Eisert, D.; Anderson, D. T.
2017-12-01
A visible camera diagnostic has been developed to study the Helically Symmetric eXperiment (HSX) limiter plasma interaction. A straight line view from the camera location to the limiter was not possible due to the complex 3D stellarator geometry of HSX, so it was necessary to insert a mirror/lens system into the plasma edge. A custom support structure for this optical system tailored to the HSX geometry was designed and installed. This system holds the optics tube assembly at the required angle for the desired view to both minimize system stress and facilitate robust and repeatable camera positioning. The camera system has been absolutely calibrated and using Hα and C-III filters can provide hydrogen and carbon photon fluxes, which through an S/XB coefficient can be converted into particle fluxes. The resulting measurements have been used to obtain the characteristic penetration length of hydrogen and C-III species. The hydrogen λiz value shows reasonable agreement with the value predicted by a 1D penetration length calculation.
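The photon-to-particle conversion mentioned here is the standard S/XB (ionizations per photon) scaling: the particle influx is the absolutely calibrated photon flux multiplied by the S/XB coefficient for the observed line. A minimal sketch of the conversion (the numerical values are placeholders, not HSX calibration data):

```python
# S/XB conversion from a calibrated photon flux to a particle flux (sketch).
# Values are illustrative placeholders, not HSX measurements.
photon_flux = 2.0e20   # H-alpha photons / m^2 / s from the calibrated camera
s_xb = 15.0            # ionizations per emitted photon at these edge conditions

particle_flux = photon_flux * s_xb   # hydrogen atoms / m^2 / s
print(f"particle influx: {particle_flux:.2e} m^-2 s^-1")
```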
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make video obtained from different image sensors complement each other, yielding video that is rich in information and well suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low light, but they capture poor image detail and do not suit the human visual system. Visible light imaging alone can produce detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational cost that occupy considerable memory resources and demand high clock rates; such fusion is usually implemented in software (C++, C, etc.) and far less often on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted average method is implemented on the hardware platform to perform the fusion. The resulting fused image effectively increases the amount of information acquired from the scene.
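The gray-level weighted average named here is the simplest fusion rule: after registration, each output pixel is a fixed convex combination of the infrared and visible gray levels, which is why it maps cheaply onto FPGA logic. A minimal software sketch of the rule (the fixed-point form is an assumption about a typical FPGA mapping, not this paper's RTL):

```python
import numpy as np

def weighted_average_fusion(ir, vis, w_ir=0.5):
    """Gray-level weighted average fusion of registered 8-bit frames:
    F = w * IR + (1 - w) * VIS, computed here in fixed point
    (Q8 weights) the way an FPGA pipeline typically would."""
    w = int(round(w_ir * 256))                        # weight as Q8 fixed point
    fused = (ir.astype(np.uint32) * w +
             vis.astype(np.uint32) * (256 - w)) >> 8  # divide by 256 via shift
    return fused.astype(np.uint8)

rng = np.random.default_rng(6)
ir = rng.integers(0, 256, (240, 320), dtype=np.uint8)
vis = rng.integers(0, 256, (240, 320), dtype=np.uint8)
fused = weighted_average_fusion(ir, vis, w_ir=0.6)
```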
Northern California and San Francisco Bay
NASA Technical Reports Server (NTRS)
2000-01-01
The left image of this pair was acquired by MISR's nadir camera on August 17, 2000 during Terra orbit 3545. Toward the top, and nestled between the Coast Range and the Sierra Nevadas, are the green fields of the Sacramento Valley. The city of Sacramento is the grayish area near the right-hand side of the image. Further south, San Francisco and other cities of the Bay Area are visible. On the right is a zoomed-in view of the area outlined by the yellow polygon. It highlights the southern end of San Francisco Bay, and was acquired by MISR's airborne counterpart, AirMISR, during an engineering check-out flight on August 25, 1997. AirMISR flies aboard a NASA ER-2 high-altitude aircraft and contains a single camera that rotates to different view angles. When this image was acquired, the AirMISR camera was pointed 70 degrees forward of the vertical. Colorful tidal flats are visible in both the AirMISR and MISR imagery. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information: http://www-misr.jpl.nasa.gov
Calibration method for video and radiation imagers
Cunningham, Mark F [Oak Ridge, TN; Fabris, Lorenzo [Knoxville, TN; Gee, Timothy F [Oak Ridge, TN; Goddard, Jr., James S.; Karnowski, Thomas P [Knoxville, TN; Ziock, Klaus-peter [Clinton, TN
2011-07-05
The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
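The calibration itself is a one-dimensional least squares fit between the markers' imager pixel x-coordinates and their surveyed real-world x-coordinates. A minimal sketch, assuming a linear mapping (the patent does not spell out the fit order) and illustrative marker values:

```python
import numpy as np

# Marker positions: HERIP pixel x-coordinates and measured real-world
# x-coordinates (metres). Values are illustrative only.
pixel_x = np.array([102.0, 240.0, 415.0, 590.0, 760.0])
world_x = np.array([-2.0, -1.0, 0.3, 1.6, 2.8])

# Least squares fit world_x ~ a * pixel_x + b (assumed linear mapping).
a, b = np.polyfit(pixel_x, world_x, deg=1)

def world_from_pixel(px):
    return a * px + b

def pixel_from_world(wx):
    return (wx - b) / a   # place a vehicle's real-world x onto the imager

print(f"fit: world_x = {a:.4f} * pixel_x + {b:.3f}")
```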
Time-of-flight range imaging for underwater applications
NASA Astrophysics Data System (ADS)
Merbold, Hannes; Catregn, Gion-Pol; Leutenegger, Tobias
2018-02-01
Precise and low-cost range imaging in underwater settings with object distances on the meter level is demonstrated. This is addressed through silicon-based time-of-flight (TOF) cameras operated with light emitting diodes (LEDs) at visible, rather than near-IR wavelengths. We find that the attainable performance depends on a variety of parameters, such as the wavelength dependent absorption of water, the emitted optical power and response times of the LEDs, or the spectral sensitivity of the TOF chip. An in-depth analysis of the interplay between the different parameters is given and the performance of underwater TOF imaging using different visible illumination wavelengths is analyzed.
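For a continuous-wave ToF camera of this kind, distance follows from the measured phase of the amplitude-modulated illumination, with the propagation speed reduced by water's refractive index (about 1.33 in the visible). A sketch under those assumptions; the modulation frequency and phase value are illustrative.

```python
import numpy as np

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # approximate refractive index of water in the visible

def cw_tof_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Object distance from the phase shift of a CW-ToF measurement in water."""
    c_water = C_VACUUM / N_WATER
    # Round trip: d = c * phase / (4 * pi * f_mod)
    return c_water * phase_rad / (4.0 * np.pi * mod_freq_hz)

# Example: 20 MHz modulation, quarter-cycle phase shift -> ~1.4 m
print(cw_tof_distance(np.pi / 2, 20e6))
```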
A compact multichannel spectrometer for Thomson scattering
NASA Astrophysics Data System (ADS)
Schoenbeck, N. L.; Schlossberg, D. J.; Dowd, A. S.; Fonck, R. J.; Winz, G. R.
2012-10-01
The availability of high-efficiency volume phase holographic (VPH) gratings and intensified CCD (ICCD) cameras have motivated a simplified, compact spectrometer for Thomson scattering detection. Measurements of Te < 100 eV are achieved by a 2971 l/mm VPH grating and measurements Te > 100 eV by a 2072 l/mm VPH grating. The spectrometer uses a fast-gated (˜2 ns) ICCD camera for detection. A Gen III image intensifier provides ˜45% quantum efficiency in the visible region. The total read noise of the image is reduced by on-chip binning of the CCD to match the 8 spatial channels and the 10 spectral bins on the camera. Three spectrometers provide a minimum of 12 spatial channels and 12 channels for background subtraction.
Automatic Spatio-Temporal Flow Velocity Measurement in Small Rivers Using Thermal Image Sequences
NASA Astrophysics Data System (ADS)
Lin, D.; Eltner, A.; Sardemann, H.; Maas, H.-G.
2018-05-01
An automatic spatio-temporal flow velocity measurement approach, using an uncooled thermal camera, is proposed in this paper. The basic principle of the method is to track visible thermal features at the water surface in thermal camera image sequences. Radiometric and geometric calibrations are first implemented to remove vignetting effects in the thermal imagery and to obtain the interior orientation parameters of the camera. An object-based unsupervised classification approach is then applied to detect the regions of interest for data referencing and thermal feature tracking. Subsequently, GCPs are extracted to orient the river image sequences, and local hot points are identified as tracking features. Afterwards, accurate dense tracking outputs are obtained using the pyramidal Lucas-Kanade method. To validate the accuracy potential of the method, measurements obtained from thermal feature tracking are compared with reference measurements taken by a propeller gauge. Results show a great potential for automatic flow velocity measurement in small rivers using imagery from a thermal camera.
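The pyramidal Lucas-Kanade step the authors mention is available in OpenCV; a minimal sketch, assuming two consecutive 8-bit thermal frames and previously detected feature points (window size and pyramid depth are illustrative, not the paper's settings).

```python
import cv2
import numpy as np

def track_thermal_features(prev_frame: np.ndarray, next_frame: np.ndarray,
                           points: np.ndarray):
    """Track water-surface features between consecutive thermal frames.

    prev_frame, next_frame: 8-bit grayscale thermal images.
    points: detected feature locations, float32 array of shape (N, 1, 2).
    """
    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_frame, next_frame, points, None, **lk_params)
    ok = status.flatten() == 1
    return points[ok], new_points[ok]

# Pixel displacements divided by the frame interval give image-space velocities;
# converting to metric flow velocities uses the orientation derived from the GCPs.
```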
2015-10-08
Regions with exposed water ice are highlighted in blue in this composite image from New Horizons' Ralph instrument, combining visible imagery from the Multispectral Visible Imaging Camera (MVIC) with infrared spectroscopy from the Linear Etalon Imaging Spectral Array (LEISA). The strongest signatures of water ice occur along Virgil Fossa, just west of Elliot crater on the left side of the inset image, and also in Viking Terra near the top of the frame. A major outcrop also occurs in Baré Montes towards the right of the image, along with numerous much smaller outcrops, mostly associated with impact craters and valleys between mountains. The scene is approximately 280 miles (450 kilometers) across. Note that all surface feature names are informal. http://photojournal.jpl.nasa.gov/catalog/PIA19963
Dual-emissive quantum dots for multispectral intraoperative fluorescence imaging.
Chin, Patrick T K; Buckle, Tessa; Aguirre de Miguel, Arantxa; Meskers, Stefan C J; Janssen, René A J; van Leeuwen, Fijs W B
2010-09-01
Fluorescence molecular imaging is rapidly increasing its popularity in image guided surgery applications. To help develop its full surgical potential it remains a challenge to generate dual-emissive imaging agents that allow for combined visible assessment and sensitive camera based imaging. To this end, we now describe multispectral InP/ZnS quantum dots (QDs) that exhibit a bright visible green/yellow exciton emission combined with a long-lived far red defect emission. The intensity of the latter emission was enhanced by X-ray irradiation and allows for: 1) inverted QD density dependent defect emission intensity, showing improved efficacies at lower QD densities, and 2) detection without direct illumination and interference from autofluorescence. Copyright 2010 Elsevier Ltd. All rights reserved.
High resolution imaging of the Venus night side using a Rockwell 128x128 HgCdTe array
NASA Technical Reports Server (NTRS)
Hodapp, K.-W.; Sinton, W.; Ragent, B.; Allen, D.
1989-01-01
The University of Hawaii operates an infrared camera with a 128x128 HgCdTe detector array on loan from JPL's High Resolution Imaging Spectrometer (HIRIS) project. The characteristics of this camera system are discussed. The infrared camera was used to obtain images of the night side of Venus prior to and after inferior conjunction in 1988. The images confirm Allen and Crawford's (1984) discovery of bright features on the dark hemisphere of Venus visible in the H and K bands. Our images of these features are the best obtained to date. We derive a pseudo-rotation period of 6.5 days for these features and 1.74-micron brightness temperatures between 425 K and 480 K. The features are produced by nonuniform absorption in the middle cloud layer (47 to 57 km altitude) of thermal radiation from the lower Venus atmosphere (20 to 30 km altitude). A more detailed analysis of the data is in progress.
International Space Station from Space Shuttle Endeavour
NASA Technical Reports Server (NTRS)
2007-01-01
The crew of the Space Shuttle Endeavour took this spectacular image of the International Space Station during the STS118 mission, August 8-21, 2007. The image was acquired by an astronaut through one of the crew cabin windows, looking back over the length of the Shuttle. This oblique (looking at an angle from vertical, rather than straight down towards the Earth) image was acquired almost one hour after late inspection activities had begun. The sensor head of the Orbiter Boom Sensor System is visible at image top left. The entire Space Station is visible at image bottom center, set against the backdrop of the Ionian Sea approximately 330 kilometers below it. Other visible features of the southeastern Mediterranean region include the toe and heel of Italy's 'boot' at image lower left, and the western coastlines of Albania and Greece, which extend across image center. Farther towards the horizon, the Aegean and Black Seas are also visible. Featured astronaut photograph STS118-E-9469 was acquired by the STS-118 crew on August 19, 2007, with a Kodak 760C digital camera using a 28 mm lens, and is provided by the ISS Crew Earth Observations experiment and Image Science and Analysis Laboratory at Johnson Space Center.
Binocular Multispectral Adaptive Imaging System (BMAIS)
2010-07-26
system for pilots that adaptively integrates shortwave infrared (SWIR), visible, near-IR (NIR), off-head thermal, and computer symbology/imagery into...respective areas. BMAIS is a binocular helmet mounted imaging system that features dual shortwave infrared (SWIR) cameras, embedded image processors and...algorithms and fusion of other sensor suites such as forward looking infrared (FLIR) and other aircraft subsystems. BMAIS is attached to the helmet
Saying Goodbye to 'Bonneville' Crater
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] Annotated Image NASA's Mars Exploration Rover Spirit took this panoramic camera image on sol 86 (March 31, 2004) before driving 36 meters (118 feet) on sol 87 toward its future destination, the Columbia Hills. This is probably the last panoramic camera image that Spirit will take from the high rim of 'Bonneville' crater, and provides an excellent view of the ejecta-covered path the rover has journeyed thus far. The lander can be seen toward the upper right of the frame and is approximately 321 meters (1060 feet) away from Spirit's current location. The large hill on the horizon is Grissom Hill. The Columbia Hills, located to the left, are not visible in this image.
Confocal Retinal Imaging Using a Digital Light Projector with a Near Infrared VCSEL Source
Muller, Matthew S.; Elsner, Ann E.
2018-01-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1″ LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging. PMID:29899586
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8-14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of applications of homeland security, as well as serve the Army in tasks of improved situational awareness (SA) in defense and offensive operations, and as a sensor node in tactical Intelligence Surveillance Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
Earth Observations taken by the Expedition 10 crew
2005-01-17
ISS010-E-13680 (17 January 2005) --- The border of Galveston and Brazoria Counties in Texas is visible in this electronic still camera image, photographed by the Expedition 10 crew onboard the International Space Station. Polly Ranch, near Friendswood, is visible west of Interstate Highway 45 (right side). FM 528 runs horizontally through the middle, and FM 518 runs vertically through frame center, with the two roads intersecting near Friendswood.
Crane, Nicole J; Gillern, Suzanne M; Tajkarimi, Kambiz; Levin, Ira W; Pinto, Peter A; Elster, Eric A
2010-10-01
We report the novel use of 3-charge coupled device camera technology to infer tissue oxygenation. The technique can aid surgeons to reliably differentiate vascular structures and noninvasively assess laparoscopic intraoperative changes in renal tissue perfusion during and after warm ischemia. We analyzed select digital video images from 10 laparoscopic partial nephrectomies for their individual 3-charge coupled device response. We enhanced surgical images by subtracting the red charge coupled device response from the blue response and overlaying the calculated image on the original image. Mean intensity values for regions of interest were compared and used to differentiate arterial and venous vasculature, and ischemic and nonischemic renal parenchyma. The 3-charge coupled device enhanced images clearly delineated the vessels in all cases. Arteries were indicated by an intense red color while veins were shown in blue. Differences in mean region of interest intensity values for arteries and veins were statistically significant (p < 0.0001). Three-charge coupled device analysis of pre-clamp and post-clamp renal images revealed visible, dramatic color enhancement for ischemic vs nonischemic kidneys. Differences in the mean region of interest intensity values were also significant (p < 0.05). We present a simple use of conventional 3-charge coupled device camera technology in a way that may provide urological surgeons with the ability to reliably distinguish vascular structures during hilar dissection, and detect and monitor changes in renal tissue perfusion during and after warm ischemia. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
HandSight: Supporting Everyday Activities through Touch-Vision
2015-10-01
switches between IR and RGB; large, low resolution, and fixed focal length > 1 ft. Raspberry Pi NoIR (https://www.raspberrypi.org/products/pi-noir-camera/): Raspberry Pi NoIR camera with external visible light filters; good image quality, manually adjustable focal length, small, programmable
Kidd, David G; Brethwaite, Andrew
2014-05-01
This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 2010-2013 model year passenger vehicles with a backup camera or a backup camera plus parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately. Copyright © 2014 Elsevier Ltd. All rights reserved.
New Views of a Familiar Beauty
NASA Technical Reports Server (NTRS)
2005-01-01
[figures 1-5 removed for brevity, see original site] This image composite compares the well-known visible-light picture of the glowing Trifid Nebula (left panel) with infrared views from NASA's Spitzer Space Telescope (remaining three panels). The Trifid Nebula is a giant star-forming cloud of gas and dust located 5,400 light-years away in the constellation Sagittarius. The false-color Spitzer images reveal a different side of the Trifid Nebula. Where dark lanes of dust are visible trisecting the nebula in the visible-light picture, bright regions of star-forming activity are seen in the Spitzer pictures. Altogether, Spitzer uncovered 30 massive embryonic stars and 120 smaller newborn stars throughout the Trifid Nebula, in both its dark lanes and luminous clouds. These stars are visible in all the Spitzer images, mainly as yellow or red spots. Embryonic stars are developing stars about to burst into existence. Ten of the 30 massive embryos discovered by Spitzer were found in four dark cores, or stellar 'incubators,' where stars are born. Astronomers using data from the Institute of Radioastronomy millimeter telescope in Spain had previously identified these cores but thought they were not quite ripe for stars. Spitzer's highly sensitive infrared eyes were able to penetrate all four cores to reveal rapidly growing embryos. Astronomers can actually count the individual embryos tucked inside the cores by looking closely at the Spitzer image taken by its infrared array camera (figure 4). This instrument has the highest spatial resolution of Spitzer's imaging cameras. The Spitzer image from the multiband imaging photometer (figure 5), on the other hand, specializes in detecting cooler materials. Its view highlights the relatively cool core material falling onto the Trifid's growing embryos. The middle panel is a combination of Spitzer data from both of these instruments. The embryos are thought to have been triggered by a massive 'type O' star, which can be seen as a white spot at the center of the nebula in all four images. Type O stars are the most massive stars, ending their brief lives in explosive supernovas. The small newborn stars probably arose at the same time as the O star, and from the same original cloud of gas and dust. The Spitzer infrared array camera image is a three-color composite of invisible light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 and 8.0 microns (red). The Spitzer multiband imaging photometer image (figure 3) shows 24-micron emissions. The Spitzer mosaic image combines data from these pictures, showing light of 4.5 microns (blue), 8.0 microns (green) and 24 microns (red). The visible-light image (figure 2) is from the National Optical Astronomy Observatory, Tucson, Ariz.
NASA Astrophysics Data System (ADS)
Do, Trong Hop; Yoo, Myungsik
2018-01-01
This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
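The rolling-shutter compensation can be pictured as a per-row timestamp correction: each sensor row is exposed at a slightly different time, so an LED imaged on a later row appears displaced by the vehicle's motion. A minimal sketch of that idea, with an illustrative line-readout time; the paper's actual compensation model may differ.

```python
def row_capture_time(row: int, frame_start_s: float,
                     line_time_s: float = 18.9e-6) -> float:
    """Capture time of a given sensor row under a rolling shutter.

    line_time_s (row readout period) is illustrative; a real value comes
    from the sensor datasheet.
    """
    return frame_start_s + row * line_time_s

def compensate_led_column(u_px: float, row: int, apparent_speed_px_s: float,
                          ref_row: int, line_time_s: float = 18.9e-6) -> float:
    """Shift an LED's image x-coordinate to the time of a common reference row,
    assuming approximately uniform apparent motion during the readout."""
    dt = (row - ref_row) * line_time_s
    return u_px - apparent_speed_px_s * dt
```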
Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar
Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.
2004-01-01
We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
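The "regularization-based linear algebra" inversion can be sketched as ordinary Tikhonov-regularized least squares, with a hypothetical calibration matrix standing in for the measured spectral-to-spatial code.

```python
import numpy as np

def tikhonov_invert(A: np.ndarray, b: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 for the multispectral signal x."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Hypothetical sizes: 64 sensor pixels encode 16 spectral samples of one scene point
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16))                 # calibration (code) matrix
x_true = rng.random(16)                           # ground-truth spectrum
b = A @ x_true + 0.01 * rng.standard_normal(64)   # noisy sensor readings
x_est = tikhonov_invert(A, b)
print(np.linalg.norm(x_est - x_true))             # reconstruction error
```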
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
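The per-modality flow fields that feed the alignment step can be computed with any dense flow method; a sketch using OpenCV's Farneback implementation (the paper's variational optimization for aligning the two flow fields is not reproduced here).

```python
import cv2
import numpy as np

def dense_flow(frame0_gray: np.ndarray, frame1_gray: np.ndarray) -> np.ndarray:
    """Dense optical flow for one modality: an H x W x 2 array of displacements."""
    return cv2.calcOpticalFlowFarneback(frame0_gray, frame1_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

# flow_rgb = dense_flow(rgb_t0, rgb_t1); flow_ir = dense_flow(ir_t0, ir_t1)
# The two fields are then brought into correspondence by the variational
# alignment the paper describes, yielding cross-modal matches despite the
# lack of shared intensity features.
```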
Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.
Barter, James D; Thompson, Harold R; Richardson, Christine L
2003-03-20
A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
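With four synchronized analyzer channels, per-pixel Stokes parameters follow from simple channel arithmetic. A textbook sketch assuming linear polarizers at 0, 45, and 90 degrees plus a right-circular analyzer; the paper's actual filter set and Mueller-matrix calibration are more elaborate.

```python
import numpy as np

def stokes_from_channels(i0, i45, i90, i_rcp):
    """Per-pixel Stokes vector from four analyzer channels (textbook choice).

    i0, i45, i90: linear polarizer at 0/45/90 degrees; i_rcp: right-circular.
    """
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # horizontal vs. vertical linear component
    s2 = 2.0 * i45 - s0    # +45 vs. -45 linear component
    s3 = 2.0 * i_rcp - s0  # right vs. left circular component
    return np.stack([s0, s1, s2, s3], axis=0)

def degree_of_linear_polarization(s):
    return np.sqrt(s[1]**2 + s[2]**2) / np.maximum(s[0], 1e-9)
```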
Cytology 3D structure formation based on optical microscopy images
NASA Astrophysics Data System (ADS)
Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.
2017-01-01
The article is devoted to optimizing the imaging parameters of biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming the images of virtual preparations is proposed. The optimum number of layers for scanning the object in depth, sufficient for a holistic perception of it, was determined from the results of the experiment.
NASA Technical Reports Server (NTRS)
2005-01-01
During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles). This image is a narrow angle clear-filter image which was processed to enhance the contrast in brightness and sharpness of visible features. Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of this image. This image was obtained when the Cassini spacecraft was above 25 degrees south, 134 degrees west latitude and longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .
2004-09-07
Lonely Mimas swings around Saturn, seeming to gaze down at the planet's splendid rings. The outermost, narrow F ring is visible here and exhibits some clumpy structure near the bottom of the frame. The shadow of Saturn's southern hemisphere stretches almost entirely across the rings. Mimas is 398 kilometers (247 miles) wide. The image was taken with the Cassini spacecraft narrow angle camera on August 15, 2004, at a distance of 8.8 million kilometers (5.5 million miles) from Saturn, through a filter sensitive to visible red light. The image scale is 53 kilometers (33 miles) per pixel. Contrast was slightly enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06471
Candidate cave entrances on Mars
Cushing, Glen E.
2012-01-01
This paper presents newly discovered candidate cave entrances into Martian near-surface lava tubes, volcano-tectonic fracture systems, and pit craters and describes their characteristics and exploration possibilities. These candidates are all collapse features that occur either intermittently along laterally continuous trench-like depressions or in the floors of sheer-walled atypical pit craters. As viewed from orbit, locations of most candidates are visibly consistent with known terrestrial features such as tube-fed lava flows, volcano-tectonic fractures, and pit craters, each of which forms by mechanisms that can produce caves. Although we cannot determine subsurface extents of the Martian features discussed here, some may continue unimpeded for many kilometers if terrestrial examples are indeed analogous. The features presented here were identified in images acquired by the Mars Odyssey's Thermal Emission Imaging System visible-wavelength camera, and by the Mars Reconnaissance Orbiter's Context Camera. Select candidates have since been targeted by the High-Resolution Imaging Science Experiment. Martian caves are promising potential sites for future human habitation and astrobiology investigations; understanding their characteristics is critical for long-term mission planning and for developing the necessary exploration technologies.
Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras.
Brauers, Johannes; Aach, Til
2011-02-01
High-fidelity color image acquisition with a multispectral camera utilizes optical filters to separate the visible electromagnetic spectrum into several passbands. This is often realized with a computer-controlled filter wheel, where each position is equipped with an optical bandpass filter. For each filter wheel position, a grayscale image is acquired and the passbands are finally combined to a multispectral image. However, the different optical properties and non-coplanar alignment of the filters cause image aberrations since the optical path is slightly different for each filter wheel position. As in a normal camera system, the lens causes additional wavelength-dependent image distortions called chromatic aberrations. When transforming the multispectral image with these aberrations into an RGB image, color fringes appear, and the image exhibits a pincushion or barrel distortion. In this paper, we address both the distortions caused by the lens and by the filters. Based on a physical model of the bandpass filters, we show that the aberrations caused by the filters can be modeled by displaced image planes. The lens distortions are modeled by an extended pinhole camera model, which results in a remaining mean calibration error of only 0.07 pixels. Using an absolute calibration target, we then geometrically calibrate each passband and compensate for both lens and filter distortions simultaneously. We show that both types of aberrations can be compensated and present detailed results on the remaining calibration errors.
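Conceptually, each passband needs a lens-distortion correction followed by a warp that undoes the filter's displaced image plane. A sketch using OpenCV's standard pinhole model as a stand-in for the paper's extended model; the homography input is a hypothetical placeholder for the calibrated filter displacement.

```python
import cv2
import numpy as np

def undistort_passband(raw_band: np.ndarray, K: np.ndarray,
                       dist_coeffs: np.ndarray, H_filter: np.ndarray) -> np.ndarray:
    """Compensate lens distortion, then the filter-induced displacement, for one band.

    K, dist_coeffs: pinhole intrinsics and distortion from per-band calibration.
    H_filter: homography modeling the displaced image plane of this filter
              position (a stand-in for the paper's physical filter model).
    """
    lens_corrected = cv2.undistort(raw_band, K, dist_coeffs)
    h, w = raw_band.shape[:2]
    return cv2.warpPerspective(lens_corrected, H_filter, (w, h))

# Applying this to every filter-wheel position maps all passbands into a common
# geometry before they are combined into the multispectral image.
```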
Looking at Art in the IR and UV
NASA Astrophysics Data System (ADS)
Falco, Charles
2013-03-01
Starting with the very earliest cave paintings, art has been created to be viewed by the unaided eye and, until very recently, it wasn't even possible to see it at wavelengths outside the visible spectrum. However, it is now possible to view paintings, sculptures, manuscripts, and other cultural artifacts at wavelengths from the x-ray, through the ultraviolet (UV), to well into the infrared (IR). Further, thanks to recent advances in technology, this is becoming possible with hand-held instruments that can be used in locations that were previously inaccessible to anything but laboratory-scale image capture equipment. But what can be learned from such "non-visible" images? In this talk I will briefly describe the characteristics of high resolution UV and IR imaging systems I developed for this purpose by modifying high resolution digital cameras. The sensitivity of the IR camera makes it possible to obtain images of art in situ with standard museum lighting, resolving features finer than 0.35 mm on a 1.0x0.67 m painting. I also have used both it and the UV camera in remote locations with battery-powered illumination sources. I will illustrate their capabilities with images of various examples of Western, Asian, and Islamic art in museums on three continents, describing how these images have revealed important new information about the working practices of artists as famous as Jan van Eyck. I also will describe what will be possible for this type of work with new capabilities that could be developed within the next few years. This work is based on a collaboration with David Hockney, and benefitted from image analysis research supported by ARO grant W911NF-06-1-0359-P00001.
Key, Douglas J
2014-07-01
This study incorporates concurrent thermal camera imaging as a means both of safely extending the length of each treatment session within skin surface temperature tolerances and of demonstrating not only the homogeneous nature of skin surface temperature heating but also the distribution of that heating pattern as a reflection of the localization of subcutaneous fat. Five subjects were selected because of a desire to reduce abdomen and flank fullness. Full treatment field thermal camera imaging was captured at 15 minute intervals, specifically at 15, 30, and 45 minutes into active treatment, with the purpose of monitoring skin temperature and avoiding any patterns of skin temperature excess. Peak areas of heating corresponded anatomically to the patients' areas of greatest fat excess, i.e., visible "pinchable" fat. Preliminary observations of high-resolution thermal camera imaging used concurrently with focused field RF therapy show peak skin heating patterns overlying the areas of greatest fat excess.
The Use of Gamma-Ray Imaging to Improve Portal Monitor Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziock, Klaus-Peter; Collins, Jeff; Fabris, Lorenzo
2008-01-01
We have constructed a prototype, rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. Our Roadside Tracker uses automated target acquisition and tracking (TAT) software to identify and track vehicles in visible light images. The field of view of the visible camera overlaps with and is calibrated to that of a one-dimensional gamma-ray imager. The TAT code passes information on when vehicles enter and exit the system field of view and when they cross gamma-ray pixel boundaries. Based on this information, the gamma-ray imager "harvests" the gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. In this fashion we are able to generate vehicle-specific radiation signatures and avoid source confusion problems that plague nonimaging approaches to the same problem.
2004-04-02
As Cassini closes in on Saturn, its view is growing sharper with time and now reveals new atmospheric features in the planet's southern hemisphere. Atmospheric features, such as two small, faint dark spots, visible in the planet's southern hemisphere, will become clearer in the coming months. The spots are located at 38 degrees south latitude. The spacecraft's narrow angle camera took several exposures on March 8, 2004, which have been combined to create this natural color image. The image contrast and colors have been slightly enhanced to aid visibility. Moons visible in the lower half of this image are: Mimas (398 kilometers, or 247 miles across) at left, just below the rings; Dione (1,118 kilometers, or 695 miles across) at left, below Mimas; and Enceladus (499 kilometers, 310 miles across) at right. The moons had their brightness enhanced to aid visibility. The spacecraft was then 56.4 million kilometers (35 million miles) from Saturn, or slightly more than one-third of the distance from Earth to the Sun. The image scale is approximately 338 kilometers (210 miles) per pixel. The planet is 23 percent larger in this image than it appeared in the preceding color image, taken four weeks earlier. http://photojournal.jpl.nasa.gov/catalog/PIA05385
NASA Technical Reports Server (NTRS)
2005-01-01
Saturn poses with Tethys in this Cassini view. The C ring casts thin, string-like shadows on the northern hemisphere. Above that lurks the shadow of the much denser B ring. Cloud bands in the atmosphere are subtly visible in the south. Tethys is 1,071 kilometers (665 miles) across. Cassini will perform a close flyby of Tethys on September 24, 2005. The image was taken on June 10, 2005, in visible green light with the Cassini spacecraft wide-angle camera at a distance of approximately 1.4 million kilometers (900,000 miles) from Saturn. The image scale is 81 kilometers (50 miles) per pixel. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .
Neptune Through a Clear Filter
1999-07-25
On July 23, 1989, NASA's Voyager 2 spacecraft took this picture of Neptune through a clear filter on its narrow-angle camera. The image on the right has a latitude and longitude grid added for reference. Neptune's Great Dark Spot is visible on the left.
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
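Burt's pyramid framework is the classic instance of this kind of fusion; a compact Laplacian-pyramid sketch with a choose-max selection rule, assuming registered grayscale inputs of equal size (the paper's salience and selection rules are richer than this).

```python
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int = 4):
    """Build a Laplacian pyramid: band-pass levels plus the final low-pass."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])
    return lp

def fuse_pyramids(lp_a, lp_b):
    """Per pixel and level, keep the coefficient with the larger magnitude."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(lp_a[:-1], lp_b[:-1])]
    fused.append(0.5 * (lp_a[-1] + lp_b[-1]))   # average the coarsest level
    return fused

def reconstruct(lp):
    img = lp[-1]
    for level in reversed(lp[:-1]):
        img = cv2.pyrUp(img, dstsize=(level.shape[1], level.shape[0])) + level
    return np.clip(img, 0, 255).astype(np.uint8)

# fused = reconstruct(fuse_pyramids(laplacian_pyramid(ir), laplacian_pyramid(vis)))
```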
2004-03-12
Scientists have only a rough idea of the lifetime of clumps in Saturn's rings - a mystery that Cassini may help answer. The latest images taken by the Cassini-Huygens spacecraft show clumps seemingly embedded within Saturn's narrow, outermost F ring. The narrow angle camera took the images on Feb. 23, 2004, from a distance of 62.9 million kilometers (39 million miles). The two images taken nearly two hours apart show these clumps as they revolve about the planet. The small dot at center right in the second image is one of Saturn's small moons, Janus, which is 181 kilometers (112 miles) across. Like all particles in Saturn's ring system, these clump features orbit the planet in the same direction in which the planet rotates. This direction is clockwise as seen from Cassini's southern vantage point below the ring plane. Two clumps in particular, one of them extended, are visible in the upper part of the F ring in the image on the left, and in the lower part of the ring in the image on the right. Other knot-like irregularities in the ring's brightness are visible in the image on the right. The core of the F ring is about 50 kilometers (31 miles) wide, and from Cassini's current distance, is not fully visible. The imaging team enhanced the contrast of the images and magnified them to aid visibility of the F ring and the clump features. The camera took the images with the green filter, which is centered at 568 nanometers. The image scale is 377 kilometers (234 miles) per pixel. NASA's two Voyager spacecraft that flew past Saturn in 1980 and 1981 were the first to see these clumps. The Voyager data suggest that the clumps change very little and can be tracked as they orbit for 30 days or more. No clump survived from the time of the first Voyager flyby to the Voyager 2 flyby nine months later. Scientists are not certain of the cause of these features. Among the theories proposed are meteoroid bombardments and inter-particle collisions in the F ring. http://photojournal.jpl.nasa.gov/catalog/PIA05382
Sellers and Fossum on the end of the OBSS during EVA1 on STS-121 / Expedition 13 joint operations
2006-07-08
STS121-323-011 (8 July 2006) --- Astronauts Piers J. Sellers and Michael E. Fossum, STS-121 mission specialists, work in tandem on Space Shuttle Discovery's Remote Manipulator System/Orbiter Boom Sensor System (RMS/OBSS) during the mission's first scheduled session of extravehicular activity (EVA). Also visible on the OBSS are the Laser Dynamic Range Imager (LDRI), Intensified Television Camera (ITVC) and Laser Camera System (LCS).
Enhanced Early View of Ceres from Dawn
2014-12-05
As the Dawn spacecraft flies through space toward the dwarf planet Ceres, the unexplored world appears to its camera as a bright light in the distance, full of possibility for scientific discovery. This view was acquired as part of a final calibration of the science camera before Dawn's arrival at Ceres. To accomplish this, the camera needed to take pictures of a target that appears just a few pixels across. On Dec. 1, 2014, Ceres was about nine pixels in diameter, nearly perfect for this calibration. The images provide data on very subtle optical properties of the camera that scientists will use when they analyze and interpret the details of some of the pictures returned from orbit. Ceres is the bright spot in the center of the image. Because the dwarf planet is much brighter than the stars in the background, the camera team selected a long exposure time to make the stars visible. The long exposure made Ceres appear overexposed, and exaggerated its size; this was corrected by superimposing a shorter exposure of the dwarf planet in the center of the image. A cropped, magnified view of Ceres appears in the inset image at lower left. The image was taken on Dec. 1, 2014 with the Dawn spacecraft's framing camera, using a clear spectral filter. Dawn was about 740,000 miles (1.2 million kilometers) from Ceres at the time. Ceres is 590 miles (950 kilometers) across and was discovered in 1801. http://photojournal.jpl.nasa.gov/catalog/PIA19050
The advanced linked extended reconnaissance and targeting technology demonstration project
NASA Astrophysics Data System (ADS)
Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle
2007-06-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator to recognize poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine generated information display requirements are presented with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.
Garcia, Jair E; Greentree, Andrew D; Shrestha, Mani; Dorin, Alan; Dyer, Adrian G
2014-01-01
The study of the signal-receiver relationship between flowering plants and pollinators requires a capacity to accurately map both the spectral and spatial components of a signal in relation to the perceptual abilities of potential pollinators. Spectrophotometers can typically recover high resolution spectral data, but the spatial component is difficult to record simultaneously. A technique allowing for an accurate measurement of the spatial component in addition to the spectral factor of the signal is highly desirable. Consumer-level digital cameras potentially provide access to both colour and spatial information, but they are constrained by their non-linear response. We present a robust methodology for recovering linear values from two different camera models: one sensitive to ultraviolet (UV) radiation and another to visible wavelengths. We test responses by imaging eight different plant species varying in shape, size and in the amount of energy reflected across the UV and visible regions of the spectrum, and compare the recovery of spectral data to spectrophotometer measurements. There is often a good agreement of spectral data, although when the pattern on a flower surface is complex a spectrophotometer may underestimate the variability of the signal as would be viewed by an animal visual system. Digital imaging presents a significant new opportunity to reliably map flower colours to understand the complexity of these signals as perceived by potential pollinators. Compared to spectrophotometer measurements, digital images can better represent the spatio-chromatic signal variability that would likely be perceived by the visual system of an animal, and should expand the possibilities for data collection in complex, natural conditions. However, and in spite of its advantages, the accuracy of the spectral information recovered from camera responses is subject to variations in the uncertainty levels, with larger uncertainties associated with low radiance levels.
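A common way to recover linear values from a consumer camera is to fit an inverse power-law (gamma) response using gray-standard patches of known reflectance and then invert it. A sketch under that assumption; the patch values below are hypothetical, and the paper's per-channel procedure is more thorough.

```python
import numpy as np

# Hypothetical gray-standard patches: known linear reflectance vs. mean response
reflectance = np.array([0.03, 0.09, 0.19, 0.36, 0.59, 0.90])
response = np.array([30.0, 58.0, 88.0, 124.0, 165.0, 210.0]) / 255.0

# Fit response = reflectance ** (1/gamma) as a straight line in log-log space
slope, _intercept = np.polyfit(np.log(reflectance), np.log(response), 1)
gamma = 1.0 / slope

def linearize(pixel_value_0_1: np.ndarray) -> np.ndarray:
    """Invert the fitted power-law response to recover linear sensor values."""
    return np.power(pixel_value_0_1, gamma)
```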
UTOFIA: an underwater time-of-flight image acquisition system
NASA Astrophysics Data System (ADS)
Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris
2017-10-01
In this article the development of a newly designed Time-of-Flight (ToF) image sensor for underwater applications is described. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition), funded by the EU within the Horizon 2020 framework. This project aims to develop a camera based on range gating that extends the visible range by a factor of 2 to 3 compared to conventional cameras and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor, a pixel structure that best suits the requirements has been selected. An extensive underwater characterization demonstrates the capability of distance measurement in turbid environments.
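Range gating opens the sensor only during a delayed time window, so the gate delay and width select a distance slice while rejecting near-field backscatter. A sketch of that mapping, using the reduced speed of light in water; the numbers are illustrative.

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
N_WATER = 1.33            # approximate refractive index of water

def gate_to_range(delay_s: float, width_s: float):
    """Distance window imaged by a gate opened delay_s after the laser pulse."""
    c_water = C_VACUUM / N_WATER
    near = c_water * delay_s / 2.0             # divide by two for the round trip
    far = c_water * (delay_s + width_s) / 2.0
    return near, far

# A 10 ns gate opened 30 ns after the pulse images roughly the 3.4-4.5 m slice.
print(gate_to_range(30e-9, 10e-9))
```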
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
X-ray ‘ghost images’ could cut radiation doses
NASA Astrophysics Data System (ADS)
Chen, Sophia
2018-03-01
On its own, a single-pixel camera captures pictures that are pretty dull: squares that are completely black, completely white, or some shade of gray in between. All it does, after all, is detect brightness. Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.
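The ghost-imaging reconstruction itself is a short computation: correlate the single-pixel (bucket) readings with the known illumination patterns. A minimal simulation sketch with random patterns and a synthetic object.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, N = 32, 32, 4000                    # image size and number of patterns

obj = np.zeros((H, W))
obj[8:24, 12:20] = 1.0                    # hidden test object (a bright rectangle)

patterns = rng.random((N, H, W))          # known illumination patterns
# Bucket detector: total transmitted brightness for each pattern
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Correlation reconstruction: G(x, y) = <B * P(x, y)> - <B><P(x, y)>
recon = (np.tensordot(bucket, patterns, axes=(0, 0)) / N
         - bucket.mean() * patterns.mean(axis=0))
```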
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
NASA Astrophysics Data System (ADS)
Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2017-10-01
An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room temperature operation, immunity to high power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted by the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted by the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, imaging systems based on scanning prohibit real-time three-dimensional operation; this is easily and economically solved with a GDD array, which enables information on distance and magnitude to be acquired from all the GDD pixels simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation and measuring the time of flight (TOF).
2015-10-15
NASA's Cassini spacecraft spied this tight trio of craters as it approached Saturn's icy moon Enceladus for a close flyby on Oct. 14, 2015. The craters, located at high northern latitudes, are sliced through by thin fractures -- part of a network of similar cracks that wrap around the snow-white moon. The image was taken with the Cassini spacecraft narrow-angle camera on Oct. 14, 2015, at a distance of approximately 6,000 miles (10,000 kilometers) from Enceladus, using a spectral filter which preferentially admits wavelengths of ultraviolet light centered at 338 nanometers. Image scale is 197 feet (60 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20011
HALO: a reconfigurable image enhancement and multisensor fusion system
NASA Astrophysics Data System (ADS)
Wu, F.; Hickman, D. L.; Parker, Steve J.
2014-06-01
Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
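The ICE function is not specified in detail here; as a representative stand-in for this kind of local contrast enhancement, a CLAHE step can be sketched in a few lines (this is not HALO's actual algorithm).

```python
import cv2

def enhance_contrast(gray_frame):
    """Local contrast enhancement with CLAHE, a representative ICE-style step.

    gray_frame: 8-bit single-channel image, as from an IR or monochrome sensor.
    """
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_frame)
```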
Multiple-frame IR photo-recorder KIT-3M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E; Wilkins, P; Nebeker, N
2006-05-15
This paper reports the experimental results of a high-speed multi-frame infrared camera developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared radiation photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the 1-10 micrometer spectral range into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 C to 2000 C with an exposure time of 1-20 μs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR laser radiation.
Implementation of Nearest Neighbor using HSV to Identify Skin Disease
NASA Astrophysics Data System (ADS)
Gerhana, Y. A.; Zulfikar, W. B.; Ramdani, A. H.; Ramdhani, M. A.
2018-01-01
Today, Android is one of the most widely used operating systems in the world. Most Android devices have a camera that can capture an image, and this feature can be used to help identify skin disease. Skin disease is a health problem caused by bacteria, fungi, and viruses, and its symptoms are usually visible. In this work, the symptoms captured in an image are represented by the HSV values of every pixel. The HSV values are extracted and used to compute Euclidean distances; the nearest neighbor algorithm then compares the test image against the training images and assigns the class label, i.e., the type of skin disease, of the closest match. The testing results show that 166 of 200 images, or about 83%, were classified accurately. Factors that influence the performance of the classification model include the number of training images and the quality of the Android device's camera.
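A minimal sketch of the described pipeline: mean HSV features compared by Euclidean distance under a 1-nearest-neighbor rule. The feature choice (per-image HSV means) is a simplifying assumption, since the abstract does not spell out how per-pixel values are aggregated.

```python
import cv2
import numpy as np

def hsv_feature(image_bgr):
    # Mean H, S, V over the image: a simple per-image HSV descriptor.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return hsv.reshape(-1, 3).mean(axis=0)

def classify(test_feat, train_feats, train_labels):
    # 1-NN by Euclidean distance between HSV feature vectors.
    d = np.linalg.norm(np.asarray(train_feats) - test_feat, axis=1)
    return train_labels[int(np.argmin(d))]
```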
Floodwaters Renew Zambia's Kafue Wetland
NASA Technical Reports Server (NTRS)
2004-01-01
Not all floods are unwanted. Heavy rainfall in southern Africa between December 2003 and April 2004 provided central Zambia with floodwaters needed to support the diverse uses of water within the Kafue Flats area. The Kafue Flats are home to about one million people and provide a rich inland fishery, habitat for an array of unique wildlife, and the means for hydroelectricity production. The Flats falls between two dams: Upstream to the west (not visible here) is the Izhi-tezhi, and downstream (middle right of the images) is the Kafue Gorge dam. Since the construction of these dams, the flooded area has been reduced and the timing and intensity of the inundation has changed. During June 2004 an agreement was made with the hydroelectricity company to restore water releases from the dams according to a more natural flooding regime. These images from NASA's Multi-angle Imaging SpectroRadiometer (MISR) illustrate surface changes to the wetlands and other surfaces in central Zambia resulting from an unusually lengthy wet season. The Kafue Flats appear relatively dry on July 19, 2003 (upper images), with the Kafue River visible as a slender dark line that snakes from east to west on its way to join the Zambezi (visible in the lower right-hand corner). On July 21, 2004 (lower images), well into the dry season, much of the 6,500-square kilometer area of the Kafue Flats remains inundated. To the east of the Kafue Flats is Lusaka, the Zambian capital, visible as a pale area in the middle right of the picture, north of the river. In the upper portions of these images is the prominent roundish shape of the Lukanga Swamp, another important wetland.
The images along the left are natural-color views from MISR's nadir camera, and the images along the right are angular composites in which red band data from MISR's 46° forward, nadir, and 46° backward viewing cameras are displayed as red, green, and blue, respectively. In order to preserve brightness variations among the various cameras, the data from each camera were processed identically. Here, color changes indicate surface texture, and are influenced by terrain, vegetation structure, soil type, and soil moisture content. Wet surfaces or areas with standing water appear blue in this display because sun glitter makes smooth, wet surfaces look brighter at the backward camera's view angle. Mostly the landscape appears somewhat purple, indicating that most of the surfaces scatter sunlight in both backward and forward directions. Areas that appear with a slight greenish hue can indicate sparse vegetation, since the nadir camera is more likely to sight the gaps between the trees or shrubs, and since vegetation is darker (in the red band) than the underlying soil surface. Areas which preferentially exhibit a red or pink hue correspond with wetland vegetation. The plateau of the Kafue National Park, to the west of Lukanga Swamp, appears brighter in 2004 compared with 2003, which indicates weaker absorption in the red band. Overall, the 2004 image exhibits a subtle blue hue (a preference for forward scattering) compared with 2003, which indicates overall surface changes that may be a result of enhanced surface wetness. The Multiangle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 19072 and 24421. The panels cover an area of 235 kilometers x 239 kilometers, and utilize data from blocks 100 to 103 within World Reference System-2 path 172. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Fifty Years of Mars Imaging: from Mariner 4 to HiRISE
2017-11-20
This image from NASA's Mars Reconnaissance Orbiter (MRO) shows Mars' surface in detail. Mars has captured the imagination of astronomers for thousands of years, but it wasn't until the last half-century that we were able to capture images of its surface in detail. This particular site on Mars was first imaged in 1965 by the Mariner 4 spacecraft during the first successful flyby mission to Mars. From an altitude of around 10,000 kilometers, this image (the ninth frame taken) achieved a resolution of approximately 1.25 kilometers per pixel. Since then, this location has been observed by six other visible cameras producing images with varying resolutions and sizes. This includes HiRISE (highlighted in yellow), which has the highest resolution and the smallest "footprint." This compilation, spanning Mariner 4 to HiRISE, shows each image at full resolution. Beginning with Viking 1 and ending with our HiRISE image, this animation documents the historic imaging of a particular site on another world. In 1976, the Viking 1 orbiter began imaging Mars in unprecedented detail, and by 1980 had successfully mosaicked the planet at approximately 230 meters per pixel. In 1999, the Mars Orbiter Camera onboard Mars Global Surveyor (1996) also imaged this site with its wide-angle lens, at around 236 meters per pixel. This was followed by the Thermal Emission Imaging System on Mars Odyssey (2001), whose visible camera produced the image we see here at 17 meters per pixel. In 2012, the High-Resolution Stereo Camera on the Mars Express orbiter (2003) captured this image of the surface at 25 meters per pixel. In 2010, the Context Camera on the Mars Reconnaissance Orbiter (2005) imaged this site at about 5 meters per pixel. Finally, in 2017, HiRISE acquired the highest-resolution image of this location to date, at 50 centimeters per pixel. When seen at this unprecedented scale, we can discern a crater floor strewn with small rocky deposits, boulders several meters across, and wind-blown deposits in the floors of small craters and depressions. This compilation of Mars images spanning over 50 years gives us a visual appreciation of the evolution of orbital Mars imaging over a single site. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22115
Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija
2017-01-01
After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole-body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions where iodine-avid tissue is located; sometimes it is practically impossible to perform precise topographic localization of such regions. To address this problem, we have developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneous with routine scintigraphic whole-body acquisition, including an algorithm for fusing the images from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS modality, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a C905 web camera (Logitech, USA) with Carl Zeiss optics, a native resolution of 1600×1200 pixels, a 34° field of view, and 30 g weight, with the autofocus option turned off and auto white balance turned on. The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a plexiglas adapter. Our Vision-Fusion software for image acquisition and coregistration was developed in the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) with two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web camera; the Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts the web-camera image acquisition before image acquisition starts on the GC and stops it when the GC completes its acquisition. The web camera runs in continuous acquisition mode with a frame rate f that depends on the speed v of the patient bed (f = v/Δcm, where Δcm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, Δcm is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient bed is moving are processed; movement of the bed is checked using the cross-correlation of two successive images. After each image capture, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image; stacking of narrow central ROIs introduces negligible distortion. The first step in fusing the scintigram and the optical image was determination of the spatial transformation between them. We performed an experiment with two markers (point sources of 99mTc pertechnetate, 1 MBq) visible in both images (WBS and optical) to find the coordinate transformation between them; the distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (eight women and two men, average age 50.10±12.26 years). Five patients received 5.55 GBq, three 3.70 GBq, and two 1.85 GBq.
Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of the radioiodine therapy. Based on our first results during clinical testing, we conclude that our system can improve the diagnostic ability of whole-body scintigraphy to detect thyroid remnant tissue in patients with DTC after radioiodine therapy.
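The ROI-stacking step described above can be sketched in a few lines: take the central strip of Δp rows from each web-camera frame and concatenate the strips along the direction of bed travel. This is a minimal illustration, not the Vision-Fusion LabVIEW code (which also checks bed movement by cross-correlation before accepting a frame).

```python
import numpy as np

def stack_central_rois(frames, strip_px=15):
    # Build the whole-body optical image from the central strip
    # (full width x strip_px rows, the default 15-pixel displacement)
    # of each frame captured while the patient bed moves.
    h = frames[0].shape[0]
    top = (h - strip_px) // 2
    return np.concatenate([f[top:top + strip_px] for f in frames], axis=0)
```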
Development of Flight Slit-Jaw Optics for Chromospheric Lyman-Alpha SpectroPolarimeter
NASA Technical Reports Server (NTRS)
Kubo, Masahito; Suematsu, Yoshinori; Kano, Ryohei; Bando, Takamasa; Hara, Hirohisa; Narukage, Noriyuki; Katsukawa, Yukio; Ishikawa, Ryoko; Ishikawa, Shin-nosuke; Kobiki, Toshihiko;
2015-01-01
In the CLASP sounding rocket experiment, a mirror-finished slit plate is placed near the focal point of the telescope. The light reflected by the mirror surface surrounding the slit is re-imaged by the slit-jaw optical system to form a secondary Lyman-alpha image. This image is used not only in real time during the rocket flight to select the pointing direction, but also as scientific data showing the spatial structure of the Lyman-alpha emission line intensity distribution of the solar chromosphere around the region observed by the spectropolarimeter. The slit-jaw optical system is a 1x-magnification system consisting of a unit of two off-axis mirrors (a parabolic mirror and a folding mirror), a Lyman-alpha transmission filter, and a camera. The camera was supplied by the United States; all other fabrication and testing were carried out on the Japanese side. Because the slit-jaw optics are difficult to access within the structure and must be installed in a space with little clearance, the optical elements that affect optical performance and require fine adjustment are consolidated into the mirror unit. On the other hand, because of the alignment of the solar sensor at the US launch site, the Lyman-alpha transmission filter holder, including the filter, must be removable, and is therefore a part separate from the mirror unit. To keep the structure simple, stray-light countermeasures are concentrated around the Lyman-alpha transmission filter. To overcome the difficulty of performing optical alignment at the Lyman-alpha wavelength, which is absorbed by the atmosphere, the following four steps were planned to reduce the alignment time. 1: measure in advance the refractive index of the Lyman-alpha transmission filter at the Lyman-alpha wavelength (121.567 nm), and prepare a visible-light filter having the same optical path length at a visible wavelength (630 nm). 2: before mounting the mirror unit on the CLASP structure, place a dummy slit and camera at prescribed positions in a dedicated frame and complete the internal alignment of the mirror unit. 3: attach the mirror unit and the visible-light filter to the CLASP structure and adjust the position of the flight camera so that the image is in focus in visible light. 4: replace the visible-light filter with the Lyman-alpha transmission filter and confirm, at the Lyman-alpha wavelength (under vacuum), that the required optical performance is achieved. Currently, steps 1 through 3 have been completed, and it has been confirmed in visible light that the optical performance satisfies the required values with sufficient margin. In addition, by feeding sunlight through the CLASP telescope into the slit-jaw optical system, it was confirmed that there is no vignetting within the field of view and that the stray-light rejection meets the requirement.
Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera
NASA Astrophysics Data System (ADS)
Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert
2018-03-01
Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance: the Tetracam Miniature Multiple Camera Array (MiniMCA), the Micasense RedEdge, and the Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are used to prove its reliability and applicability. The results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
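The RABBIT pipeline itself is more elaborate (a modified projective transformation plus a robust adaptive correction), but its core idea, mapping each slave band into the master band's geometry with a projective transform, can be sketched as below; the control-point matching is assumed to be done elsewhere, and this is a simplification rather than the authors' method.

```python
import cv2
import numpy as np

def register_band(slave_band, slave_pts, master_pts, out_size):
    # Fit a projective transform (homography) from matched control
    # points, then warp the slave band into the master geometry.
    # out_size is (width, height) of the master band.
    H, _ = cv2.findHomography(np.float32(slave_pts),
                              np.float32(master_pts), cv2.RANSAC)
    return cv2.warpPerspective(slave_band, H, out_size)
```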
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY.
Cukierski, William J; Qi, Xin; Foran, David J
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral "cube" is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l'éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears.
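The spectrum-to-color projection at the heart of the metamer argument can be made concrete: integrating a sampled spectrum against three color matching functions collapses a high-dimensional spectrum to a 3-vector, so many distinct spectra map to one color. The Gaussian curves below are illustrative stand-ins, not the tabulated CIE 1931 functions.

```python
import numpy as np

lam = np.arange(380, 781, 5, dtype=float)   # wavelength grid, nm

def g(mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Gaussian stand-ins for the CIE 1931 colour matching functions.
xbar, ybar, zbar = g(600, 40), g(555, 45), g(445, 25)

def to_xyz(spectrum):
    # 81-sample spectrum -> (X, Y, Z): a many-to-one projection,
    # which is exactly why metamers exist.
    dlam = lam[1] - lam[0]
    return np.array([(spectrum * cmf).sum() * dlam
                     for cmf in (xbar, ybar, zbar)])
```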
Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.
Kopp, O; Markert, S; Tornow, R P
2002-01-01
To develop and test a procedure to measure and compare the light sensitivity, linearity, and step response of electronic cameras. The pixel value (PV) of digitized images as a function of light intensity (I) was measured. The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter, and the resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the procedure of integration and readout.
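Both figures of merit reduce to a straight-line fit of pixel value against intensity: the slope estimates sensitivity and the correlation coefficient estimates linearity. A minimal sketch, assuming matched arrays of intensity and mean pixel-value measurements:

```python
import numpy as np

def sensitivity_linearity(intensity, pixel_value):
    # Fit PV(I) = a*I + b: slope a is the sensitivity measure,
    # correlation coefficient r the linearity measure.
    a, b = np.polyfit(intensity, pixel_value, 1)
    r = np.corrcoef(intensity, pixel_value)[0, 1]
    return a, r
```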
1986-01-24
Range: 236,000 km (147,000 mi.) Resolution: 33 km (20 mi.) P-29525B/W This Voyager 2 image reveals a continuous distribution of small particles throughout the Uranus ring system. This unique geometry, the highest phase angle at which Voyager imaged the rings, allows us to see lanes of fine dust particles not visible from other viewing angles. All the previously known rings are visible; however, some of the brightest features in the image are bright dust lanes not previously seen. The combination of this unique geometry and a long, 96-second exposure allowed this spectacular observation, acquired through the clear filter of Voyager 2's wide-angle camera. The long exposure produced a noticeable, non-uniform smear, as well as streaks due to trailed stars.
Reconstructed images of 4 Vesta.
NASA Astrophysics Data System (ADS)
Drummond, J.; Eckart, A.; Hege, E. K.
The first glimpses of an asteroid's surface have been obtained from images of 4 Vesta reconstructed from speckle interferometric observations made with Harvard's PAPA camera coupled to Steward Observatory's 2.3 m telescope. Vesta is found to have a "normal" triaxial ellipsoid shape of 566(±15)×532(±15)×466(±15) km. Its rotational pole lies within 4° of ecliptic longitude 327°, latitude +55°. Reconstructed images obtained with the power spectra and Knox-Thompson cross-spectra reveal dark and bright patterns, reminiscent of the Moon. Three bright and three dark areas are visible; combined with an inferred seventh (bright) region not visible during the rotational phases covered during the authors' run, they lead to lightcurves that match Vesta's lightcurve history.
Stunning Image of Rosetta above Mars taken by the Philae Lander Camera
2007-02-05
Stunning image taken by the CIVA imaging instrument on Rosetta's Philae lander just 4 minutes before closest approach, at a distance of some 1000 km from Mars, on Feb. 25, 2007. A portion of the spacecraft and one of its solar arrays are visible in fine detail. Beneath, the Mawrth Vallis region is visible on the planet's disk. Mawrth Vallis is particularly relevant as it is one of the areas on the Martian surface where the OMEGA instrument on board ESA's Mars Express detected the presence of hydrated clay minerals -- a sign that water may have flowed abundantly in that region in the very early history of Mars. Id 217487 http://photojournal.jpl.nasa.gov/catalog/PIA18154
C-RED one: ultra-high speed wavefront sensing in the infrared made possible
NASA Astrophysics Data System (ADS)
Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian
2016-07-01
First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. We will show the performance of the camera and its main features, and compare them to other high-performance wavefront sensing cameras such as OCAM2, in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 673944.
Detection of unmanned aerial vehicles using a visible camera system.
Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C
2017-01-20
Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
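A stripped-down stand-in for the middle of the detection chain, difference-image motion features followed by blob analysis, is sketched below (horizon finding and coherence analysis are omitted, and the thresholds are illustrative rather than the paper's).

```python
import cv2

def moving_blobs(prev_gray, cur_gray, thresh=25, min_area=4):
    # Motion features from the difference image, then blob analysis
    # via connected components; returns candidate blob centroids.
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```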
Opto-mechanical system design of test system for near-infrared and visible target
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Zhu, Guodong; Wang, Yuchao
2014-12-01
Guidance precision is a key index of guided weapon performance. The factors affecting guidance precision include information processing precision, control system accuracy, laser irradiation accuracy, and so on; laser irradiation precision is an important factor. Aimed at the demand for precision testing of laser irradiators, this paper develops a laser precision test system. The system consists of a modified Cassegrain system, a wide-range CCD camera, a tracking turntable, and an industrial PC, and images visible-light and near-infrared targets at the same time with a near-IR camera. Analysis of the design results shows that, for a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.
Development of high energy micro-tomography system at SPring-8
NASA Astrophysics Data System (ADS)
Uesugi, Kentaro; Hoshino, Masato
2017-09-01
A high-energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available energy range is between 20 keV and 113 keV with a Si(511) double crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of a visible-light conversion system and an sCMOS camera. The effective pixel size is variable in discrete steps between 6.5 μm/pixel and 25.5 μm/pixel by changing a tandem lens. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, an alkaline battery and a nodule from Bolivia were imaged. Details of the structure of the battery and a female trilobite mold were successfully imaged without breaking the fossil.
2016-10-18
This perspective view of Charon's informally named "Serenity Chasm" consists of topography generated from stereo reconstruction of images taken by New Horizons' Long Range Reconnaissance Imager (LORRI) and Multispectral Visible Imaging Camera (MVIC), supplemented by a "shape-from-shading" algorithm. The topography is then overlain with the PIA21128 image mosaic and the perspective view is rendered. The MVIC image was taken from a distance of 45,458 miles (73,159 kilometers) while the LORRI picture was taken from 19,511 miles (31,401 kilometers) away, both on July 14, 2015. http://photojournal.jpl.nasa.gov/catalog/PIA21129
Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission
NASA Astrophysics Data System (ADS)
Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.
2018-02-01
NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
Hubble Provides Infrared View of Jupiter's Moon, Ring, and Clouds
NASA Technical Reports Server (NTRS)
1997-01-01
Probing Jupiter's atmosphere for the first time, the Hubble Space Telescope's new Near Infrared Camera and Multi-Object Spectrometer (NICMOS) provides a sharp glimpse of the planet's ring, moon, and high-altitude clouds.
The presence of methane in Jupiter's hydrogen- and helium-rich atmosphere has allowed NICMOS to plumb Jupiter's atmosphere, revealing bands of high-altitude clouds. Visible light observations cannot provide a clear view of these high clouds because the underlying clouds reflect so much visible light that the higher-level clouds are indistinguishable from the lower layer. The methane gas between the main cloud deck and the high clouds absorbs the reflected infrared light, allowing those clouds that are above most of the atmosphere to appear bright. Scientists will use NICMOS to study the high-altitude portion of Jupiter's atmosphere and visible-light observations to study clouds at lower levels. They will then analyze those images together to compile a clearer picture of the planet's weather. Clouds at different levels tell unique stories. On Earth, for example, ice crystal (cirrus) clouds are found at high altitudes while water (cumulus) clouds are at lower levels. Besides showing details of the planet's high-altitude clouds, NICMOS also provides a clear view of the ring and the moon Metis. Jupiter's ring plane, seen nearly edge-on, is visible as a faint line on the upper right portion of the NICMOS image. Metis can be seen in the ring plane (the bright circle on the ring's outer edge). The moon is 25 miles wide and about 80,000 miles from Jupiter. Because of the near-infrared camera's narrow field of view, this image is a mosaic constructed from three individual images taken Sept. 17, 1997. The color intensity was adjusted to accentuate the high-altitude clouds. The dark circle on the disk of Jupiter (center of image) is an artifact of the imaging system. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
Referenceless perceptual fog density prediction model
NASA Astrophysics Data System (ADS)
Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.
2014-02-01
We propose a perceptual fog density prediction model based on natural scene statistics (NSS) and "fog aware" statistical features, which can predict the visibility in a foggy scene from a single image without reference to a corresponding fogless image, without side geographical camera information, without training on human-rated judgments, and without dependency on salient objects such as lane markings or traffic signs. The proposed fog density predictor only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. A fog aware collection of statistical features is derived from a corpus of foggy and fog-free images by using a space domain NSS model and observed characteristics of foggy images such as low contrast, faint color, and shifted intensity. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The predicted fog density of the model correlates well with the measured visibility in a foggy scene as measured by judgments taken in a human subjective study on a large foggy image database. As one application, the proposed model accurately evaluates the performance of defog algorithms designed to enhance the visibility of foggy images.
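Three of the observed fog characteristics the model builds on (low contrast, faint color, shifted intensity) translate directly into simple per-patch statistics, sketched below; the paper's actual NSS feature set is considerably richer than this illustration.

```python
import numpy as np

def patch_fog_cues(rgb_patch):
    # rgb_patch: float array of shape (h, w, 3)
    gray = rgb_patch.mean(axis=2)
    contrast = gray.std()                                        # fog lowers it
    colorfulness = (rgb_patch.max(2) - rgb_patch.min(2)).mean()  # fog fades it
    brightness = gray.mean()                                     # fog shifts it up
    return contrast, colorfulness, brightness
```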
Simulation of laser beam reflection at the sea surface modeling and validation
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Repasi, Endre
2013-06-01
A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable for the pre-calculation of images for cameras operating in different spectral wavebands (visible, short-wave infrared) for a bistatic configuration of laser source and receiver under different atmospheric conditions. In the visible waveband, the calculated detected total power of reflected laser light from a 660 nm laser source is compared with data collected in a field trial. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of the laser beam reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. To predict the view of a camera, the sea surface radiance must be calculated for the specific waveband. Additionally, the radiance of laser light specularly reflected at the wind-roughened sea surface is modeled considering an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). Validation of simulation results is a prerequisite before applying the computer simulation to maritime laser applications. For validation purposes, data (images and meteorological data) were selected from field measurements, using a 660 nm cw laser diode to produce laser beam reflection at the water surface and recording images with a TV camera. The validation is done by numerical comparison of the measured total laser power extracted from recorded images with the corresponding simulation results. The results of the comparison are presented for different incident (zenith/azimuth) angles of the laser beam.
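The paper's analytic statistical sea-surface BRDF is not reproduced in the abstract; a standard ingredient of such models is the Cox-Munk wind-dependent slope distribution, sketched here in its isotropic form (an assumption about, not a quote of, the authors' model).

```python
import numpy as np

def cox_munk_slope_pdf(zx, zy, wind_speed_ms):
    # Isotropic Cox-Munk (1954) probability density of sea-surface
    # slopes (zx, zy); mean-square slope grows linearly with wind speed.
    mss = 0.003 + 5.12e-3 * wind_speed_ms
    return np.exp(-(zx**2 + zy**2) / mss) / (np.pi * mss)
```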
Reasoning About Visibility in Mirrors: A Comparison Between a Human Observer and a Camera.
Bertamini, Marco; Soranzo, Alessandro
2018-01-01
Human observers make errors when predicting what is visible in a mirror. This is true for perception with real mirrors as well as for reasoning about mirrors shown in diagrams. We created an illustration of a room, a top-down view, with a mirror on a wall and objects (nails) on the opposite wall. The task was to select which nails were visible in the mirror from a given position (viewpoint). To study the importance of the social nature of the viewpoint, we divided the sample (N = 108) into two groups. One group (n = 54) was tested with a scene in which there was the image of a person. The other group (n = 54) was tested with the same scene but with a camera replacing the person; these participants were instructed to think about what would be captured by a camera on a tripod. This manipulation tests the effect of social perspective-taking in reasoning about mirrors. As predicted, performance on the task shows an overestimation of what can be seen in a mirror and a bias to underestimate the role of the different viewpoints, that is, a tendency to treat the mirror as if it captures information independently of viewpoint. In terms of the comparison between person and camera, there were more errors for the camera, suggesting an advantage for evaluating a human viewpoint as opposed to an artificial viewpoint. We suggest that social mechanisms may be involved in perspective-taking in reasoning rather than in automatic attention allocation.
Performance Analysis of Visible Light Communication Using CMOS Sensors.
Do, Trong-Hop; Yoo, Myungsik
2016-02-29
This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis.
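Rolling-shutter VLC works because each sensor row is exposed at a slightly different time, so an on-off keyed light source appears as horizontal stripes within a single frame. A minimal decoder sketch follows (thresholding row means; real systems also equalize, synchronize, and handle inter-frame gaps, none of which is shown here).

```python
import numpy as np

def decode_rolling_shutter(frame_gray, rows_per_bit):
    # Each scan line is one time sample: average the rows, threshold
    # at the mean, then take one sample per bit period.
    row_signal = frame_gray.mean(axis=1)
    levels = (row_signal > row_signal.mean()).astype(int)
    return levels[rows_per_bit // 2::rows_per_bit]
```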
ISS, Soyuz, and Endeavour undocking seen from the SM during Expedition Four
2001-12-15
ISS004-E-5024 (15 December 2001) --- A Soyuz vehicle, docked to the International Space Station (ISS), is photographed by a crewmember on the station. A portion of the Space Shuttle Endeavour is visible in the background. The image was taken with a digital still camera.
Onufrienko with fresh fruit in the Zvezda SM, Expedition Four
2002-01-16
ISS004-E-6334 (January 2002) --- Cosmonaut Yury I. Onufrienko, Expedition Four mission commander representing Rosaviakosmos, is photographed in the Zvezda Service Module on the International Space Station (ISS). Apples and oranges are visible floating freely in front of Onufrienko. The image was taken with a digital still camera.
Usachev is visible in the open ODS hatch
2001-08-12
STS105-E-5094 (12 August 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, can be seen through the recently opened airlock hatch of Space Shuttle Discovery as he welcomes the STS-105 and Expedition Three crews. This image was taken with a digital still camera.
1998-10-30
This picture of Neptune was produced from the last whole planet images taken through the green and orange filters on NASA's Voyager 2 narrow angle camera. The images were taken at a range of 4.4 million miles from the planet, 4 days and 20 hours before closest approach. The picture shows the Great Dark Spot and its companion bright smudge; on the west limb the fast moving bright feature called Scooter and the little dark spot are visible. These clouds were seen to persist for as long as Voyager's cameras could resolve them. North of these, a bright cloud band similar to the south polar streak may be seen. http://photojournal.jpl.nasa.gov/catalog/PIA01492
2016-09-15
NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 221 miles (355 kilometers) per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047
Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System
NASA Astrophysics Data System (ADS)
Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.
2017-12-01
The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6 - 2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design, featuring a Wolter Type 1 grazing incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating, and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and for aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the sun onto a plate covered with a phosphor coating that absorbs EUV photons and fluoresces in visible light. This 10-week REU project was aimed at optimizing an off-axis mounted camera with 600-line resolution NTSC video for extremely low-light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which set the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high-magnification, low-distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extreme low-light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps toward implementation of the imaging system will include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.
Techniques for identifying dust devils in mars pathfinder images
Metzger, S.M.; Carr, J.R.; Johnson, J. R.; Parker, T.J.; Lemmon, M.T.
2000-01-01
Image processing methods used to identify and enhance dust devil features imaged by IMP (Imager for Mars Pathfinder) are reviewed. Spectral differences, visible red minus visible blue, were used for initial dust devil searches, driven by the observation that Martian dust has high red and low blue reflectance. The Martian sky proved to be more heavily dust-laden than pre-Pathfinder predictions, based on analysis of images from the Hubble Space Telescope. As a result, these initial spectral difference methods failed to contrast dust devils with background dust haze. Imager artifacts (dust motes on the camera lens, flat-field effects caused by imperfections in the CCD, and projection onto a flat sensor plane by a convex lens) further impeded the ability to resolve subtle dust devil features. Consequently, reference images containing sky with a minimal horizon were first subtracted from each spectral filter image to remove camera artifacts and reduce the background dust haze signal. Once the sky-flat preprocessing step was completed, the red-minus-blue spectral difference scheme was attempted again. Dust devils then were successfully identified as bright plumes. False-color ratios using calibrated IMP images were found useful for visualizing dust plumes, verifying initial discoveries as vortex-like features. Enhancement of monochromatic (especially blue filter) images revealed dust devils as silhouettes against brighter background sky. Experiments with principal components transformation identified dust devils in raw, uncalibrated IMP images and further showed relative movement of dust devils across the Martian surface. A variety of methods therefore served qualitative and quantitative goals for dust plume identification and analysis in an environment where such features are obscure.
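The two key preprocessing steps, sky-flat subtraction to remove imager artifacts and background haze followed by the red-minus-blue difference, are simple to express. The sketch below follows the description above but is not the IMP team's code.

```python
import numpy as np

def dust_devil_map(red, blue, red_sky_flat, blue_sky_flat):
    # Subtract per-filter sky flats, then difference the bands:
    # dust devils appear as bright plumes in the result.
    r = red.astype(np.float64) - red_sky_flat
    b = blue.astype(np.float64) - blue_sky_flat
    return r - b
```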
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames. For every one-frame shot, the electric charges generated by the photodiodes are transferred in one step to the memory of all pixels in parallel, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases requiring shooting speeds (frame rates) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera enabling much faster shooting speeds than currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore improved the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
NASA's AVIRIS Instrument Sheds New Light on Southern California Wildfires
2017-12-08
NASA's Airborne Visible Infrared Imaging Spectrometer instrument (AVIRIS), flying aboard a NASA Armstrong Flight Research Center high-altitude ER-2 aircraft, flew over the wildfires burning in Southern California on Dec. 5, 2017 and acquired this false-color image. Active fires are visible in red, ground surfaces are in green and smoke is in blue. AVIRIS is an imaging spectrometer that observes light in visible and infrared wavelengths, measuring the full spectrum of radiated energy. Unlike regular cameras with three colors, AVIRIS has 224 spectral channels from the visible through the shortwave infrared. This permits mapping of fire temperatures, fractional coverage, and surface properties, including how much fuel is available for a fire. Spectroscopy is also valuable for characterizing forest drought conditions and health to assess fire risk. AVIRIS has been observing fire-prone areas in Southern California for many years, forming a growing time series of before/after data cubes. These data are helping improve scientific understanding of fire risk and how ecosystems respond to drought and fire. https://photojournal.jpl.nasa.gov/catalog/PIA11243
Research on range-gated laser active imaging seeker
NASA Astrophysics Data System (ADS)
You, Mu; Wang, PengHui; Tan, DongJie
2013-09-01
Compared with other imaging methods such as millimeter-wave imaging, infrared imaging, and visible-light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, the most important data for autonomous target acquisition. In application terms, it can be widely used in military fields such as radar, guidance, and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system consists of two important parts. One is the laser imaging system, which uses a negative lens to diverge the light from a pulsed laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The other is the stabilization gimbal, designed as a structure rotatable in both azimuth and elevation. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on diode-pumped solid-state lasers that are passively Q-switched at 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths shorter than 900 nm. The receiver is an image intensifier tube whose microchannel plate is coupled to a high-sensitivity charge-coupled device camera. Images have been taken at ranges over one kilometer and can be taken at much longer ranges in better weather. The image frame frequency can be changed according to the requirements of guidance, with a modifiable range gate. The instantaneous field of view of the system is 2×2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the field. Two different buildings, located at ranges from 200 m up to 1000 m, were chosen as targets. To simulate the dynamic change of range between missile and target, the seeker system was placed on a truck running along a road at the expected speed. The test results show good image quality and good performance of the seeker system.
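The timing that makes range gating work is just the round-trip light travel time: the intensified camera's shutter is delayed by 2R/c after each laser pulse so that only light returning from range R is imaged. A worked example:

```python
C = 299_792_458.0  # speed of light, m/s

def gate_delay_s(target_range_m):
    # Round-trip delay for a range-gated receiver: the shutter opens
    # 2R/c after the laser pulse so only light from range R is imaged.
    return 2.0 * target_range_m / C

# A target at 1000 m implies a gate delay of about 6.67 microseconds.
print(gate_delay_s(1000.0) * 1e6)  # ~6.67
```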
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination, and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto-lens camera. Camera video is digitized, compressed, and stored in solid-state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view, which will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating-point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision
NASA Astrophysics Data System (ADS)
Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.
1995-06-01
Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an 'ultra-compact' 220 GHz imaging system will be discussed.
Mars Odyssey Observes Martian Moons
2018-02-22
Phobos and Deimos, the moons of Mars, are seen by the Mars Odyssey orbiter's Thermal Emission Imaging System, or THEMIS, camera. The images were taken in visible-wavelength light. THEMIS also recorded thermal-infrared imagery in the same scan. The apparent motion is due to progression of the camera's pointing during the 17-second span of the February 15, 2018, observation, not from motion of the two moons. This was the second observation of Phobos by Mars Odyssey; the first was on September 29, 2017. Researchers have been using THEMIS to examine Mars since early 2002, but the maneuver turning the orbiter around to point the camera at Phobos was developed only recently. The distance to Phobos from Odyssey during the observation was about 3,489 miles (5,615 kilometers). The distance to Deimos from Odyssey during the observation was about 12,222 miles (19,670 kilometers). An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA22248
The Hubble Space Telescope: UV, Visible, and Near-Infrared Pursuits
NASA Technical Reports Server (NTRS)
Wiseman, Jennifer
2010-01-01
The Hubble Space Telescope continues to push the limits on world-class astrophysics. Cameras including the Advanced Camera for Surveys and the new panchromatic Wide Field Camera 3, which was installed in last year's successful servicing mission SM4, offer imaging from near-infrared through ultraviolet wavelengths. Spectroscopic studies of sources from black holes to exoplanet atmospheres are making great advances through the versatile use of STIS, the Space Telescope Imaging Spectrograph. The new Cosmic Origins Spectrograph, also installed last year, is the most sensitive UV spectrograph to fly in space and is uniquely suited to address particular scientific questions on galaxy halos, the intergalactic medium, and the cosmic web. With these outstanding capabilities on HST come complex needs for laboratory astrophysics support, including atomic and line identification data. I will provide an overview of Hubble's current capabilities and the scientific programs and goals that particularly benefit from the studies of laboratory astrophysics.
2015-08-10
Bursts of pink and red, dark lanes of mottled cosmic dust, and a bright scattering of stars — this NASA/ESA Hubble Space Telescope image shows part of a messy barred spiral galaxy known as NGC 428. It lies approximately 48 million light-years away from Earth in the constellation of Cetus (The Sea Monster). Although a spiral shape is still just about visible in this close-up shot, overall NGC 428’s spiral structure appears to be quite distorted and warped, thought to be a result of a collision between two galaxies. There also appears to be a substantial amount of star formation occurring within NGC 428 — another telltale sign of a merger. When galaxies collide their clouds of gas can merge, creating intense shocks and hot pockets of gas and often triggering new waves of star formation. NGC 428 was discovered by William Herschel in December 1786. More recently a type Ia supernova designated SN2013ct was discovered within the galaxy by Stuart Parker of the BOSS (Backyard Observatory Supernova Search) project in Australia and New Zealand, although it is unfortunately not visible in this image. This image was captured by Hubble’s Advanced Camera for Surveys (ACS) and Wide Field and Planetary Camera 2 (WFPC2). A version of this image was entered into the Hubble’s Hidden Treasures Image Processing competition by contestants Nick Rose and the Flickr user penninecloud.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present the imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We will discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
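For readers unfamiliar with mosaicked spectral sampling, the baseline against which such algorithms are measured is simple per-band interpolation. A minimal sketch, assuming a repeating 3x3 filter tile so each band is sampled at exactly one pixel per 3x3 block; this is a generic baseline, not the paper's edge-guided or super-resolution methods:

    # Baseline demosaicking for a repeating 3x3 spectral filter mosaic: each of
    # the 9 bands is observed on a sparse grid and the missing samples are
    # filled by linear interpolation.
    import numpy as np
    from scipy.interpolate import griddata

    def demosaic_3x3(raw):
        """raw: 2D mosaicked frame. Returns an (H, W, 9) cube of interpolated bands."""
        h, w = raw.shape
        cube = np.empty((h, w, 9), dtype=float)
        yy, xx = np.mgrid[0:h, 0:w]
        for band in range(9):
            r, c = divmod(band, 3)                  # position of this band in the tile
            mask = (yy % 3 == r) & (xx % 3 == c)    # pixels where this band was sampled
            pts = np.column_stack((yy[mask], xx[mask]))
            cube[:, :, band] = griddata(pts, raw[mask], (yy, xx),
                                        method="linear", fill_value=0.0)
        return cube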
Multi-Wavelength Views of Messier 81
NASA Technical Reports Server (NTRS)
2003-01-01
The magnificent spiral arms of the nearby galaxy Messier 81 are highlighted in this image from NASA's Spitzer Space Telescope. Located in the northern constellation of Ursa Major (which also includes the Big Dipper), this galaxy is easily visible through binoculars or a small telescope. M81 is located at a distance of 12 million light-years. The main image is a composite mosaic obtained with the multiband imaging photometer for Spitzer and the infrared array camera. Thermal infrared emission at 24 microns detected by the photometer (red, bottom left inset) is combined with camera data at 8.0 microns (green, bottom center inset) and 3.6 microns (blue, bottom right inset). A visible-light image of Messier 81, obtained at Kitt Peak National Observatory, a ground-based telescope, is shown in the upper right inset. Both the visible-light picture and the 3.6-micron near-infrared image trace the distribution of stars, although the Spitzer image is virtually unaffected by obscuring dust. Both images reveal a very smooth stellar mass distribution, with the spiral arms relatively subdued. As one moves to longer wavelengths, the spiral arms become the dominant feature of the galaxy. The 8-micron emission is dominated by infrared light radiated by hot dust that has been heated by nearby luminous stars. Dust in the galaxy is bathed by ultraviolet and visible light from nearby stars. Upon absorbing an ultraviolet or visible-light photon, a dust grain is heated and re-emits the energy at longer infrared wavelengths. The dust particles are composed of silicates (chemically similar to beach sand), carbonaceous grains and polycyclic aromatic hydrocarbons and trace the gas distribution in the galaxy. The well-mixed gas (which is best detected at radio wavelengths) and dust provide a reservoir of raw materials for future star formation. The 24-micron multiband imaging photometer image shows emission from warm dust heated by the most luminous young stars. The infrared-bright clumpy knots within the spiral arms show where massive stars are being born in giant H II (ionized hydrogen) regions. Studying the locations of these star-forming regions with respect to the overall mass distribution and other constituents of the galaxy (e.g., gas) will help identify the conditions and processes needed for star formation.
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is a historic and artistic property which has to be protected, preserved, restored, and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
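Once the pose is known, the colorization step itself is ordinary pinhole projection. A minimal sketch under that assumption, with K the estimated intrinsic matrix and (R, t) the estimated extrinsics; occlusion handling, which a full implementation needs, is omitted here:

    # Project each 3D point through a pinhole camera model and sample the image.
    import numpy as np

    def colorize(points, image, K, R, t):
        """points: (N, 3) array; image: (H, W, 3). Returns (N, 3) colors (NaN off-image)."""
        cam = R @ points.T + t.reshape(3, 1)      # world -> camera frame
        uvw = K @ cam                             # camera frame -> pixel coordinates
        u = uvw[0] / uvw[2]
        v = uvw[1] / uvw[2]
        colors = np.full((points.shape[0], 3), np.nan)
        h, w = image.shape[:2]
        ok = (uvw[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors[ok] = image[v[ok].astype(int), u[ok].astype(int)]
        return colors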
Pluto Moon Nix, Half Illuminated
2015-12-18
This recently received panchromatic image of Pluto's small satellite Nix taken by the Multispectral Visible Imaging Camera (MVIC) aboard New Horizons is one of the best images of Pluto's third-largest moon generated by the NASA mission. Taken on July 14, 2015, at a range of about 14,000 miles (23,000 kilometers) from Nix, the illuminated surface is about 12 miles (19 kilometers) by 29 miles (47 kilometers). The unique perspective of this image provides new details about Nix's geologic history and impact record. http://photojournal.jpl.nasa.gov/catalog/PIA20287
2016-10-18
Pluto's present, hazy atmosphere is almost entirely free of clouds, though scientists from NASA's New Horizons mission have identified some cloud candidates after examining images taken by the New Horizons Long Range Reconnaissance Imager and Multispectral Visible Imaging Camera, during the spacecraft's July 2015 flight through the Pluto system. All are low-lying, isolated small features -- no broad cloud decks or fields -- and while none of the features can be confirmed with stereo imaging, scientists say they are suggestive of possible, rare condensation clouds. http://photojournal.jpl.nasa.gov/catalog/PIA21127
The Two Moons of Mars As Seen from 'Husband Hill'
NASA Technical Reports Server (NTRS)
2005-01-01
Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. Spirit took this succession of images at 150-second intervals from a perch atop 'Husband Hill' in Gusev Crater on martian day, or sol, 594 (Sept. 4, 2005), as the faster-moving martian moon Phobos was passing Deimos in the night sky. Phobos is the brighter object on the left and Deimos is the dimmer object on the right. The bright star Aldebaran and some other stars in the constellation Taurus are visible as star trails. Most of the other streaks in the image are the result of cosmic rays lighting up random groups of pixels in the camera. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with its panoramic camera using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
Cellular Neural Network for Real Time Image Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vagliasindi, G.; Arena, P.; Fortuna, L.
2008-03-12
Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure they are capable of processing individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potentiality of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).
Method and apparatus for calibrating a tiled display
NASA Technical Reports Server (NTRS)
Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.
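As one concrete instance of such a transformation function, luminance non-uniformity can be flattened with a per-pixel gain map derived from a captured flat-field image. This is a sketch of the idea only, not the patented system, and it assumes the camera image has already been registered to display pixel coordinates:

    # Build a gain map from a camera capture of the display showing a uniform
    # gray field, then pre-warp subsequent frames so dim regions are boosted.
    import numpy as np

    def build_gain_map(captured_flat_field):
        """captured_flat_field: camera image of the display showing a constant gray."""
        response = captured_flat_field.astype(float)
        return response.max() / np.maximum(response, 1e-6)   # boost dim regions

    def prewarp(frame, gain_map):
        """Apply the correction to an input frame before display."""
        return np.clip(frame.astype(float) * gain_map, 0, 255).astype(np.uint8)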
Composite x-ray pinholes for time-resolved microphotography of laser compressed targets.
Attwood, D T; Weinstein, B W; Wuerker, R F
1977-05-01
Composite x-ray pinholes having dichroic properties are presented. These pinholes permit both x-ray imaging and visible alignment with micron accuracy by presenting different apparent apertures in these widely disparate regions of the spectrum. Their use is mandatory in certain applications in which the x-ray detection consists of a limited number of resolvable elements whose use one wishes to maximize. Mating the pinhole camera with an x-ray streaking camera is described, along with experiments which spatially and temporally resolve the implosion of laser irradiated targets.
Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio
2010-01-01
This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
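The near-field requirement quoted above follows from the Fraunhofer distance, d = 2D²/λ. A quick check of the stated figures, assuming a representative visible wavelength of 550 nm and the 3.67 m AEOS aperture:

    # Fraunhofer (near-field) distance for two telescope apertures.
    D_1m, D_aeos = 1.0, 3.67          # aperture diameters, meters (AEOS is 3.67 m)
    wavelength = 550e-9               # representative visible wavelength, meters
    for D in (D_1m, D_aeos):
        print(f"D = {D} m: near-field extends to {2 * D**2 / wavelength / 1e3:.0f} km")
    # -> roughly 3,600 km for a 1-m telescope and ~49,000 km for AEOS, consistent
    #    with the ~3,500 km and >46,000 km values given above (the exact numbers
    #    depend on the wavelength assumed).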
Impact Site: Cassini's Final Image
2017-09-15
This monochrome view is the last image taken by the imaging cameras on NASA's Cassini spacecraft. It looks toward the planet's night side, lit by reflected light from the rings, and shows the location at which the spacecraft would enter the planet's atmosphere hours later. A natural color view, created using images taken with red, green and blue spectral filters, is also provided (Figure 1). The imaging cameras obtained this view at approximately the same time that Cassini's visual and infrared mapping spectrometer made its own observations of the impact area in the thermal infrared. This location -- the site of Cassini's atmospheric entry -- was at this time on the night side of the planet, but would rotate into daylight by the time Cassini made its final dive into Saturn's upper atmosphere, ending its remarkable 13-year exploration of Saturn. The view was acquired on Sept. 14, 2017 at 19:59 UTC (spacecraft event time). The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 394,000 miles (634,000 kilometers) from Saturn. Image scale is about 11 miles (17 kilometers) per pixel. The original image has a size of 512x512 pixels. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21895
Nondestructive defect detection in laser optical coatings
NASA Astrophysics Data System (ADS)
Marrs, C. D.; Porteus, J. O.; Palmer, J. R.
1985-03-01
Defects responsible for laser damage in visible-wavelength mirrors are observed at nondamaging intensities using a new video microscope system. Studies suggest that a defect scattering phenomenon combined with lag characteristics of video cameras makes this possible. Properties of the video-imaged light are described for multilayer dielectric coatings and diamond-turned metals.
Helms with laptop in Destiny laboratory module
2001-03-30
ISS002-E-5478 (30 March 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, works at a laptop computer in the U.S. Laboratory / Destiny module of the International Space Station (ISS). The Space Station Remote Manipulator System (SSRMS) control panel is visible to Helms' right. This image was recorded with a digital still camera.
Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer
NASA Astrophysics Data System (ADS)
Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin
2017-12-01
An image processing device for an inspection robot with adaptive polarization adjustment is proposed; the device comprises the inspection robot body, an image acquisition mechanism, a polarizer, and an automatic polarizer actuating device. The image acquisition mechanism is mounted at the front of the inspection robot body to collect image data of equipment in the substation. The polarizer is fixed on the automatic actuating device and installed in front of the image acquisition mechanism, such that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible camera as its central axis. The simulation results show that the system solves image degradation caused by glare, reflections, and shadows, so that the robot can observe details of the running status of electrical equipment. Full coverage of the inspection robot's observation targets among the substation equipment is achieved, which ensures the safe operation of the substation equipment.
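The adaptive rotation regulation can be as simple as a sweep-and-score loop. A hedged sketch of that idea, where rotate_polarizer() and grab_frame() are hypothetical stand-ins for the device's actual actuator and camera interfaces:

    # Sweep the polarizer through its range, score each candidate image for
    # glare (saturated-pixel fraction used as a simple proxy), and keep the
    # angle with the lowest score. Illustrative only.
    import numpy as np

    def select_polarizer_angle(rotate_polarizer, grab_frame, steps=36):
        best_angle, best_score = 0.0, float("inf")
        for k in range(steps):
            angle = 180.0 * k / steps      # polarizer response repeats every 180 deg
            rotate_polarizer(angle)
            frame = grab_frame().astype(float)
            score = np.mean(frame >= 250)  # fraction of saturated pixels
            if score < best_score:
                best_angle, best_score = angle, score
        rotate_polarizer(best_angle)
        return best_angle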
Non-Parametric Blur Map Regression for Depth of Field Extension.
D'Andres, Laurent; Salvador, Jordi; Kochale, Axel; Susstrunk, Sabine
2016-04-01
Real camera systems have a limited depth of field (DOF) which may cause an image to be degraded due to visible misfocus or too shallow DOF. In this paper, we present a blind deblurring pipeline able to restore such images by slightly extending their DOF and recovering sharpness in regions slightly out of focus. To address this severely ill-posed problem, our algorithm relies first on the estimation of the spatially varying defocus blur. Drawing on local frequency image features, a machine learning approach based on the recently introduced regression tree fields is used to train a model able to regress a coherent defocus blur map of the image, labeling each pixel by the scale of a defocus point spread function. A non-blind spatially varying deblurring algorithm is then used to properly extend the DOF of the image. The good performance of our algorithm is assessed both quantitatively, using realistic ground truth data obtained with a novel approach based on a plenoptic camera, and qualitatively with real images.
NASA Astrophysics Data System (ADS)
Chickadel, C. C.; Lindsay, R. W.; Clark, D.
2014-12-01
An uncooled thermal camera (microbolometer) and RGB camera were mounted in the tail section of a US Coast Guard HC-130 to observe sea ice, open water, and cloud tops through the open rear cargo doors during routine Arctic Domain Awareness (ADA) flights. Recent flights were conducted over the Beaufort Sea in June, July, and August of 2014, with flights planned for September and October. Thermal and visible images were collected at low altitude (100m) during times when the cargo doors were open and recorded high resolution information on ice floes, melt ponds, and surface temperature variability associated with the marginal ice zone (MIZ). These observations of sea ice conditions and surface water temperatures will be used to characterize floe size development and the temperature and albedo of ice ponds and leads. This information will allow for a detailed characterization of sea ice that can be used in process studies and for model evaluation, calibration of satellite remote sensing products, and initialization of sea ice prediction schemes.
Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)
NASA Technical Reports Server (NTRS)
Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie
2011-01-01
The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
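The structure of such a threshold test is straightforward even though the operational constants are not given in this abstract. A sketch with placeholder thresholds, assuming reflectances in one UV and one visible channel plus a land/ocean flag:

    # Skeleton of a threshold-based cloud mask: brightness tests on visible
    # reflectance (scene-dependent) plus a UV/visible ratio test, since clouds
    # are bright and spectrally flat. Threshold values are placeholders, not
    # the algorithm's actual constants.
    import numpy as np

    def cloud_mask(r_uv, r_vis, is_ocean,
                   t_vis_ocean=0.2, t_vis_land=0.3, t_ratio=0.8):
        """r_uv, r_vis: reflectance arrays; is_ocean: boolean array. True = cloudy."""
        bright = np.where(is_ocean, r_vis > t_vis_ocean, r_vis > t_vis_land)
        white = (r_uv / np.maximum(r_vis, 1e-6)) > t_ratio
        return bright & white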
Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D.
Lasnier, C J; Allen, S L; Ellis, R E; Fenstermacher, M E; McLean, A G; Meyer, W H; Morris, K; Seppala, L G; Crabtree, K; Van Zeeland, M A
2014-11-01
An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.
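The warping step described for spatial calibration can be illustrated with a plain homography fit between landmark correspondences. A generic OpenCV sketch, not the DIII-D code; the actual wide-angle tangential view would need a full camera model rather than a single homography:

    # Fit a homography from landmark points picked in the CAD-rendered view and
    # the camera image, then warp the render onto the data image.
    import cv2
    import numpy as np

    def register_render(render, cad_pts, img_pts, image_shape):
        """cad_pts, img_pts: (N, 2) arrays of corresponding landmarks, N >= 4."""
        H, _ = cv2.findHomography(np.float32(cad_pts), np.float32(img_pts),
                                  cv2.RANSAC)
        h, w = image_shape[:2]
        return cv2.warpPerspective(render, H, (w, h))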
2016-10-03
Two tiny moons of Saturn, almost lost amid the planet's enormous rings, are seen orbiting in this image. Pan, visible within the Encke Gap near lower-right, is in the process of overtaking the slower Atlas, visible at upper-left. All orbiting bodies, large and small, follow the same basic rules. In this case, Pan (17 miles or 28 kilometers across) orbits closer to Saturn than Atlas (19 miles or 30 kilometers across). According to the rules of planetary motion deduced by Johannes Kepler over 400 years ago, Pan orbits the planet faster than Atlas does. This view looks toward the sunlit side of the rings from about 39 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 9, 2016. The view was acquired at a distance of approximately 3.4 million miles (5.5 million kilometers) from Atlas and at a Sun-Atlas-spacecraft, or phase, angle of 71 degrees. Image scale is 21 miles (33 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20501
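The rule invoked here is Kepler's third law. For near-circular orbits it fixes both the ordering of periods and of speeds:

    \[ T^2 \propto a^3 \quad\Longrightarrow\quad v = \frac{2\pi a}{T} \propto a^{-1/2}, \]

so Pan, on the smaller orbit (semi-major axis a), necessarily has the shorter period and the higher orbital speed, which is why it overtakes Atlas.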
Multiple Aspects of the Southern California Wildfires as Seen by NASA's AVIRIS
2017-12-15
NASA's Airborne Visible Infrared Imaging Spectrometer instrument (AVIRIS), flying aboard a NASA Armstrong Flight Research Center high-altitude ER-2 aircraft, observed wildfires burning in Southern California on Dec. 5-7, 2017. AVIRIS is an imaging spectrometer that observes light in visible and infrared wavelengths, measuring the full spectrum of radiated energy. Unlike regular cameras with three colors, AVIRIS has 224 spectral channels, measuring contiguously from the visible through the shortwave infrared. Data from these flights, compared against measurements acquired earlier in the year, show many ways this one instrument can improve both our understanding of fire risk and the response to fires in progress. The top row in this image compilation shows pre-fire data acquired from June 2017. At top left is a visible-wavelength image similar to what our own eyes would see. The top middle image is a map of surface composition based on analyzing the full electromagnetic spectrum, revealing green vegetated areas and non-photosynthetic vegetation that is potential fuel as well as non-vegetated surfaces that may slow an advancing fire. The image at top right is a remote measurement of the water in tree canopies, a proxy for how much moisture is in the vegetation. The bottom row in the compilation shows data acquired from the Thomas fire in progress in December 2017. At bottom left is a visible wavelength image. The bottom middle image is an infrared image, with red at 2,250 nanometers showing fire energy, green at 1,650 nanometers showing the surface through the smoke, and blue at 1,000 nanometers showing the smoke itself. The image at bottom right is a fire temperature map using spectroscopic analysis to measure fire thermal emission recorded in the AVIRIS spectra. https://photojournal.jpl.nasa.gov/catalog/PIA22194
View of Saudi Arabia and north eastern Africa from the Apollo 17 spacecraft
1972-12-09
AS17-148-22718 (7-19 Dec. 1972) --- This excellent view of Saudi Arabia and the north eastern portion of the African continent was photographed by the Apollo 17 astronauts with a hand-held camera on their trans-lunar coast toward man's last lunar visit. Egypt, Sudan, and Ethiopia are some of the African nations visible. Iran, Iraq, and Jordan are not so clearly visible because of cloud cover and their particular location in the picture. India is dimly visible at the right of the frame. The Red Sea is seen entirely in this one single frame, a rare occurrence in Apollo photography or any photography taken from manned spacecraft. The Gulf of Suez, the Dead Sea, the Gulf of Aden, the Persian Gulf and the Gulf of Oman are also visible. This frame is one of 169 frames on film magazine NN carried aboard Apollo 17, all of which are SO368 (color) film. A 250mm lens on a 70mm Hasselblad camera recorded the image, one of 92 taken during the trans-lunar coast. Note AS17-148-22727 (also magazine NN) for an excellent full Earth picture showing the entire African continent.
Moving Beyond Color: The Case for Multispectral Imaging in Brightfield Pathology
Cukierski, William J.; Qi, Xin; Foran, David J.
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral “cube” is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l’éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears. PMID:19997528
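The CIE 1931 transformation mentioned above is a projection of a full spectrum onto three fixed color-matching curves, which is exactly why metamers exist: distinct spectra can integrate to the same triple. A minimal sketch, assuming xbar, ybar, and zbar hold the 1931 color-matching functions sampled on the same wavelength grid as the measured spectrum:

    # Project a measured spectrum onto the CIE 1931 color-matching functions.
    import numpy as np

    def spectrum_to_xyz(spectrum, xbar, ybar, zbar, dl=1.0):
        """spectrum: spectral radiance samples; dl: wavelength step in nm."""
        X = np.sum(spectrum * xbar) * dl
        Y = np.sum(spectrum * ybar) * dl
        Z = np.sum(spectrum * zbar) * dl
        return X, Y, Z   # two spectra with equal (X, Y, Z) are metamers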
The PALM-3000 high-order adaptive optics system for Palomar Observatory
NASA Astrophysics Data System (ADS)
Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa
2008-07-01
Deployed as a multi-user shared facility on the 5.1 meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368-active-actuator woofer and a 349-active-actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and a visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible light imager.
An Automatic Image-Based Modelling Method Applied to Forensic Infography
Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David
2015-01-01
This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii) high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628
Tracking Sunspots from Mars, April 2015 Animation
2015-07-10
This single frame from a sequence of six images of an animation shows sunspots as viewed by NASA's Curiosity Mars rover from April 4 to April 15, 2015. From Mars, the rover was in position to see the opposite side of the sun. The images were taken by the right-eye camera of Curiosity's Mast Camera (Mastcam), which has a 100-millimeter telephoto lens. The view on the left of each pair in this sequence has little processing other than calibration and putting north toward the top of each frame. The view on the right of each pair has been enhanced to make sunspots more visible. The apparent granularity throughout these enhanced images is an artifact of this processing. The sunspots seen in this sequence eventually produced two solar eruptions, one of which affected Earth. http://photojournal.jpl.nasa.gov/catalog/PIA19802
Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles; ...
2016-05-16
Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m₀gδx. Here, the symbols δE, δx, m₀, and g are the energy resolution, the spatial resolution, the neutron rest mass, and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.
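The quoted energy resolution can be checked directly from δE = m₀gδx:

    # Energy resolution implied by a 15-micron spatial resolution.
    m0 = 1.675e-27      # neutron rest mass, kg
    g = 9.81            # gravitational acceleration, m/s^2
    dx = 15e-6          # spatial resolution, m
    dE_J = m0 * g * dx
    print(dE_J / 1.602e-19 * 1e12)   # -> about 1.5 peV, below the stated 2 peV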
Video System Highlights Hydrogen Fires
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.
1992-01-01
Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
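The color-coding logic reads directly off the description. A sketch with illustrative thresholds and normalized band images; the actual detector combinations and priorities are not specified in this summary:

    # Color-code fire types onto a black-and-white visible scene: hydrogen -> red,
    # carbon-based fires -> blue, other hot objects -> green, per the description.
    import numpy as np

    def code_fires(visible, band_h2, band_c, band_hot, thresh=0.5):
        """All inputs are 2D arrays normalized to [0, 1]. Returns an RGB image."""
        rgb = np.stack([visible] * 3, axis=-1)          # start from the B/W scene
        rgb[band_h2 > thresh] = [1.0, 0.0, 0.0]         # hydrogen fires in red
        rgb[band_c > thresh] = [0.0, 0.0, 1.0]          # carbon-based fires in blue
        rgb[(band_hot > thresh) & (band_h2 <= thresh)
            & (band_c <= thresh)] = [0.0, 1.0, 0.0]     # other hot objects in green
        return rgb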
NASA Technical Reports Server (NTRS)
2005-01-01
This view shows the unlit face of Saturn's rings, visible via scattered and transmitted light. In these views, dark regions represent gaps and areas of higher particle densities, while brighter regions are filled with less dense concentrations of ring particles. The dim right side of the image contains nearly the entire C ring. The brighter region in the middle is the inner B ring, while the darkest part represents the dense outer B Ring. The Cassini Division and the innermost part of the A ring are at the upper-left. Saturn's shadow carves a dark triangle out of the lower right corner of this image. The image was taken in visible light with the Cassini spacecraft wide-angle camera on June 8, 2005, at a distance of approximately 433,000 kilometers (269,000 miles) from Saturn. The image scale is 22 kilometers (14 miles) per pixel. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
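The overall shape of the pipeline, deep features concatenated with handcrafted features and fed to an SVM, can be sketched as follows; extract_cnn_features() and extract_mlbp() are placeholders for the paper's specific CNN and multi-level LBP implementations, not real library calls:

    # Concatenate deep and handcrafted features into a hybrid vector, then
    # classify real vs. presentation-attack images with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    def train_pad(images, labels, extract_cnn_features, extract_mlbp):
        hybrid = np.hstack([
            np.vstack([extract_cnn_features(im) for im in images]),
            np.vstack([extract_mlbp(im) for im in images]),
        ])
        clf = SVC(kernel="rbf").fit(hybrid, labels)
        return clf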
Garcia, Jair E.; Greentree, Andrew D.; Shrestha, Mani; Dorin, Alan; Dyer, Adrian G.
2014-01-01
Background The study of the signal-receiver relationship between flowering plants and pollinators requires a capacity to accurately map both the spectral and spatial components of a signal in relation to the perceptual abilities of potential pollinators. Spectrophotometers can typically recover high resolution spectral data, but the spatial component is difficult to record simultaneously. A technique allowing for an accurate measurement of the spatial component in addition to the spectral factor of the signal is highly desirable. Methodology/Principal findings Consumer-level digital cameras potentially provide access to both colour and spatial information, but they are constrained by their non-linear response. We present a robust methodology for recovering linear values from two different camera models: one sensitive to ultraviolet (UV) radiation and another to visible wavelengths. We test responses by imaging eight different plant species varying in shape, size and in the amount of energy reflected across the UV and visible regions of the spectrum, and compare the recovery of spectral data to spectrophotometer measurements. There is often a good agreement of spectral data, although when the pattern on a flower surface is complex a spectrophotometer may underestimate the variability of the signal as would be viewed by an animal visual system. Conclusion Digital imaging presents a significant new opportunity to reliably map flower colours to understand the complexity of these signals as perceived by potential pollinators. Compared to spectrophotometer measurements, digital images can better represent the spatio-chromatic signal variability that would likely be perceived by the visual system of an animal, and should expand the possibilities for data collection in complex, natural conditions. However, and in spite of its advantages, the accuracy of the spectral information recovered from camera responses is subject to variations in the uncertainty levels, with larger uncertainties associated with low radiance levels. PMID:24827828
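The linearization problem has a familiar minimal form: consumer cameras apply a non-linear tone curve, so radiometric work needs approximately linear values first. A sketch assuming an sRGB-like encoding; the paper fits per-camera response curves, which is more general than this fixed formula:

    # Invert an sRGB-style tone curve to recover approximately linear values.
    import numpy as np

    def srgb_to_linear(v):
        """v: pixel values scaled to [0, 1]. Returns approximately linear values."""
        v = np.asarray(v, dtype=float)
        return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)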
JunoCam: Science and Outreach Opportunities with Juno
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Orton, G. S.
2015-12-01
JunoCam is a visible imager on the Juno spacecraft en route to Jupiter. Although the primary role of the camera is outreach, science objectives will be addressed too. JunoCam is a wide-angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm. Juno's elliptical polar orbit will offer unique views of Jupiter's polar regions with a spatial scale of ~50 km/pixel. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired. Stereo images and images taken with the methane filter will allow us to estimate cloud-top heights. Resolution exceeds that of Cassini about an hour from closest approach, and at closest approach images will have a spatial scale of ~3 km/pixel. JunoCam is a push-frame imager on a rotating spacecraft. The use of time-delayed integration takes advantage of the spacecraft spin to build up signal. JunoCam will acquire limb-to-limb views of Jupiter during a spacecraft rotation, and has the possibility of acquiring images of the rings from between Jupiter and the inner edge of the rings. Galilean satellite views will be fairly distant but some images will be acquired. Small ring moons Metis and Adrastea will also be imaged. The theme of our outreach is "science in a fish bowl", with an invitation to the science community and the public to participate. Amateur astronomers will supply their ground-based images for planning, so that we can predict when prominent atmospheric features will be visible. With the aid of professional astronomers observing at infrared wavelengths, we'll predict when hot spots will be visible to JunoCam. Amateur image processing enthusiasts are prepared to create image products. Between the planning and the products will be the decision-making on what images to take, when, and why. We invite our colleagues to propose science questions for JunoCam to address, and to be part of the participatory process of deciding how to use our resources and scientifically analyze the data.
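Push-frame TDI can be miniaturized to a few lines: successive reads of the same scene, each shifted one row by the spin, are re-aligned and co-added, so signal grows with the number of reads while read noise grows only as its square root. An illustrative sketch, with edge wraparound ignored:

    # Co-add spin-shifted frames to emulate time-delayed integration.
    import numpy as np

    def tdi_coadd(frames):
        """frames: list of 2D arrays, frame k shifted k rows relative to frame 0."""
        acc = np.zeros_like(frames[0], dtype=float)
        for k, f in enumerate(frames):
            acc += np.roll(f, -k, axis=0)   # undo the k-row shift, then accumulate
        return acc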
1986-01-17
Range: 9.1 million kilometers (5.7 million miles). P-29478C. These two pictures of Uranus, one in true color and the other in false color, were taken by Voyager 2's narrow-angle camera. The picture at left has been processed to show Uranus as the human eye would see it from the vantage point of the spacecraft. The image is a composite of shots taken through blue, green, and orange filters. The darker shadings at the upper right of the disk correspond to day-night boundaries on the planet. Beyond this boundary lies the hidden northern hemisphere of Uranus, which currently remains in total darkness as the planet rotates. The blue-green color results from the absorption of red light by methane gas in Uranus' deep, cold, and remarkably clear atmosphere. The picture at right uses false color and extreme contrast to bring out subtle details in the polar region of Uranus. Images obtained through ultraviolet, violet, and orange filters were respectively converted to the same blue, green, and red colors used to produce the picture at left. The very slight contrasts visible in true color are greatly exaggerated here. In this false color picture, Uranus reveals a dark polar hood surrounded by a series of progressively lighter concentric bands. One possible explanation is that a brownish haze or smog, concentrated around the pole, is arranged into bands by zonal motions of the upper atmosphere. Several artifacts of the optics and processing are visible. The occasional donut shapes are shadows cast by dust in the camera optics; the processing needed to bring out faint features also brings out camera blemishes. In addition, the bright pink strip at the lower edge of the planet's limb is an artifact of the image enhancement. In fact, the limb is dark and uniform in color around the planet.
Enhancing swimming pool safety by the use of range-imaging cameras
NASA Astrophysics Data System (ADS)
Geerardyn, D.; Boulanger, S.; Kuijk, M.
2015-05-01
Drowning is the cause of death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization. Currently, most swimming pools only use lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater, while keeping the large field-of-view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera, and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light of 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of the reflections of a partially-reflecting surface. The combination of a post-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve on the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety through the insertion of a post-processing filter and the use of another light source.
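The ranging principle common to all these cameras is that distance follows from the round trip of a light pulse, d = c·t/2; underwater the effective speed drops to c/n, with n ≈ 1.33, which shortens the apparent range. A worked example:

    # Distance from the round-trip time of a light pulse.
    C = 299_792_458.0                    # speed of light in vacuum, m/s

    def tof_distance(round_trip_seconds, n=1.0):
        """n: refractive index of the medium (about 1.33 for water)."""
        return (C / n) * round_trip_seconds / 2.0

    print(tof_distance(20e-9))           # a 20 ns round trip in air -> about 3 m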
A new omni-directional multi-camera system for high resolution surveillance
NASA Astrophysics Data System (ADS)
Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2014-05-01
Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on a parabolic mirror or fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth map estimation, and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.
Development of low-cost high-performance multispectral camera system at Banpil
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.
2014-05-01
Banpil Photonics (Banpil) has developed a low-cost high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity needing less than 100 electrons, a high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications, expanding deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system and their forecast cost structure is presented.
South Malea Planum, By The Dawn's Early Light
NASA Technical Reports Server (NTRS)
1999-01-01
MOC 'sees' by the dawn's early light! This picture was taken over the high southern polar latitudes during the first week of May 1999. The area shown is currently in southern winter darkness. Because sunlight is scattered over the horizon by aerosols--dust and ice particles--suspended in the atmosphere, sufficient light reaches regions within a few degrees of the terminator (the line dividing night and day) to be visible to the Mars Global Surveyor Mars Orbiter Camera (MOC) when the maximum exposure settings are used. This image shows a bright, wispy cloud hanging over southern Malea Planum. This cloud would not normally be visible, since it is currently in darkness. At the time this picture was taken, the sun was more than 5.7° below the northern horizon. The scene covers an area 3 kilometers (1.9 miles) wide. Again, the illumination is from the top. In this frame, the surface appears a relatively uniform gray. At the time the picture was acquired, the surface was covered with south polar wintertime frost. The highly reflective frost, in fact, may have contributed to the increased visibility of this surface. This 'twilight imaging' technique for viewing Mars can only work near the terminator; thus in early May only regions between about 67°S and 74°S were visible in twilight images in the southern hemisphere, and a similar narrow latitude range could be imaged in the northern hemisphere. MOC cannot 'see' in the total darkness of full-borne night. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
NASA Astrophysics Data System (ADS)
Russell, E.; Chi, J.; Waldo, S.; Pressley, S. N.; Lamb, B. K.; Pan, W.
2017-12-01
Diurnal and seasonal gas fluxes vary by crop growth stage. Digital cameras are increasingly being used to monitor inter-annual changes in vegetation phenology in a variety of ecosystems. These cameras are not designed as scientific instruments, but the information they gather can add value to established measurement techniques (i.e., eddy covariance). This work combined deconstructed digital images with eddy covariance data from five agricultural sites (1 fallow, 4 cropped) in the inland Pacific Northwest, USA. The data were broken down with respect to crop stage and management activities. The fallow field highlighted the camera response to changing net radiation, illumination, and rainfall. At the cropped sites, the net ecosystem exchange, gross primary production, and evapotranspiration were correlated with the greenness and redness values derived from the images over the growing season. However, the color values do not change quickly enough to respond to day-to-day variability in the flux exchange, as the two measurement types are based on different processes. The management practices and changes in phenology through the growing season were not visible within the camera data, though the camera did capture the general evolution of the ecosystem fluxes.
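The greenness and redness values referred to above are conventionally the chromatic coordinates gcc = G/(R+G+B) and rcc = R/(R+G+B), averaged over a region of interest in each daily image. A minimal Python sketch of that computation (assuming the image is already loaded as a numpy RGB array and cropped to the region of interest; the function name is ours):

    import numpy as np

    def chromatic_coordinates(rgb):
        # rgb: (H, W, 3) array from one camera image, region of interest
        # already cropped. Returns the green and red chromatic coordinates
        # gcc = G/(R+G+B) and rcc = R/(R+G+B), the usual greenness/redness.
        rgb = np.asarray(rgb, dtype=float)
        r = rgb[..., 0].mean()
        g = rgb[..., 1].mean()
        b = rgb[..., 2].mean()
        total = r + g + b
        return g / total, r / total

One pair of values per daily image yields the phenology time series that can then be compared against eddy-covariance quantities such as NEE, GPP, and ET.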
An Accreting Protoplanet: Confirmation and Characterization of LkCa15b
NASA Astrophysics Data System (ADS)
Follette, Katherine; Close, Laird; Males, Jared; Macintosh, Bruce; Sallum, Stephanie; Eisner, Josh; Kratter, Kaitlin M.; Morzinski, Katie; Hinz, Phil; Weinberger, Alycia; Rodigas, Timothy J.; Skemer, Andrew; Bailey, Vanessa; Vaz, Amali; Defrere, Denis; Spalding, Eckhart; Tuthill, Peter
2015-12-01
We present a visible light adaptive optics direct imaging detection of a faint point source separated by just 93 milliarcseconds (~15 AU) from the young star LkCa 15. Using Magellan AO's visible light camera in Simultaneous Differential Imaging (SDI) mode, we imaged the star at Hydrogen alpha and in the neighboring continuum as part of the Giant Accreting Protoplanet Survey (GAPplanetS) in November 2015. The continuum images provide a sensitive and simultaneous probe of PSF residuals and instrumental artifacts, allowing us to isolate H-alpha accretion luminosity from the LkCa 15b protoplanet, which lies well inside of the LkCa 15 transition disk gap. This detection, combined with a nearly simultaneous near-infrared detection with the Large Binocular Telescope, provides an unprecedented glimpse at a planetary system during the epoch of planet formation. [Nature result in press.]
Dust measurements in tokamaks (invited).
Rudakov, D L; Yu, J H; Boedo, J A; Hollmann, E M; Krasheninnikov, S I; Moyer, R A; Muller, S H; Pigarov, A Yu; Rosenberg, M; Smirnov, R D; West, W P; Boivin, R L; Bray, B D; Brooks, N H; Hyatt, A W; Wong, C P C; Roquemore, A L; Skinner, C H; Solomon, W M; Ratynskaia, S; Fenstermacher, M E; Groth, M; Lasnier, C J; McLean, A G; Stangeby, P C
2008-10-01
Dust production and accumulation present potential safety and operational issues for ITER. Dust diagnostics can be divided into two groups: diagnostics of dust on surfaces and diagnostics of dust in plasma. Diagnostics from both groups are employed in contemporary tokamaks; new diagnostics suitable for ITER are also being developed and tested. Dust accumulation in ITER is likely to occur in hidden areas, e.g., between tiles and under divertor baffles. A novel electrostatic dust detector for monitoring dust in these regions has been developed and tested at PPPL. In the DIII-D tokamak, dust diagnostics include Mie scattering from Nd:YAG lasers, visible imaging, and spectroscopy. Laser scattering is able to resolve particles between 0.16 and 1.6 μm in diameter; using these data, the total dust content in the edge plasmas and trends in the dust production rates within this size range have been established. Individual dust particles are observed by visible imaging using fast framing cameras, detecting dust particles of a few microns in diameter and larger. Dust velocities and trajectories can be determined in two dimensions with a single camera or in three dimensions using multiple cameras, but determination of particle size is challenging. In order to calibrate diagnostics and benchmark dust dynamics modeling, precharacterized carbon dust has been injected into the lower divertor of DIII-D. Injected dust is seen by cameras, and spectroscopic diagnostics observe an increase in carbon line (CI, CII, C2 dimer) and thermal continuum emissions from the injected dust. The latter observation can be used in the design of novel dust survey diagnostics.
Can we match ultraviolet face images against their visible counterparts?
NASA Astrophysics Data System (ADS)
Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.
2015-05-01
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, under variable illumination conditions and expressions, is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (from 100 nm to 400 nm in wavelength), or UV, face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold: (i) we used a camera sensor designed with the capability to acquire UV images at short ranges, and generated a dual-band (VIS and UV) database that is composed of multiple, full-frontal face images of 50 subjects, collected in two sessions spanning a period of 2 months; (ii) for each dataset, we determined which set of face image pre-processing algorithms is more suitable for face matching; and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) algorithms achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging and requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images is investigated.
Inflight Radiometric Calibration of New Horizons' Multispectral Visible Imaging Camera (MVIC)
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Parker, A. H.; Olkin, C. B.; Reuter, D. C.; Ennico, K.; Grundy, W. M.; Graps, A. L.; Harrison, K. P.; Throop, H. B.; Buie, M. W.;
2016-01-01
We discuss two semi-independent calibration techniques used to determine the inflight radiometric calibration for the New Horizons Multi-spectral Visible Imaging Camera (MVIC). The first calibration technique compares the measured number of counts (DN) observed from a number of well calibrated stars to those predicted using the component-level calibration. The ratio of these values provides a multiplicative factor that allows a conversion from the preflight calibration to the more accurate inflight one, for each detector. The second calibration technique is a channel-wise relative radiometric calibration for MVIC's blue, near-infrared and methane color channels using Hubble and New Horizons observations of Charon and scaling from the red channel stellar calibration. Both calibration techniques produce very similar results (better than 7% agreement), providing strong validation for the techniques used. Since the stellar calibration described here can be performed without a color target in the field of view and covers all of MVIC's detectors, this calibration was used to provide the radiometric keyword values delivered by the New Horizons project to the Planetary Data System (PDS). These keyword values allow each observation to be converted from counts to physical units; a description of how these keyword values were generated is included. Finally, mitigation techniques adopted for the gain drift observed in the near-infrared detector and one of the panchromatic framing cameras are also discussed.
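As a rough illustration of the first (stellar) technique, the multiplicative factor is simply the mean ratio of observed to predicted counts over the set of calibration stars. A sketch with hypothetical array inputs, not the project's actual pipeline:

    import numpy as np

    def inflight_calibration_factor(observed_dn, predicted_dn):
        # observed_dn: measured counts for each calibration star;
        # predicted_dn: counts predicted from the component-level calibration.
        # The mean ratio is the multiplicative preflight-to-inflight factor;
        # the standard error gives a rough per-detector uncertainty.
        ratios = np.asarray(observed_dn, dtype=float) / np.asarray(predicted_dn, dtype=float)
        return ratios.mean(), ratios.std(ddof=1) / np.sqrt(ratios.size)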
Coaxial fundus camera for ophthalmology
NASA Astrophysics Data System (ADS)
de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.
2015-09-01
A Fundus Camera for ophthalmology is a high-definition device which needs to meet low-light illumination of the human retina, high resolution in the retina, and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic Fundus Camera. The illumination is performed by an LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture-lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal-length doublet corrected for infinity, making the system easy to operate and very compact.
Spitzer Makes 'Invisible' Visible
NASA Technical Reports Server (NTRS)
2004-01-01
Hidden behind a shroud of dust in the constellation Cygnus is a stellar nursery called DR21, which is giving birth to some of the most massive stars in our galaxy. Visible light images reveal no trace of this interstellar cauldron because of heavy dust obscuration. In fact, visible light is attenuated in DR21 by a factor of more than 10,000,000,000,000,000,000,000,000,000,000,000,000,000 (ten thousand trillion heptillion). New images from NASA's Spitzer Space Telescope allow us to peek behind the cosmic veil and pinpoint one of the most massive natal stars yet seen in our Milky Way galaxy. The never-before-seen star is 100,000 times as bright as the Sun. Also revealed for the first time is a powerful outflow of hot gas emanating from this star and bursting through a giant molecular cloud. The colorful image is a large-scale composite mosaic assembled from data collected at a variety of different wavelengths. Views at visible wavelengths appear blue, near-infrared light is depicted as green, and mid-infrared data from the InfraRed Array Camera (IRAC) aboard NASA's Spitzer Space Telescope is portrayed as red. The result is a contrast between structures seen in visible light (blue) and those observed in the infrared (yellow and red). A quick glance shows that most of the action in this image is revealed to the unique eyes of Spitzer. The image covers an area about two times that of a full moon.
Real-time millimeter-wave imaging radiometer for avionic synthetic vision
NASA Astrophysics Data System (ADS)
Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.
1994-07-01
ThermoTrex Corporation (TTC) has developed an imaging radiometer, the passive microwave camera (PMC), that uses an array of frequency-scanned antennas coupled to a multi-channel acousto-optic (Bragg cell) spectrum analyzer to form visible images of a scene through acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output of the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. One application of this system could be its incorporation into an enhanced vision system to provide pilots with a clear view of the runway during fog and other adverse weather conditions. The unique PMC system architecture will allow compact large-aperture implementations because of its flat antenna sensor. Other potential applications include air traffic control, all-weather area surveillance, fire detection, and security. This paper describes the architecture of the TTC PMC and shows examples of images acquired with the system.
2015-10-15
NASA's Cassini spacecraft zoomed by Saturn's icy moon Enceladus on Oct. 14, 2015, capturing this stunning image of the moon's north pole. A companion view from the wide-angle camera (PIA20010) shows a zoomed out view of the same region for context. Scientists expected the north polar region of Enceladus to be heavily cratered, based on low-resolution images from the Voyager mission, but high-resolution Cassini images show a landscape of stark contrasts. Thin cracks cross over the pole -- the northernmost extent of a global system of such fractures. Before this Cassini flyby, scientists did not know if the fractures extended so far north on Enceladus. North on Enceladus is up. The image was taken in visible green light with the Cassini spacecraft narrow-angle camera. The view was acquired at a distance of approximately 4,000 miles (6,000 kilometers) from Enceladus and at a Sun-Enceladus-spacecraft, or phase, angle of 9 degrees. Image scale is 115 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19660
Utilizing the Southwest Ultraviolet Imaging System (SwUIS) on the International Space Station
NASA Astrophysics Data System (ADS)
Schindhelm, Eric; Stern, S. Alan; Ennico-Smith, Kimberly
2013-09-01
We present the Southwest Ultraviolet Imaging System (SwUIS), a compact, low-cost instrument designed for remote sensing observations from a manned platform in space. It has two chief configurations: a high spatial resolution mode with a 7-inch Maksutov-Cassegrain telescope, and a large field-of-view camera mode using a lens assembly. It can operate with either an intensified CCD or an electron multiplying CCD camera. Interchangeable filters and lenses enable broadband and narrowband imaging at UV/visible/near-infrared wavelengths, over a range of spatial resolution. SwUIS has flown previously on Space Shuttle flights STS-85 and STS-93, where it recorded multiple UV images of planets, comets, and vulcanoids. We describe the instrument and its capabilities in detail. SwUIS's broad wavelength coverage and versatile range of hardware configurations make it an attractive option for use as a facility instrument for Earth science and astronomical imaging investigations aboard the International Space Station.
New Orleans after Hurricane Katrina
2005-09-08
JSC2005e37990 (8 September 2005) --- Flooding of large sections of I-610 and the I-610/I-10 interchange (center) are visible to the east of the 17th Street Canal in this image acquired on September 8, 2005 from the International Space Station. Flooded regions are dark greenish brown, while dry areas are light brown to tan. North is to top of image, which was cropped from the digital still camera's original frame, ISS011-E-12527.
Martian Terrain Near Curiosity Precipice Target
2016-12-06
This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140
Daylight coloring for monochrome infrared imagery
NASA Astrophysics Data System (ADS)
Gabura, James
2015-05-01
The effectiveness of infrared imagery in poor visibility situations is well established, and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However, there is a problem: the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness, consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single-band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long-wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscure detail. We anticipate that this colorizing method will lead to a better user experience, faster reaction times, and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture-discerning feature by adding specialized filters for discriminating specific targets.
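The abstract does not spell out the texture-to-color mapping, but one plausible reading is to pair each pixel's intensity with a local-texture measure (for example, a local standard deviation) and look both up in a color table built offline from daylight imagery. A sketch under those assumptions; the palette construction is omitted and all names are ours, not the author's:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_texture(img, size=9):
        # Local standard deviation in a size x size window: a simple,
        # rotation-insensitive texture measure.
        img = np.asarray(img, dtype=float)
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img * img, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    def colorize(ir, palette):
        # palette: hypothetical (NI, NT, 3) lookup table mapping (intensity,
        # texture) bins to RGB, built from co-registered daylight photographs.
        ir = np.asarray(ir, dtype=float)
        tex = local_texture(ir)
        ni, nt = palette.shape[0], palette.shape[1]
        i = np.clip((ir / ir.max() * (ni - 1)).astype(int), 0, ni - 1)
        t = np.clip((tex / tex.max() * (nt - 1)).astype(int), 0, nt - 1)
        return palette[i, t]  # (H, W, 3) colorized image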
Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik
2017-02-10
This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high-speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LED or extract the information embedded in these frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted by considering the incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
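A minimal sketch of the kind of per-pixel probability described here, combining an intensity likelihood with a spatial prior centered on the position predicted from optical flow and previous frames. The Gaussian prior, logistic likelihood, and parameter values are our assumptions, not the paper's exact model:

    import numpy as np

    def led_posterior(frame, predicted_xy, sigma_px=15.0, thresh=200.0):
        # frame: grayscale image; predicted_xy: (x, y) LED position predicted
        # from optical flow / statistics of previous frames.
        f = np.asarray(frame, dtype=float)
        h, w = f.shape
        yy, xx = np.mgrid[0:h, 0:w]
        # Spatial prior: Gaussian around the predicted position.
        d2 = (xx - predicted_xy[0]) ** 2 + (yy - predicted_xy[1]) ** 2
        prior = np.exp(-d2 / (2.0 * sigma_px ** 2))
        # Intensity likelihood: soft threshold on pixel brightness.
        likelihood = 1.0 / (1.0 + np.exp(-(f - thresh) / 10.0))
        post = prior * likelihood
        return post / post.sum()

    def detect_led(frame, predicted_xy):
        p = led_posterior(frame, predicted_xy)
        row, col = np.unravel_index(np.argmax(p), p.shape)
        return col, row  # back to (x, y)

Weighting brightness by the motion-predicted prior is what lets a heavily blurred LED still be localized when raw thresholding fails.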
NASA Astrophysics Data System (ADS)
Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi
2013-06-01
Due to increasing security concerns, a complete security system should consist of two major components: a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high-resolution imagery for real-time behavior understanding, research on automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require prior knowledge of the intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on knowledge of the intrinsic parameters of the PTZ camera and relative positions. Experimental results demonstrate that our proposed algorithm presents substantially reduced computational complexity and improved flexibility at the cost of slightly decreased pixel accuracy, as compared to Chen and Wang's method [18].
2017-09-15
This view of Saturn's A ring features a lone "propeller" -- one of many such features created by small moonlets embedded in the rings as they attempt, unsuccessfully, to open gaps in the ring material. The image was taken by NASA's Cassini spacecraft on Sept. 13, 2017. It is among the last images Cassini sent back to Earth. The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 420,000 miles (676,000 kilometers) from Saturn. Image scale is 2.3 miles (3.7 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21894
High speed spectral measurements of IED detonation fireballs
NASA Astrophysics Data System (ADS)
Gordon, J. Motos; Spidell, Matthew T.; Pitz, Jeremey; Gross, Kevin C.; Perram, Glen P.
2010-04-01
Several homemade explosives (HMEs) were manufactured and detonated at a desert test facility. Visible and infrared signatures were collected using two Fourier transform spectrometers, two thermal imaging cameras, a radiometer, and a commercial digital video camera. Spectral emissions from the post-detonation combustion fireball were dominated by continuum radiation. The events were short-lived, decaying in total intensity by an order of magnitude within approximately 300 ms after detonation. The HME detonation produced a dust cloud in the immediate area that surrounded and attenuated the emitted radiation from the fireball. Visible imagery revealed a dark particulate (soot) cloud within the larger surrounding dust cloud. The ejected dust clouds attenuated much of the radiation from the post-detonation combustion fireballs, thereby reducing the signal-to-noise ratio. The poor SNR at later times made it difficult to detect selective radiation from by-product gases on the time scale (~500 ms) in which they have been observed in other HME detonations.
7. VIEW OF TIP TOP AND PHILLIPS MINES. PHOTO MADE ...
7. VIEW OF TIP TOP AND PHILLIPS MINES. PHOTO MADE FROM THE 'NOTTINGHAM' SADDLE VISIBLE IN PHOTOGRAPHS ID-31-3 AND ID-31-6. CAMERA POINTED NORTHEAST. TIP TOP IS CLEARLY VISIBLE IN UPPER RIGHT; RUNNING A STRAIGHT EDGE THROUGH THE TRUNK LINE OF SMALL TREE IN LOWER RIGHT THROUGH TRUNK LINE OF LARGER TREE WILL DIRECT ONE TO LIGHT AREA WHERE TIP TOP IS LOCATED; BLACK SQUARE IS THE RIGHT WINDOW ON WEST SIDE (FRONT) OF STRUCTURE. PHILLIPS IS VISIBLE BY FOLLOWING TREE LINE DIAGONALLY THROUGH IMAGE TO FAR LEFT SIDE. SULLIVAN IS HIDDEN IN THE TREE TO THE RIGHT OF PHILLIPS. - Florida Mountain Mining Sites, Silver City, Owyhee County, ID
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes, whose contrast spans four or more orders of magnitude, on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low Dynamic-Range (LDR) devices with at most two orders of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the way the Human Visual System (HVS) processes images. Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast), and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
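One common way to realize such a scheme is a Gaussian filter bank that splits log-luminance into a coarse base plus band-pass detail layers, compresses the base, and adds back each detail layer with a gain that depends non-linearly on local detail energy. The sketch below follows that general pattern; the filter scales, exponent, and gain limits are illustrative choices, not the authors' values:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def compress_hdr(lum, sigmas=(1, 4, 16), base_gain=0.6, eps=1e-6):
        # lum: positive HDR luminance. Split log-luminance into a coarse base
        # plus band-pass detail layers; compress the base, then add back each
        # detail layer with a gain that falls as local detail energy grows
        # (strong edges get less boost, limiting halos; the clip caps noise).
        log_l = np.log(np.asarray(lum, dtype=float) + eps)
        layers, base = [], log_l
        for s in sigmas:
            blurred = gaussian_filter(base, s)
            layers.append(base - blurred)  # band-pass detail layer
            base = blurred
        out = base_gain * base  # dynamic range compression of the base
        for d in layers:
            energy = gaussian_filter(np.abs(d), 4) + eps
            gain = np.clip((energy / energy.max()) ** -0.3, 1.0, 4.0)
            out += gain * d
        return np.exp(out) - eps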
Acoustic holography: Problems associated with construction and reconstruction techniques
NASA Technical Reports Server (NTRS)
Singh, J. J.
1978-01-01
The implications of the difference between the inspecting and interrogating radiations are discussed. For real-time, distortionless sound viewing, it is recommended that infrared radiation of wavelength comparable to the inspecting sound waves be used. The infrared images can be viewed with (IR-to-visible) converter phosphors. The real-time display of the visible image of the acoustically inspected object at low sound levels, such as are used in medical diagnosis, is evaluated. In this connection, attention is drawn to the need for a phosphor screen whose optical transmission at any point is directly related to the incident electron-beam intensity at that point. Such a screen, coupled with an acoustical camera, can enable instantaneous sound-wave reconstruction.
Rainbow correlation imaging with macroscopic twin beam
NASA Astrophysics Data System (ADS)
Allevi, Alessia; Bondani, Maria
2017-06-01
We present the implementation of a correlation-imaging protocol that exploits both the spatial and spectral correlations of macroscopic twin-beam states generated by parametric downconversion. In particular, the spectral resolution of an imaging spectrometer coupled to an EMCCD camera is used in a proof-of-principle experiment to encrypt and decrypt a simple code to be transmitted between two parties. In order to optimize the trade-off between visibility and resolution, we provide the characterization of the correlation images as a function of the spatio-spectral properties of twin beams generated at different pump power values.
1989-08-21
Range: 4.8 million km (3 million miles) P-34648 This Voyager 2 sixty-one-second exposure, shot through clear filters, shows Neptune's rings. The Voyager cameras were programmed to make a systematic search of the entire ring system for new material. The previously discovered ring arc is visible as a long bright streak at the bottom of the image. Extending beyond the bright arc is a much fainter component which follows the arc in its orbit. This faint material was also visible leading the ring arc and, in total, covers at least half of the orbit before it becomes too faint to identify. Also visible in this image is a continuous ring of faint material previously identified as a possible ring arc by Voyager. This continuous ring is located just outside the orbit of the moon 1989N3, which was also discovered by Voyager. This moon is visible as a streak in the lower left; the smear of 1989N3 is due to its own orbital motion during the exposure. Extreme computer processing of this image was performed to enhance the extremely faint features of Neptune's moon system. The dark area surrounding the moon, as well as the bright corners, are due to this special processing.
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10³ features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10⁵ closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
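The core of step (2) is classical two-view triangulation from the camera models solved in step (1). A compact linear (direct linear transform) version of that computation, independent of the commercial system actually used:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # P1, P2: 3x4 projection matrices from the geometric-control solution;
        # x1, x2: (u, v) image coordinates of the same feature in two images.
        # Solves the homogeneous system A X = 0 by SVD (linear / DLT method).
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]  # 3-D point in the site frame

Applying this to each of the ~3 × 10⁵ matched points yields the cloud of 3-D coordinates that is gridded into a DTM.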
Arakawa, Takahiro; Sato, Toshiyuki; Iitani, Kenta; Toma, Koji; Mitsubayashi, Kohji
2017-04-18
Various volatile organic compounds can be found in human transpiration, breath, and body odor. In this paper, a novel two-dimensional fluorometric imaging system, known as a "sniffer-cam," for ethanol vapor released from human breath and palm skin was constructed and validated. This imaging system measures ethanol vapor concentrations as intensities of fluorescence through an enzymatic reaction induced by alcohol dehydrogenase (ADH). The imaging system consisted of a multiple ultraviolet light-emitting diode (UV-LED) excitation sheet, an ADH enzyme-immobilized mesh substrate, and a highly sensitive CCD camera. This imaging system uses ADH for recognition of ethanol vapor. It measures ethanol vapor by measuring the fluorescence of nicotinamide adenine dinucleotide (NADH), which is produced by an enzymatic reaction on the mesh. This NADH fluorometric imaging system achieved two-dimensional real-time imaging of ethanol vapor distribution (0.5-200 ppm). The system showed rapid and accurate responses and a visible measurement, which could lead to real-time analysis of metabolic function in the near future.
Stereo optical guidance system for control of industrial robots
NASA Technical Reports Server (NTRS)
Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)
1992-01-01
A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.
Opportunity Landing Spot Panorama (3-D Model)
NASA Technical Reports Server (NTRS)
2004-01-01
The rocky outcrop traversed by the Mars Exploration Rover Opportunity is visible in this three-dimensional model of the rover's landing site. Opportunity has acquired close-up images along the way, and scientists are using the rover's instruments to closely examine portions of interest. The white fragments that look crumpled near the center of the image are portions of the airbags. Distant scenery is displayed on a spherical backdrop or 'billboard' for context. Artifacts near the top rim of the crater are a result of the transition between the three-dimensional model and the billboard. Portions of the terrain model lacking sufficient data appear as blank spaces or gaps, colored reddish-brown for better viewing. This image was generated using special software from NASA's Ames Research Center and a mosaic of images taken by the rover's panoramic camera.
Impediment to Spirit Drive on Sol 1806
NASA Technical Reports Server (NTRS)
2009-01-01
The hazard avoidance camera on the front of NASA's Mars Exploration Rover Spirit took this image after a drive by Spirit on the 1,806th Martian day, or sol, (January 31, 2009) of Spirit's mission on the surface of Mars. The wheel at the bottom right of the image is Spirit's right-front wheel. Because that wheel no longer turns, Spirit drives backwards dragging that wheel. The drive on Sol 1806 covered about 30 centimeters (1 foot). The rover team had planned a longer drive, but Spirit stopped short, apparently from the right front wheel encountering the partially buried rock visible next to that wheel. The hazard avoidance cameras on the front and back of the rover provide wide-angle views. The hill on the horizon in the right half of this image is Husband Hill. Spirit reached the summit of Husband Hill in 2005.
Automatic panoramic thermal integrated sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.
2005-05-01
Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defense and offensive operations, as well as serve as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The ViperView is an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.
Balancing Science Objectives and Operational Constraints: A Mission Planner's Challenge
NASA Technical Reports Server (NTRS)
Weldy, Michelle
1996-01-01
The Air Force Miniature Sensor Technology Integration (MSTI-3) satellite's primary mission is to characterize Earth's atmospheric background clutter. MSTI-3 will use three cameras for data collection: a mid-wave infrared imager, a short-wave infrared imager, and a visible imaging spectrometer. Mission science objectives call for the collection of over 2 million images within the one-year mission life. In addition, operational constraints limit camera usage to four operations of twenty minutes per day, with no more than 10,000 data and calibration images collected per day. To balance the operational constraints and science objectives, the mission planning team has designed a planning process to generate event schedules and sensor operation timelines. Each set of constraints, including spacecraft performance capabilities, the camera filters, the geographical regions, the spacecraft-Sun-Earth geometries of interest, and remote tracking station deconfliction, has been accounted for in this methodology. To aid in this process, the mission planning team is building a series of tools from commercial off-the-shelf software. These include the mission manifest, which builds a daily schedule of events, and the MSTI Scene Simulator, which helps build geometrically correct scans. These tools provide an efficient, responsive, and highly flexible architecture that maximizes data collection while minimizing mission planning time.
A dual-band adaptor for infrared imaging.
McLean, A G; Ahn, J-W; Maingi, R; Gray, T K; Roquemore, A L
2012-05-01
A novel imaging adaptor providing the capability to extend a standard single-band infrared (IR) camera into a two-color or dual-band device has been developed for application to high-speed IR thermography on the National Spherical Tokamak Experiment (NSTX). Temperature measurement with two-band infrared imaging has the advantage of being mostly independent of surface emissivity, which may vary significantly in the liquid lithium divertor installed on NSTX as compared to that of an all-carbon first wall. In order to take advantage of the high-speed capability of the existing IR camera at NSTX (1.6-6.2 kHz frame rate), a commercial visible-range optical splitter was extensively modified to operate in the medium wavelength and long wavelength IR. This two-band IR adapter utilizes a dichroic beamsplitter, which reflects 4-6 μm wavelengths and transmits 7-10 μm wavelength radiation, each with >95% efficiency and projects each IR channel image side-by-side on the camera's detector. Cutoff filters are used in each IR channel, and ZnSe imaging optics and mirrors optimized for broadband IR use are incorporated into the design. In-situ and ex-situ temperature calibration and preliminary data of the NSTX divertor during plasma discharges are presented, with contrasting results for dual-band vs. single-band IR operation.
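The emissivity independence mentioned above comes from taking the ratio of the two band radiances: for a gray body the emissivity cancels, and in the Wien approximation the ratio can be inverted for temperature in closed form. A sketch of that two-color inversion, using placeholder effective wavelengths for the 4-6 μm and 7-10 μm channels (not calibrated values from the instrument):

    import numpy as np

    C2 = 1.4388e-2  # second radiation constant, m K

    def two_color_temperature(L1, L2, lam1=5.0e-6, lam2=8.5e-6):
        # L1, L2: spectral radiances in the two bands; lam1, lam2: effective
        # band wavelengths. In the Wien approximation
        # L ~ eps * lam**-5 * exp(-C2 / (lam * T)), so a wavelength-
        # independent emissivity eps cancels in the ratio L1/L2.
        ratio = np.asarray(L1, dtype=float) / np.asarray(L2, dtype=float)
        return C2 * (1.0 / lam2 - 1.0 / lam1) / np.log(ratio * (lam1 / lam2) ** 5)

In practice the detector counts in each half of the split image must first be converted to radiance using the in-situ and ex-situ calibrations described in the paper.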
NASA Technical Reports Server (NTRS)
2002-01-01
Like dancers pirouetting in opposite directions, the rotational patterns of two different tropical storms are contrasted in this pair of Multi-angle Imaging Spectroradiometer (MISR) nadir-camera images. The left-hand image is of Tropical Storm Bud, acquired on June 17, 2000 (Terra orbit 2656) as the storm was dissipating. Bud was situated in the eastern Pacific Ocean between Socorro Island and the southern tip of Baja California. South of the storm's center is a vortex pattern caused by obstruction of the prevailing flow by tiny Socorro Island. Sonora, Mexico and Baja California are visible at the top of the image. The right-hand image is of Tropical Cyclone Dera, acquired on March 12, 2001. Dera was located in the Indian Ocean, south of Madagascar. The southern end of this large island is visible in the top portion of this image. Northern hemisphere tropical storms, like Bud, rotate in a counterclockwise direction, whereas those in the southern hemisphere, such as Dera, rotate clockwise. The opposite spins are a consequence of Earth's rotation. Each image covers a swath approximately 380 kilometers wide. Image courtesy NASA/JPL/GSFC/LaRC, MISR Team
Visualization of Subsurface Defects in Composites using a Focal Plane Array Infrared Camera
NASA Technical Reports Server (NTRS)
Plotnikov, Yuri A.; Winfree, William P.
1999-01-01
A technique for enhanced defect visualization in composites via transient thermography is presented in this paper. The effort targets automated defect map construction for multiple defects located in the observed area. Experimental data were collected on composite panels of different thickness with square inclusions and flat-bottom holes of different depth and orientation. The time evolution of the thermal response and spatial thermal profiles are analyzed. The pattern generated by carbon fibers and the vignetting effect of the focal plane array camera make defect visualization difficult. An improvement in defect visibility is achieved by the pulse phase technique and spatial background treatment. The relationship between the size of a defect and its reconstructed image is analyzed as well. An image processing technique for noise reduction is discussed.
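The pulse phase technique referred to here is typically a per-pixel Fourier transform of the cooling sequence, with the phase of a low-frequency bin used as the defect map. A minimal sketch of that step (the bin index would be tuned to the panel thickness and acquisition rate):

    import numpy as np

    def pulse_phase_image(sequence, bin_index=1):
        # sequence: (T, H, W) surface-temperature evolution after the flash.
        # The phase of a low-frequency FFT bin is largely insensitive to
        # non-uniform heating and to the vignetting of the focal plane array,
        # which is why defects stand out better than in raw contrast images.
        spectrum = np.fft.fft(np.asarray(sequence, dtype=float), axis=0)
        return np.angle(spectrum[bin_index])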
Off-axis digital holographic camera for quantitative phase microscopy.
Monemhaghdoust, Zahra; Montfort, Frédéric; Emery, Yves; Depeursinge, Christian; Moser, Christophe
2014-06-01
We propose and experimentally demonstrate a digital holographic camera which can be attached to the camera port of a conventional microscope for obtaining digital holograms in a self-reference configuration, under short coherence illumination and in a single shot. A thick holographic grating filters the beam containing the sample information in two dimensions through diffraction. The filtered beam creates the reference arm of the interferometer. The spatial filtering method, based on the high angular selectivity of the thick grating, reduces the alignment sensitivity to angular displacements compared with pinhole-based Fourier filtering. The addition of a thin holographic grating alters the coherence plane tilt introduced by the thick grating so as to create high-visibility interference over the entire field of view. The acquired full-field off-axis holograms are processed to retrieve the amplitude and phase information of the sample. The system produces phase images of cheek cells qualitatively similar to phase images extracted with a standard commercial DHM.
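Retrieving amplitude and phase from a single off-axis hologram is normally done in the Fourier domain: isolate the +1-order sideband created by the off-axis carrier, re-center it, and inverse-transform. A generic sketch of that reconstruction (the carrier offset and crop radius would come from the instrument geometry, and this is the standard method rather than the authors' specific pipeline):

    import numpy as np

    def reconstruct_off_axis(hologram, carrier_rc, radius_px):
        # carrier_rc: (row, col) offset of the +1-order sideband from the
        # spectrum center (set by the off-axis carrier frequency);
        # radius_px: crop radius around that sideband.
        H = np.fft.fftshift(np.fft.fft2(np.asarray(hologram, dtype=float)))
        h, w = hologram.shape
        cy, cx = h // 2 + carrier_rc[0], w // 2 + carrier_rc[1]
        yy, xx = np.mgrid[0:h, 0:w]
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_px ** 2
        sideband = np.where(mask, H, 0.0)
        # Re-center the sideband, then inverse-transform to the complex field.
        sideband = np.roll(sideband, (h // 2 - cy, w // 2 - cx), axis=(0, 1))
        field = np.fft.ifft2(np.fft.ifftshift(sideband))
        return np.abs(field), np.angle(field)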
A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA DETAILS
NASA Technical Reports Server (NTRS)
2002-01-01
These are two views of a highly active region of star birth located northeast of the central cluster, R136, in 30 Doradus. The orientation and scale are identical for both views. The top panel is a composite of images in two colors taken with the Hubble Space Telescope's visible-light camera, the Wide Field and Planetary Camera 2 (WFPC2). The bottom panel is a composite of pictures taken through three infrared filters with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). In both cases the colors of the displays were chosen to correlate with the nebula's and stars' true colors. Seven very young objects are identified with numbered arrows in the infrared image. Number 1 is a newborn, compact cluster dominated by a triple system of 'hefty' stars. It has formed within the head of a massive dust pillar pointing toward R136. The energetic outflows from R136 have shaped the pillar and triggered the collapse of clouds within its summit to form the new stars. The radiation and outflows from these new stars have in turn blown off the top of the pillar, so they can be seen in the visible-light as well as the infrared image. Numbers 2 and 3 also pinpoint newborn stars or stellar systems inside an adjacent, bright-rimmed pillar, likewise oriented toward R136. These objects are still immersed within their natal dust and can be seen only as very faint, red points in the visible-light image. They are, however, among the brightest objects in the infrared image, since dust does not block infrared light as much as visible light. Thus, numbers 2 and 3 and number 1 correspond respectively to two successive stages in the birth of massive stars. Number 4 is a very red star that has just formed within one of several very compact dust clouds nearby. Number 5 is another very young triple-star system with a surrounding cluster of fainter stars. They also can be seen in the visible-light picture. Most remarkable are the glowing patches numbered 6 and 7, which astronomers have interpreted as 'impact points' produced by twin jets of material slamming into surrounding dust clouds. These 'impact points' are perfectly aligned on opposite sides of number 5 (the triple-star system), and each is separated from the star system by about 5 light-years. The jets probably originate from a circumstellar disk around one of the young stars in number 5. They may be rotating counterclockwise, thus producing moving, luminous patches on the surrounding dust, like a searchlight creating spots on clouds. These infrared patches produced by jets from a massive, young star are a new astronomical phenomenon. Credits for NICMOS image: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barba' (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)
Southern Florida's River of Grass
NASA Technical Reports Server (NTRS)
2002-01-01
Florida's Everglades is a region of broad, slow-moving sheets of water flowing southward over low-lying areas from Lake Okeechobee to the Gulf of Mexico. In places this remarkable 'river of grass' is 80 kilometers wide. These images from the Multi-angle Imaging SpectroRadiometer show the Everglades region on January 16, 2002. Each image covers an area measuring 191 kilometers x 205 kilometers. The data were captured during Terra orbit 11072. On the left is a natural color view acquired by MISR's nadir camera. A portion of Lake Okeechobee is visible at the top, to the right of image center. South of the lake, whose name derives from the Seminole word for 'big water,' an extensive region of farmland known as the Everglades Agricultural Area is recognizable by its many clustered squares. Over half of the sugar produced in the United States is grown here. Urban areas along the east coast and in the northern part of the image extend to the boundaries of Big Cypress Swamp, situated north of Everglades National Park. The image on the right combines red-band data from the 46-degree backward, nadir, and 46-degree forward-viewing camera angles to create a red, green, blue false-color composite. One of the interesting uses of the composite image is for detecting surface water. Wet surfaces appear blue in this rendition because sun glitter produces a greater signal at the forward camera's view angle. Wetlands visible in these images include a series of shallow impoundments called Water Conservation Areas, which were built to speed water flow through the Everglades in times of drought. In parts of the Everglades, these levees and extensive systems such as the Miami and Tamiami Canals have altered the natural cycles of water flow. For example, the water volume of the Shark River Slough, a natural wetland which feeds Everglades National Park, is influenced by the Tamiami Canal. The unique and intrinsic value of the Everglades is now widely recognized, and efforts to restore the natural water cycles are underway.
2009-04-16
ISS019-E-007253 (16 April 2009) --- Astronaut Michael Barratt, Expedition 19/20 flight engineer, performs Agricultural Camera (AgCam) setup and activation in the Destiny laboratory of the International Space Station. AgCam takes frequent images, in visible and infrared light, of vegetated areas on Earth, such as farmland, rangeland, grasslands, forests and wetlands in the northern Great Plains and Rocky Mountain regions of the United States. Images will be delivered directly to requesting farmers, ranchers, foresters, natural resource managers and tribal officials to help improve environmental stewardship.
Imaging method for monitoring delivery of high dose rate brachytherapy
Weisenberger, Andrew G; Majewski, Stanislaw
2012-10-23
A method for in-situ monitoring of both the balloon/cavity and the radioactive source in brachytherapy treatment, utilizing at least one pair of miniature gamma cameras to acquire separate images of: 1) the radioactive source as it is moved in the tumor volume during brachytherapy; and 2) a relatively low-intensity radiation source produced either by an injected radiopharmaceutical rendering cancerous tissue visible or by a radioactive solution filling a balloon surgically implanted into the cavity formed by the surgical resection of a tumor.
A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.
2003-06-01
The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.
NASA Technical Reports Server (NTRS)
2005-01-01
Here is the martian twilight sky at Gusev crater, as imaged by the panoramic camera on NASA's Mars Exploration Rover Spirit around 6:20 in the evening of the rover's 464th martian day, or sol (April 23, 2005). Spirit was commanded to stay awake briefly after sending that sol's data to Mars Odyssey at sunset. This small panorama of the western sky was obtained using the camera's 750-nanometer, 530-nanometer and 430-nanometer color filters. This filter combination allows false color images to be generated that are similar to what a human would see, but with the colors exaggerated. In this image, the bluish glow in the sky above where the Sun had just set would be visible to us if we were there, but the redness of the sky farther from the sunset is exaggerated compared to the daytime colors of the martian sky. These kinds of images are beautiful and evocative, but they also have important scientific purposes. Specifically, twilight images are occasionally acquired by the science team to determine how high into the atmosphere the martian dust extends, and to look for dust or ice clouds. Other images have shown that the twilight glow remains visible, but increasingly fainter, for up to two hours before sunrise or after sunset. The long martian twilight compared to Earth's is caused by sunlight scattered around to the night side of the planet by abundant high altitude dust. Similar long twilights or extra-colorful sunrises and sunsets sometimes occur on Earth when tiny dust grains that are erupted from powerful volcanoes scatter light high in the atmosphere. These kinds of twilight images are also more sensitive to faint cloud structures, though none were detected when these images were acquired. Clouds have been rare at Gusev crater during Spirit's 16-month mission so far.
HUBBLE SPIES BROWN DWARFS IN NEARBY STELLAR NURSERY
NASA Technical Reports Server (NTRS)
2002-01-01
Probing deep within a neighborhood stellar nursery, NASA's Hubble Space Telescope uncovered a swarm of newborn brown dwarfs. The orbiting observatory's near-infrared camera revealed about 50 of these objects throughout the Orion Nebula's Trapezium cluster [image at right], about 1,500 light-years from Earth. Appearing like glistening precious stones surrounding a setting of sparkling diamonds, more than 300 fledgling stars and brown dwarfs surround the brightest, most massive stars [center of picture] in Hubble's view of the Trapezium cluster's central region. All of the celestial objects in the Trapezium were born together in this hotbed of star formation. The cluster is named for the trapezoidal alignment of those central massive stars. Brown dwarfs are gaseous objects with masses so low that their cores never become hot enough to fuse hydrogen, the thermonuclear fuel stars like the Sun need to shine steadily. Instead, these gaseous objects fade and cool as they grow older. Brown dwarfs around the age of the Sun (5 billion years old) are very cool and dim, and therefore are difficult for telescopes to find. The brown dwarfs discovered in the Trapezium, however, are youngsters (1 million years old). So they're still hot and bright, and easier to see. This finding, along with observations from ground-based telescopes, is further evidence that brown dwarfs, once considered exotic objects, are nearly as abundant as stars. The image and results appear in the Sept. 20 issue of the Astrophysical Journal. The brown dwarfs are too dim to be seen in a visible-light image taken by the Hubble telescope's Wide Field and Planetary Camera 2 [picture at left]. This view also doesn't show the assemblage of infant stars seen in the near-infrared image. That's because the young stars are embedded in dense clouds of dust and gas. The Hubble telescope's near-infrared camera, the Near Infrared Camera and Multi-Object Spectrometer, penetrated those clouds to capture a view of those objects. The brown dwarfs are the faintest objects in the image. Surveying the cluster's central region, the Hubble telescope spied brown dwarfs with masses equaling 10 to 80 Jupiters. Researchers think there may be less massive brown dwarfs that are beyond the limits of Hubble's vision. The near-infrared image was taken Jan. 17, 1998. Two near-infrared filters were used to obtain information on the colors of the stars at two wavelengths (1.1 and 1.6 microns). The Trapezium picture is 1 light-year across. This composite image was made from a 'mosaic' of nine separate, but adjoining images. In this false-color image, blue corresponds to warmer, more massive stars; red to cooler, less massive stars, brown dwarfs, and stars that are heavily obscured by dust. The visible-light data were taken in 1994 and 1995. Credits for near-infrared image: NASA; K.L. Luhman (Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass.); and G. Schneider, E. Young, G. Rieke, A. Cotera, H. Chen, M. Rieke, R. Thompson (Steward Observatory, University of Arizona, Tucson, Ariz.) Credits for visible-light picture: NASA, C.R. O'Dell and S.K. Wong (Rice University)
NASA Astrophysics Data System (ADS)
Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.
2017-12-01
Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), ISS-Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (approximately 10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and GLM to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see if leader speeds may be accurately calculated under certain circumstances.
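The core reverse-geolocation step can be illustrated with a short sketch. This is not the authors' toolkit; the spherical-Earth simplification, function names, and ECEF inputs are assumptions (a real implementation would use an ellipsoid and the full attitude quaternions).

```python
# Hypothetical sketch: intersect a camera line-of-sight ray from the ISS
# with a spherical Earth to find the ground point an image pixel sees.
import numpy as np

R_EARTH = 6371.0  # mean Earth radius, km (spherical-Earth assumption)

def ray_to_ground(iss_pos_ecef, look_dir_ecef):
    """Return (lat_deg, lon_deg) where the ray hits the sphere, or None."""
    p = np.asarray(iss_pos_ecef, dtype=float)
    d = np.asarray(look_dir_ecef, dtype=float)
    d = d / np.linalg.norm(d)
    # Solve |p + t*d|^2 = R^2 for the nearest positive root t.
    b = 2.0 * np.dot(p, d)
    c = np.dot(p, p) - R_EARTH**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the Earth entirely
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0:
        return None  # intersection lies behind the camera
    g = p + t * d
    lat = np.degrees(np.arcsin(g[2] / R_EARTH))
    lon = np.degrees(np.arctan2(g[1], g[0]))
    return lat, lon

# ISS at ~400 km altitude over (0 N, 0 E), looking straight down -> (0.0, 0.0).
print(ray_to_ground([R_EARTH + 400.0, 0.0, 0.0], [-1.0, 0.0, 0.0]))
```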
NASA Astrophysics Data System (ADS)
Egli, Pascal; Mankoff, Ken; Mettra, François; Lane, Stuart
2017-04-01
This study investigates the application of feature tracking algorithms to monitoring of glacier uplift. Several publications have confirmed the occurrence of an uplift of the glacier surface in the late morning hours of the mid to late ablation season. This uplift is thought to be caused by high sub-glacial water pressures at the onset of melt caused by overnight-deposited sediment that blocks subglacial channels. We use time-lapse images from a camera mounted in front of the glacier tongue of Haut Glacier d'Arolla during August 2016 in combination with a Digital Elevation Model and GPS measurements in order to investigate the phenomenon of glacier uplift using the feature tracking toolbox ImGRAFT. Camera position is corrected for all images and the images are geo-rectified using Ground Control Points visible in every image. Changing lighting conditions due to different sun angles create substantial noise and complicate the image analysis. A small glacier uplift of the order of 5 cm over a time span of 3 hours may be observed on certain days, confirming previous research.
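As an illustration of the tracking step only (ImGRAFT itself is a MATLAB toolbox and is not reproduced here), the sketch below tracks a patch between two time-lapse frames by normalized cross-correlation with OpenCV. The file names and window sizes are assumptions, and converting the pixel displacement into centimeters of uplift requires the geo-rectification described above.

```python
# Minimal patch tracking between two frames via normalized cross-correlation.
import cv2

def track_patch(img_a, img_b, x, y, half=25, search=60):
    """Track the patch centered at (x, y) in img_a into img_b.
    Returns the (dx, dy) displacement in pixels."""
    tpl = img_a[y - half:y + half + 1, x - half:x + half + 1]
    win = img_b[y - search:y + search + 1, x - search:x + search + 1]
    res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)  # best-match top-left in the window
    dx = max_loc[0] + half - search
    dy = max_loc[1] + half - search
    return dx, dy

a = cv2.imread("arolla_0900.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder names
b = cv2.imread("arolla_1200.jpg", cv2.IMREAD_GRAYSCALE)
print(track_patch(a, b, x=800, y=600))
```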
2014-06-07
ISS040-E-008307 (7 June 2014) --- One of the members of the Expedition 40 crew aboard the International Space Station aimed a camera "around" the docked Russian Soyuz vehicle to record this night image of the United Arab Emirates. Dubai (center) and Abu Dhabi (left) are easily identified. The Strait of Hormuz is at right and the coast of Iran is barely visible in upper right.
Quantifying Forest Ground Flora Biomass Using Close-range Remote Sensing
Paul F. Doruska; Robert C. Weih; Matthew D. Lane; Don C. Bragg
2005-01-01
Close-range remote sensing was used to estimate biomass of forest ground flora in Arkansas. Digital images of a series of 1-m² plots were taken using Kodak DCS760 and Kodak DCS420CIR digital cameras. ESRI ArcGIS and ERDAS Imagine® software was used to calculate the Normalized Difference Vegetation Index (NDVI) and the Average Visible...
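NDVI itself is a simple band ratio; a minimal sketch, assuming two co-registered arrays for the near-infrared and red bands:

```python
# NDVI = (NIR - Red) / (NIR + Red); greener vegetation pushes values
# toward 1, while bare soil sits near 0. Array names are illustrative.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # values in [-1, 1]
```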
PRIMAS: a real-time 3D motion-analysis system
NASA Astrophysics Data System (ADS)
Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans
1994-03-01
The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
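The final 3D reconstruction step can be illustrated with standard linear triangulation from two calibrated views. This is a generic sketch, not the PRIMAS implementation; the 3x4 projection matrices P1 and P2 are assumed to come from the calibration procedure.

```python
# Linear (DLT) triangulation of one matched marker from two camera views.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Return the 3D marker position from its 2D images uv1 and uv2."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```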
Impact of New Camera Technologies on Discoveries in Cell Biology.
Stuurman, Nico; Vale, Ronald D
2016-08-01
New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.
NASA Astrophysics Data System (ADS)
Oldenbürger, S.; Brandt, C.; Brochard, F.; Lemoine, N.; Bonhomme, G.
2010-06-01
Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilbert, B.; Chiaberge, M.; Kotyla, J. P.
2016-07-01
We present new rest-frame UV and visible observations of 22 high-z (1 < z < 2.5) 3C radio galaxies and QSOs obtained with the Hubble Space Telescope's Wide Field Camera 3 instrument. Using a custom data reduction strategy in order to assure the removal of cosmic rays, persistence signal, and other data artifacts, we have produced high-quality science-ready images of the targets and their local environments. We observe targets with regions of UV emission suggestive of active star formation. In addition, several targets exhibit highly distorted host galaxy morphologies in the rest frame visible images. Photometric analyses reveal that brighter QSOs generally tend to be redder than their dimmer counterparts. Using emission line fluxes from the literature, we estimate that emission line contamination is relatively small in the rest frame UV images for the QSOs. Using archival VLA data, we have also created radio map overlays for each of our targets, allowing for analysis of the optical and radio axes alignment.
Verri, G
2009-06-01
The photo-induced luminescence properties of Egyptian blue, Han blue and Han purple were investigated by means of near-infrared digital imaging. These pigments emit infrared radiation when excited in the visible range. The emission can be recorded by means of a modified commercial digital camera equipped with suitable glass filters. A variety of visible light sources were investigated to test their ability to excite luminescence in the pigments. Light-emitting diodes, which do not emit stray infrared radiation, proved an excellent source for the excitation of luminescence in all three compounds. In general, the use of visible radiation emitters with low emission in the infrared range allowed the presence of the pigments to be determined and their distribution to be spatially resolved. This qualitative imaging technique can be easily applied in situ for a rapid characterisation of materials. The results were compared to those for Egyptian green and for historical and modern blue pigments. Examples of the application of the technique on polychrome works of art are presented.
True Ortho Generation of Urban Area Using High Resolution Aerial Photos
NASA Astrophysics Data System (ADS)
Hu, Yong; Stanley, David; Xin, Yubin
2016-06-01
The pros and cons of existing methods for true ortho generation are analyzed based on a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. These methods process frame and pushbroom images using different algorithms for visibility analysis, due to the need for perspective centers in z-buffer (or similar) techniques. For occlusion compensation, the pixel-based approach likely results in excessive seamlines in the ortho-rectified images due to the use of a quality measure on a pixel-by-pixel basis. In this paper, we propose innovative solutions to tackle the aforementioned problems. For visibility analysis, an elevation buffer technique is introduced that employs plain elevations instead of the distances from perspective centers used by the z-buffer, and has the advantage of sensor independence. A segment-oriented strategy is developed to evaluate a plain cost measure per segment for occlusion compensation instead of the tedious quality rating per pixel. The cost measure directly evaluates the imaging geometry characteristics in ground space, and is also sensor independent. Experimental results are demonstrated using aerial photos acquired by an UltraCam camera.
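A toy sketch of the elevation-buffer idea follows: when several DSM cells project to the same image pixel, only the highest one is kept and the rest are marked occluded. The `project` callback stands in for the sensor model and is an assumption, not the authors' implementation.

```python
# Elevation-buffer visibility: keep the highest DSM cell per image pixel.
import numpy as np

def visibility_mask(dsm, project, img_shape):
    """dsm: (rows, cols) elevations; project(r, c, z) -> (u, v) image pixel.
    Returns a boolean array, True where the DSM cell is visible."""
    zbuf = np.full(img_shape, -np.inf)          # best elevation per pixel
    owner = np.full(img_shape, -1, dtype=int)   # flat index of winning cell
    rows, cols = dsm.shape
    for r in range(rows):
        for c in range(cols):
            u, v = project(r, c, dsm[r, c])
            if 0 <= u < img_shape[0] and 0 <= v < img_shape[1]:
                if dsm[r, c] > zbuf[u, v]:
                    zbuf[u, v] = dsm[r, c]
                    owner[u, v] = r * cols + c
    visible = np.zeros_like(dsm, dtype=bool)
    idx = owner[owner >= 0]
    visible[np.unravel_index(idx, dsm.shape)] = True
    return visible
```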
C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors
NASA Astrophysics Data System (ADS)
Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David
2018-02-01
After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to SWIR fast cameras with the development of the C-RED One and the C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible thanks to the use of an e-APD infrared focal plane array, which is a truly disruptive technology in imagery. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We will show its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. The camera is called C-RED 2. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
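A hedged sketch of the weighted-HOG idea: extract HOG descriptors per image region and scale each region's descriptor by its quality score. The horizontal-band split and the per-band quality inputs are illustrative assumptions, not the paper's exact quality measure.

```python
# Weighted HOG: down-weight descriptors from low-quality (e.g., background-
# dominated) regions before concatenating them into one feature vector.
import numpy as np
from skimage.feature import hog

def weighted_hog(gray, qualities, n_bands=4):
    """gray: 2D array; qualities: one score in [0, 1] per horizontal band."""
    bands = np.array_split(gray, n_bands, axis=0)
    feats = []
    for band, q in zip(bands, qualities):
        f = hog(band, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
        feats.append(q * f)  # quality score scales the band's descriptor
    return np.concatenate(feats)
```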
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
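The second method can be sketched as an ordinary least-squares fit of an affine transform to the control point pairs; the point coordinates below are illustrative.

```python
# Estimate a 2x3 affine mapping from matched control points by regression.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) matched control points. Returns a 2x3 affine matrix."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])       # rows are [x, y, 1]
    # Solve A @ M.T ~= dst in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

src = np.array([[10, 12], [200, 15], [190, 240], [20, 230]], float)
dst = np.array([[14, 18], [205, 22], [196, 244], [25, 235]], float)
M = fit_affine(src, dst)  # feed to e.g. cv2.warpAffine to resample one band
```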
Prasad, Ankush; Pospíšil, Pavel
2012-08-01
Solar radiation that reaches Earth's surface can have severe negative consequences for organisms. Both visible light and ultraviolet A (UVA) radiation are known to initiate the formation of reactive oxygen species (ROS) in human skin by photosensitization reactions (types I and II). In the present study, we investigated the role of visible light and UVA radiation in the generation of ROS on the dorsal and the palmar side of a hand. The ROS are known to oxidize biomolecules such as lipids, proteins, and nucleic acids to form electronically excited species, finally leading to ultraweak photon emission. We have employed a highly sensitive charge coupled device camera and a low-noise photomultiplier tube for detection of two-dimensional and one-dimensional ultraweak photon emission, respectively. Our experimental results show that oxidative stress is generated by the exposure of human skin to visible light and UVA radiation. The oxidative stress generated by UVA radiation was found to be significantly higher than that generated by visible light. Two-dimensional photon imaging can serve as a potential tool for monitoring the oxidative stress in the human skin induced by various stress factors irrespective of their physical or chemical nature.
Improved Fast, Deep Record Length, Time-Resolved Visible Spectroscopy of Plasmas Using Fiber Grids
NASA Astrophysics Data System (ADS)
Brockington, S.; Case, A.; Cruz, E.; Williams, A.; Witherspoon, F. D.; Horton, R.; Klauser, R.; Hwang, D.
2017-10-01
HyperV Technologies is developing a fiber-coupled, deep-record-length, low-light camera head for performing high time resolution spectroscopy on visible emission from plasma events. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified silicon photomultipliers, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. A second-generation prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX). Pixel performance of 10 megaframes/s with record lengths of up to 256,000 frames (approximately 25.6 milliseconds) was achieved. Pixel resolution was 12 bits. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. Experimental results will be discussed, along with future plans for this diagnostic. Work supported by USDOE SBIR Grant DE-SC0013801.
Spitzer Makes Invisible Visible
2004-04-13
Hidden behind a shroud of dust in the constellation Cygnus is a stellar nursery called DR21, which is giving birth to some of the most massive stars in our galaxy. Visible light images reveal no trace of this interstellar cauldron because of heavy dust obscuration. In fact, visible light is attenuated in DR21 by a factor of more than 10,000,000,000,000,000,000,000,000,000,000,000,000,000 (ten thousand trillion heptillion). New images from NASA's Spitzer Space Telescope allow us to peek behind the cosmic veil and pinpoint one of the most massive natal stars yet seen in our Milky Way galaxy. The never-before-seen star is 100,000 times as bright as the Sun. Also revealed for the first time is a powerful outflow of hot gas emanating from this star and bursting through a giant molecular cloud. The colorful image is a large-scale composite mosaic assembled from data collected at a variety of different wavelengths. Views at visible wavelengths appear blue, near-infrared light is depicted as green, and mid-infrared data from the InfraRed Array Camera (IRAC) aboard NASA's Spitzer Space Telescope is portrayed as red. The result is a contrast between structures seen in visible light (blue) and those observed in the infrared (yellow and red). A quick glance shows that most of the action in this image is revealed to the unique eyes of Spitzer. The image covers an area about two times that of a full moon. http://photojournal.jpl.nasa.gov/catalog/PIA05734
Southern California Wildfires Observed by NASA MISR
2016-06-24
The Los Angeles area is currently suffering the effects of three major wildfires that are blanketing the area with smoke. Over the past few days, Southern California has experienced record-breaking temperatures, topping 110 degrees Fahrenheit in some cities. The heat, in combination with offshore winds, helped to stoke the Sherpa Fire west of Santa Barbara, which has been burning since June 15, 2016. Over the weekend of June 18-19, this fire rapidly expanded in size, forcing freeway closures and evacuations of campgrounds and state beaches. On Monday, June 20, two new fires ignited in the San Gabriel Mountains north of Azusa and Duarte, together dubbed the San Gabriel Complex Fire. They have burned more than 4,900 acres since June 20, sending up plumes of smoke visible to many in the Los Angeles basin and triggering air quality warnings. More than 1,400 personnel have been battling the blazes in the scorching heat, and evacuations were ordered for neighborhoods in the foothills. On June 21, the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite captured this view of the San Gabriel Mountains and Los Angeles Basin from its 46-degree forward-viewing camera, which enhances the visibility of the smoke compared to the more conventional nadir (vertical) view. The width of this image is about 75 miles (120 kilometers) across. Smoke from the San Gabriel Complex Fire is visible at the very right of the image. Stereoscopic analysis of MISR's multiple camera angles is used to compute the height of the smoke plume from the San Gabriel Complex Fire. In the right-hand image, these heights are superimposed on the underlying image. The color scale shows that the plume is not much higher than the surrounding mountains. As a result, much of the smoke is confined to the local area. http://photojournal.jpl.nasa.gov/catalog/PIA20718
Physics-based approach to color image enhancement in poor visibility conditions.
Tan, K K; Oakley, J P
2001-10-01
Degradation of images by the atmosphere is a familiar problem. For example, when terrain is imaged from a forward-looking airborne camera, the atmosphere degradation causes a loss in both contrast and color information. Enhancement of such images is a difficult task because of the complexity in restoring both the luminance and the chrominance while maintaining good color fidelity. One particular problem is the fact that the level of contrast loss depends strongly on wavelength. A novel method is presented for the enhancement of color images. This method is based on the underlying physics of the degradation process, and the parameters required for enhancement are estimated from the image itself.
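To make the underlying physics concrete, the sketch below inverts the usual single-scattering degradation model I = J·t + A·(1 − t) per color channel. It only illustrates the model: the airlight and transmission values must be estimated from the image itself, and the authors' estimation method is not reproduced here.

```python
# Invert the atmospheric scattering model for one color channel. Because
# contrast loss is wavelength dependent, each channel gets its own
# airlight A and transmission t (illustrative values, not estimated here).
import numpy as np

def dehaze_channel(I, airlight, t):
    """I: observed channel in [0, 1]; airlight: scalar A; t: transmission."""
    t = max(t, 0.1)  # avoid amplifying noise where transmission is tiny
    J = (I - airlight * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)

# Example: the blue channel scatters most, so give it lower transmission:
# out = np.stack([dehaze_channel(img[..., k], A[k], t[k]) for k in range(3)], -1)
```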
JunoCam: Outreach and Science Opportunities
NASA Astrophysics Data System (ADS)
Hansen, Candice; Ingersoll, Andy; Caplinger, Mike; Ravine, Mike; Orton, Glenn
2014-11-01
JunoCam is a visible imager on the Juno spacecraft en route to Jupiter. Although the primary role of the camera is for outreach, science objectives will be addressed too. JunoCam is a wide angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm. Juno's elliptical polar orbit will offer unique views of Jupiter's polar regions with a spatial scale of ~50 km/pixel. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired. Stereo images and images taken with the methane filter will allow us to estimate cloudtop heights. Resolution exceeds that of Cassini about an hour from closest approach, and at closest approach images will have a spatial scale of ~3 km/pixel. JunoCam is a push-frame imager on a rotating spacecraft. The use of time-delayed integration takes advantage of the spacecraft spin to build up signal. JunoCam will acquire limb-to-limb views of Jupiter during a spacecraft rotation, and has the possibility of acquiring images of the rings from in-between Jupiter and the inner edge of the rings. Galilean satellite views will be fairly distant but some images will be acquired. Outer irregular satellites and small ring moons Metis and Adrastea will also be imaged. The theme of our outreach is "science in a fish bowl", with an invitation to the science community and the public to participate. Amateur astronomers will supply their ground-based images for planning, so that we can predict when prominent atmospheric features will be visible. With the aid of professional astronomers observing at infrared wavelengths, we'll predict when hot spots will be visible to JunoCam. Amateur image processing enthusiasts are onboard to create image products. Many of the earth flyby image products from Juno's earth gravity assist were processed by amateurs. Between planning and products comes the decision-making on which images to take, when, and why. We invite our colleagues to propose science questions for JunoCam to address, and to be part of the participatory process of deciding how to use our resources and scientifically analyze the data.
2017-11-27
These two images illustrate just how far Cassini traveled to get to Saturn. On the left is one of the earliest images Cassini took of the ringed planet, captured during the long voyage from the inner solar system. On the right is one of Cassini's final images of Saturn, showing the site where the spacecraft would enter the atmosphere on the following day. In the left image, taken in 2001, about six months after the spacecraft passed Jupiter for a gravity assist flyby, the best view of Saturn using the spacecraft's high-resolution (narrow-angle) camera was on the order of what could be seen using the Earth-orbiting Hubble Space Telescope. At the end of the mission (at right), from close to Saturn, even the lower resolution (wide-angle) camera could capture just a tiny part of the planet. The left image looks toward Saturn from 20 degrees below the ring plane and was taken on July 13, 2001 in wavelengths of infrared light centered at 727 nanometers using the Cassini spacecraft narrow-angle camera. The view at right is centered on a point 6 degrees north of the equator and was taken in visible light using the wide-angle camera on Sept. 14, 2017. The view on the left was acquired at a distance of approximately 317 million miles (510 million kilometers) from Saturn. Image scale is about 1,900 miles (3,100 kilometers) per pixel. The view at right was acquired at a distance of approximately 360,000 miles (579,000 kilometers) from Saturn. Image scale is 22 miles (35 kilometers) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21353
True 3-D View of 'Columbia Hills' from an Angle
NASA Technical Reports Server (NTRS)
2004-01-01
This mosaic of images from NASA's Mars Exploration Rover Spirit shows a panorama of the 'Columbia Hills' without any adjustment for rover tilt. When viewed through 3-D glasses, depth is much more dramatic and easier to see, compared with a tilt-adjusted version. This is because stereo views are created by producing two images, one corresponding to the view from the panoramic camera's left-eye camera, the other corresponding to the view from the panoramic camera's right-eye camera. The brain processes the visual input more accurately when the two images do not have any vertical offset. In this view, the vertical alignment is nearly perfect, but the horizon appears to curve because of the rover's tilt (because the rover was parked on a steep slope, it was tilted approximately 22 degrees to the west-northwest). Spirit took the images for this 360-degree panorama while en route to higher ground in the 'Columbia Hills.' The highest point visible in the hills is 'Husband Hill,' named for space shuttle Columbia Commander Rick Husband. To the right are the rover's tracks through the soil, where it stopped to perform maintenance on its right front wheel in July. In the distance, below the hills, is the floor of Gusev Crater, where Spirit landed Jan. 3, 2004, before traveling more than 3 kilometers (1.8 miles) to reach this point. This vista comprises 188 images taken by Spirit's panoramic camera from its 213th day, or sol, on Mars to its 223rd sol (Aug. 9 to 19, 2004). Team members at NASA's Jet Propulsion Laboratory and Cornell University spent several weeks processing images and producing geometric maps to stitch all the images together in this mosaic. The 360-degree view is presented in a cylindrical-perspective map projection with geometric seam correction.
NASA Technical Reports Server (NTRS)
2005-01-01
[Figure removed for brevity, see original site: context image for PIA03648, Ascraeus Mons.] After examining numerous THEMIS images and using the JMars targeting software, eighth grade students from Charleston Middle School in Charleston, IL, selected the location of -8.37N and 276.66E for capture by the THEMIS visible camera during Mars Odyssey's sixth orbit of Mars on Nov. 22, 2005. The students are investigating relationships between channels, craters, and basins on Mars. The Charleston Middle School students participated in the Mars Student Imaging Project (MSIP) and submitted a proposal to use the THEMIS visible camera. Image information: VIS instrument. Latitude 8.8S, Longitude 279.6E. 17 meter/pixel resolution. Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments
NASA Astrophysics Data System (ADS)
Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete
2002-04-01
New applications for ultra-violet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 x 1024, 18 um pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high frame rate CCDs and cameras will be described and results will be presented demonstrating high UV sensitivity down to 150 nm.
2015-10-08
Pluto's haze layer shows its blue color in this picture taken by the New Horizons Ralph/Multispectral Visible Imaging Camera (MVIC). The high-altitude haze is thought to be similar in nature to that seen at Saturn's moon Titan. The source of both hazes likely involves sunlight-initiated chemical reactions of nitrogen and methane, leading to relatively small, soot-like particles (called tholins) that grow as they settle toward the surface. This image was generated by software that combines information from blue, red and near-infrared images to replicate the color a human eye would perceive as closely as possible. http://photojournal.jpl.nasa.gov/catalog/PIA19964
Device and Method of Scintillating Quantum Dots for Radiation Imaging
NASA Technical Reports Server (NTRS)
Burke, Eric R. (Inventor); DeHaven, Stanton L. (Inventor); Williams, Phillip A. (Inventor)
2017-01-01
A radiation imaging device includes a radiation source and a microstructured detector comprising a material defining a surface that faces the radiation source. The material includes a plurality of discrete cavities having openings in the surface. The detector also includes a plurality of quantum dots disposed in the cavities. The quantum dots are configured to interact with radiation from the radiation source, and to emit visible photons that indicate the presence of radiation. A digital camera and optics may be used to capture images formed by the detector in response to exposure to radiation.
Measuring visibility using smartphones
NASA Astrophysics Data System (ADS)
Friesen, Jan; Bialon, Raphael; Claßen, Christoph; Graffi, Kalman
2017-04-01
Spatial information on fog density is an important parameter for ecohydrological studies in cloud forests. The Dhofar cloud forest in Southern Oman exhibits a close interaction between the fog, trees, and rainfall. During the three-month monsoon season the trees capture substantial amounts of horizontal precipitation from fog, which increases net precipitation below the tree canopy. As fog density measurements are scarce, a smartphone app was designed to measure visibility. Different smartphone models use different hardware components, so it is important to assess the developed visibility measurement across a suite of different devices. In this study we tested five smartphones/tablets (Google/LG Nexus 5X, Huawei P8 lite, Huawei Y3, HTC Nexus 9, and Samsung Galaxy S4 mini) against a digital camera (Sony DSLR-A900) and visual visibility observations. Visibility was assessed from photos using image entropy, from the number of visible targets, and from WiFi signal strength using RSSI. Results show clear relationships between object distance and fog density, yet a considerable spread across the different smartphone/tablet units is evident.
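The image-entropy metric can be sketched in a few lines: lower entropy in a fixed target region generally means fewer visible details and hence denser fog. This is a generic Shannon-entropy computation, not necessarily the app's exact formulation.

```python
# Shannon entropy (bits) of an 8-bit grayscale image or region.
import numpy as np

def image_entropy(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()     # normalize the histogram to probabilities
    p = p[p > 0]              # 0*log(0) is defined as 0, so drop empty bins
    return -np.sum(p * np.log2(p))
```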
Mask-to-wafer alignment system
Sweatt, William C.; Tichenor, Daniel A.; Haney, Steven J.
2003-11-04
A modified beam splitter that has a hole pattern that is symmetric in one axis and anti-symmetric in the other can be employed in a mask-to-wafer alignment device. The device is particularly suited for rough alignment using visible light. The modified beam splitter transmits and reflects light from a source of electromagnetic radiation, and it includes a substrate that has a first surface facing the source of electromagnetic radiation and a second surface that is reflective of said electromagnetic radiation. The substrate defines a hole pattern about a central line of the substrate. In operation, an input beam from a camera is directed toward the modified beam splitter, and the light from the camera that passes through the holes illuminates the reticle on the wafer. The light beam from the camera also projects an image of a corresponding reticle pattern that is formed on the surface of the mask positioned downstream from the camera. Alignment can be accomplished by detecting the radiation that is reflected from the second surface of the modified beam splitter, since the reflected radiation contains both the image of the pattern from the mask and a corresponding pattern on the wafer.
NASA Astrophysics Data System (ADS)
Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.
2015-03-01
In minimally invasive surgical interventions, direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems for guidance. These spatial localization and tracking systems function much like the Global Positioning System (GPS) with which we are all familiar. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and augment the video feed with computer-generated information, such as rendering of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and methodology for augmenting the real world with virtual models extracted from medical images to provide enhanced visualization beyond the surface view achieved using traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay according to fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams, it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
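The pose-recovery step can be sketched with OpenCV's solvePnP, assuming the intrinsics K and distortion d come from the calibration mentioned above. The fiducial coordinates and intrinsics below are illustrative placeholders, not the paper's values.

```python
# Recover camera pose from known 3D fiducials on the phantom and their
# detected 2D image locations, then project virtual points for overlay.
import cv2
import numpy as np

obj_pts = np.array([[0, 0, 0], [80, 0, 0], [80, 60, 0], [0, 60, 20]], np.float32)
img_pts = np.array([[322, 241], [510, 238], [515, 388], [318, 395]], np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)
d = np.zeros(5, np.float32)  # assume negligible distortion for this sketch

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, d)

# Project any virtual anatomy point into the live frame for augmentation.
virtual_pts = np.array([[40, 30, -15]], np.float32)
proj, _ = cv2.projectPoints(virtual_pts, rvec, tvec, K, d)
```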
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, while carbon nanotube (CNT) based nanotechnology, built on low-dimensional materials, has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for a fundamental understanding of the processes underlying CNT photoresponse, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In this research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humid environmental factors. The fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized by digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to realize the nano-sensor IR camera. To explore more of the infrared light field, compressive sensing algorithms were applied to light field sampling, 3-D imaging, and video sensing. The redundancy of the whole light field, including the angular images of the light field, binocular images for the 3-D camera, and temporal information of video streams, is extracted and expressed in a compressive framework, and computational algorithms then reconstruct images beyond 2D static information. Super-resolution signal processing was subsequently used to enhance and improve the spatial resolution. The whole camera system delivers deeply detailed content for infrared spectrum sensing.
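As a minimal illustration of the compressive-sensing reconstruction mentioned above (a generic sketch, not the thesis implementation), the code below recovers a sparse signal from a few random measurements with iterative soft thresholding (ISTA); the sizes and sparsity level are illustrative.

```python
# ISTA recovery of a k-sparse signal x from m << n measurements y = Phi @ x.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                     # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x_true

L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
x, lam = np.zeros(n), 0.01
for _ in range(500):
    g = Phi.T @ (Phi @ x - y)            # gradient of 0.5*||Phi x - y||^2
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("reconstruction error:", np.linalg.norm(x - x_true))
```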
Noisy Ocular Recognition Based on Three Convolutional Neural Networks.
Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung
2017-12-17
In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.
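A minimal sketch of how the three matchers might be combined at the score level (the paper uses one iris CNN and two periocular CNNs); the weights and threshold below are illustrative assumptions, not the paper's trained values.

```python
# Weighted score-level fusion of three matcher distances.
def fuse_scores(d_iris, d_left, d_right, w=(0.5, 0.25, 0.25)):
    """Smaller fused distance means a more likely genuine match."""
    return w[0] * d_iris + w[1] * d_left + w[2] * d_right

# Decide with a threshold chosen on a validation set.
is_match = fuse_scores(0.31, 0.44, 0.40) < 0.4
```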
NASA Astrophysics Data System (ADS)
Fuentes-Fernández, J.; Cuevas, S.; Watson, A. M.
2018-04-01
We present the optical design of COATLI, a two-channel visible imager for a commercial 50 cm robotic telescope. COATLI will deliver diffraction-limited images (approximately 0.3 arcsec FWHM) in the riz bands, inside a 4.2 arcmin field, and seeing-limited images (approximately 0.6 arcsec FWHM) in the B and g bands, inside a 5 arcmin field, by means of a tip-tilt mirror for fast guiding and a deformable mirror for active optics, both located on two optically transferred pupil planes. The optical design is based on two collimator-camera systems plus a pupil transfer relay, using achromatic doublets of CaF2 and S-FTM16 and one triplet of N-BK7 and CaF2. We discuss the efficiency, tolerancing, thermal behavior and ghosts. COATLI will be installed at the Observatorio Astronómico Nacional in Sierra San Pedro Mártir, Baja California, Mexico, in 2018.
NASA Technical Reports Server (NTRS)
2006-01-01
This false-color composite image shows the Cartwheel galaxy as seen by the Galaxy Evolution Explorer's far ultraviolet detector (blue); the Hubble Space Telescope's wide field and planetary camera 2 in B-band visible light (green); the Spitzer Space Telescope's infrared array camera at 8 microns (red); and the Chandra X-ray Observatory's advanced CCD imaging spectrometer-S array instrument (purple). Approximately 100 million years ago, a smaller galaxy plunged through the heart of Cartwheel galaxy, creating ripples of brief star formation. In this image, the first ripple appears as an ultraviolet-bright blue outer ring. The blue outer ring is so powerful in the Galaxy Evolution Explorer observations that it indicates the Cartwheel is one of the most powerful UV-emitting galaxies in the nearby universe. The blue color reveals to astronomers that associations of stars 5 to 20 times as massive as our sun are forming in this region. The clumps of pink along the outer blue ring are regions where both X-rays and ultraviolet radiation are superimposed in the image. These X-ray point sources are very likely collections of binary star systems containing a black hole (called massive X-ray binary systems). The X-ray sources seem to cluster around optical/ultraviolet-bright supermassive star clusters. The yellow-orange inner ring and nucleus at the center of the galaxy result from the combination of visible and infrared light, which is stronger towards the center. This region of the galaxy represents the second ripple, or ring wave, created in the collision, but has much less star formation activity than the first (outer) ring wave. The wisps of red spread throughout the interior of the galaxy are organic molecules that have been illuminated by nearby low-level star formation. Meanwhile, the tints of green are less massive, older visible-light stars. Although astronomers have not identified exactly which galaxy collided with the Cartwheel, two of three candidate galaxies can be seen in this image to the bottom left of the ring, one as a neon blob and the other as a green spiral. Previously, scientists believed the ring marked the outermost edge of the galaxy, but the latest GALEX observations detect a faint disk, not visible in this image, that extends to twice the diameter of the ring.
2001-04-04
Like dancers pirouetting in opposite directions, the rotational patterns of two different tropical storms are contrasted in this pair of MISR nadir-camera images. The left-hand image is of Tropical Storm Bud, acquired on June 17, 2000 (Terra orbit 2656) as the storm was dissipating. Bud was situated in the eastern Pacific Ocean between Socorro Island and the southern tip of Baja California. South of the storm's center is a vortex pattern caused by obstruction of the prevailing flow by tiny Socorro Island. Sonora, Mexico and Baja California are visible at the top of the image. The right-hand image is of Tropical Cyclone Dera, acquired on March 12, 2001 (Terra orbit 6552). Dera was located in the Indian Ocean, south of Madagascar. The southern end of this large island is visible in the top portion of this image. Northern hemisphere tropical storms, like Bud, rotate in a counterclockwise direction, whereas those in the southern hemisphere, such as Dera, rotate clockwise. The opposite spins are a consequence of Earth's rotation. Each image covers a swath approximately 380 kilometers wide. http://photojournal.jpl.nasa.gov/catalog/PIA03400
NASA Astrophysics Data System (ADS)
Cabib, Dario; Lavi, Moshe; Gil, Amir; Milman, Uri
2011-06-01
Since the early 1990s CI has been involved in the development of FTIR hyperspectral imagers based on a Sagnac or similar type of interferometer. CI also pioneered the commercialization of such hyperspectral imagers in those years. After having developed a visible version based on a CCD in the early 1990s (taken on by a spin-off company for biomedical applications) and a 3 to 5 micron infrared version based on a cooled InSb camera in 2008, it is now developing an LWIR version based on an uncooled camera for the 8 to 14 micron range. In this paper we will present design features and expected performance of the system. The instrument is designed to be rugged for field use and to yield a relatively high spectral resolution of 8 cm-1, an IFOV of 0.5 mrad, a 640x480 pixel spectral cube in less than a minute, and a noise equivalent spectral radiance of 40 nW/cm2/sr/cm-1 at 10 μm. The measured performance will be presented in a future paper.
NASA Astrophysics Data System (ADS)
Lu, Daren; Huo, Juan; Zhang, W.; Liu, J.
A series of satellite sensors operating at visible and infrared wavelengths has been successfully flown on a number of research satellites, e.g., NOAA/AVHRR and MODIS onboard Terra and Aqua, and many cloud and aerosol products have been produced and released in recent years. However, validating the quality and accuracy of these products remains a challenge for the atmospheric remote sensing community. In this paper, we suggest a ground-based validation scheme for satellite-derived cloud and aerosol products that uses combined visible and thermal infrared all-sky imaging together with surface meteorological observations. In the scheme, a visible digital camera with a fish-eye lens continuously monitors the whole sky with a view angle greater than 180 degrees. The camera system is calibrated both geometrically and radiometrically (broad blue, green, and red bands) so that a retrieval method can detect the spatial distribution of clear and cloudy sky and its temporal variation. A calibrated scanning thermal infrared thermometer monitors the all-sky brightness temperature distribution, and an algorithm detects clear and cloudy sky as well as cloud base height, using the sky brightness distribution and surface temperature and humidity as input. The composite retrievals of clear and cloudy sky distribution can then be used to validate satellite retrievals through both simultaneous comparisons and statistics. This talk presents the results of the field observations and comparisons completed in Beijing (40 deg N, 116.5 deg E) in 2003 and 2004. This work is supported by NSFC grant No. 4002700 and MOST grant No. 2001CCA02200.
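A standard clear/cloudy pixel classification for a calibrated RGB all-sky camera can be sketched with a red/blue ratio threshold, since clouds scatter red and blue nearly equally while the clear sky is strongly blue. The threshold value is an illustrative assumption that needs site-specific tuning, and this is not necessarily the authors' exact algorithm.

```python
# Red/blue ratio cloud mask for an all-sky RGB image.
import numpy as np

def cloud_mask(rgb, threshold=0.6):
    """rgb: (H, W, 3) array. Returns True where the pixel looks cloudy."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    ratio = r / np.maximum(b, 1e-6)   # clear sky: ratio well below 1
    return ratio > threshold

# Cloud fraction over the hemispherical field of view:
# fraction = cloud_mask(img)[sky_region].mean()
```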
Two Moons and the Pleiades from Mars
NASA Technical Reports Server (NTRS)
2005-01-01
[Figure removed for brevity, see original site: inverted image of two moons and the Pleiades from Mars.] Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit recently settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. In this view, the Pleiades, a star cluster also known as the 'Seven Sisters,' is visible in the lower left corner. The bright star Aldebaran and some of the stars in the constellation Taurus are visible on the right. Spirit acquired this image the evening of martian day, or sol, 590 (Aug. 30, 2005). The image on the right provides an enhanced-contrast view with annotation. Within the enhanced halo of light is an insert of an unsaturated view of Phobos taken a few images later in the same sequence. On Mars, Phobos would be easily visible to the naked eye at night, but would be only about one-third as large as the full Moon appears from Earth. Astronauts staring at Phobos from the surface of Mars would notice its oblong, potato-like shape and that it moves quickly against the background stars. Phobos takes only 7 hours, 39 minutes to complete one orbit of Mars. That is so fast, relative to the 24-hour-and-39-minute sol on Mars (the length of time it takes for Mars to complete one rotation), that Phobos rises in the west and sets in the east. Earth's moon, by comparison, rises in the east and sets in the west. The smaller martian moon, Deimos, takes 30 hours, 12 minutes to complete one orbit of Mars. That orbital period is longer than a martian sol, and so Deimos rises, like most solar system moons, in the east and sets in the west. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with the panoramic camera, using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
NASA Astrophysics Data System (ADS)
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-04-01
This paper presents an investigation of the expected uncertainties of a single-channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC Sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single-channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single-channel COT retrieval is feasible for EPIC. For ice clouds, single-channel retrieval errors are minimal (< 2 %) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 %, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
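With the effective radius fixed per phase, the forward model reduces to a one-dimensional, monotonic relation between COT and the single-channel reflectance for a given Sun-view geometry, so the retrieval amounts to inverting a lookup table. A minimal sketch follows; the table values are invented placeholders, not MOD06 numbers.

```python
import numpy as np

# Hypothetical LUT: visible reflectance vs. COT for one fixed liquid-phase
# CER and one Sun-view geometry (values invented for illustration).
cot_grid = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128])
refl_grid = np.array([0.05, 0.08, 0.13, 0.21, 0.33, 0.47, 0.60, 0.69, 0.74])

def retrieve_cot(reflectance):
    """Invert the monotonic reflectance -> COT relation by interpolation."""
    return np.interp(reflectance, refl_grid, cot_grid)

print(retrieve_cot(0.40))  # falls between COT 8 and 16 for this toy LUT
```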
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Verma, Anurag
2016-05-01
The Advanced Wide Field Sensor (AWiFS) camera caters to the high-temporal-resolution requirement of the Resourcesat-2A mission, with a revisit time of 5 days. The AWiFS camera consists of four spectral bands: three in the visible and near-IR and one in the short-wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning using a linear-array silicon charge-coupled-device (CCD) Focal Plane Array (FPA). An On-Board Calibration unit for these CCD-based FPAs monitors any degradation in the FPA over the entire mission life. Four LEDs are operated in constant-current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λP = 650 nm) for the development of the On-Board Calibration unit of the AWiFS camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results of the CCD output profile for different LED combinations in constant-current mode.
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high-dynamic-range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information there. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because not all of the information is present in the original data; active intervention in the acquisition process is required. A software package capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night-vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
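The abstract does not spell out the fusion mechanism, so the sketch below stands in with one standard alternative: Mertens exposure fusion as implemented in OpenCV, which merges a bracketed exposure set without needing camera response calibration.

```python
import cv2
import numpy as np

def fuse_exposures(frames):
    """Fuse a list of same-size uint8 BGR frames taken at different exposures.

    MergeMertens weights each pixel by contrast, saturation, and
    well-exposedness, so detail from both dark and bright frames survives.
    """
    fused = cv2.createMergeMertens().process(frames)  # float32, roughly [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```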
Tropical Depression 6 (Florence) in the Atlantic
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures removed for brevity, see original site: Microwave Image and Visible Light Image]
These infrared, microwave, and visible images were created with data retrieved by the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite. Infrared Image: Because infrared radiation does not penetrate through clouds, AIRS infrared images show either the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. In cloud-free areas the AIRS instrument receives the infrared radiation from the surface of the Earth, resulting in the warmest temperatures (orange/red). Microwave Image: AIRS data used to create the microwave images come from the microwave radiation emitted by Earth's atmosphere and received by the instrument. It shows where the heaviest rainfall is taking place (in blue) in the storm. Blue areas outside of the storm, where there are either some clouds or no clouds, indicate where the sea surface shines through. Vis/NIR Image: The AIRS instrument suite contains a sensor that captures light in the visible/near-infrared portion of the electromagnetic spectrum. These 'visible' images are similar to a snapshot taken with your camera. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The Atmospheric Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
Fireball Observations in Visible and Sodium Bands
NASA Astrophysics Data System (ADS)
Fletcher, Sandra
On November 17th at 1:32 a.m. MST, a large Leonid fireball was simultaneously imaged by two experiments: a visible-band CCD camera and a 590-nm filtered-band equi-angle fisheye and telecentric lens assembly. The visible-band camera, ROTSE (Robotic Optical Transient Search Experiment), is a two-by-two f/1.9 telephoto lens array with 2k x 2k Thomson CCDs and is located at 35.87 N, 106.25 W at an altitude of 2115 m. One-minute exposures along the radiant were taken of the event for 30 minutes after the initial explosion. The sodium-band experiment was located at 35.29 N, 106.46 W at an altitude of 1860 m. It took ninety-second exposures and captured several events throughout the night. Triangulation from two New Mexico sites resulted in an altitude of 83 km over Wagon Mound, NM. Two observers present at the ROTSE site saw a green flash and a persistent glow up to seven minutes after the explosion. Cataloging of all sodium trails for comparison with lidar and infrasonic measurements is in progress. The raw data from both experiments and the atmospheric chemistry interpretation of them will be presented.
1970-01-01
This 1970 photograph shows the flight unit for Skylab's White Light Coronagraph, an Apollo Telescope Mount (ATM) facility that photographed the solar corona in the visible light spectrum. A TV camera in the instrument provided real-time pictures of the occulted Sun to the astronauts at the control console and also transmitted the images to the ground. The Marshall Space Flight Center had program management responsibility for the development of Skylab hardware and experiments.
MISSE PEC, on the ISS Airlock crewlock endcone
2001-08-17
STS105-E-5342 (17 August 2001) --- Backdropped by a sunrise, the newly installed Materials International Space Station Experiment (MISSE) is visible. The MISSE was installed on the outside of the Quest Airlock during the first extravehicular activity (EVA) of the STS-105 mission. MISSE will collect information on how different materials weather in the environment of space. This image was taken with a digital still camera.
Hubble Tracks Clouds on Uranus
NASA Technical Reports Server (NTRS)
1997-01-01
Taking its first peek at Uranus, NASA Hubble Space Telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) has detected six distinct clouds in images taken July 28, 1997.
The image on the right, taken 90 minutes after the left-hand image, shows the planet's rotation. Each image is a composite of three near-infrared images. They are called false-color images because the human eye cannot detect infrared light. Therefore, colors corresponding to visible light were assigned to the images. (The wavelengths for the 'blue,' 'green,' and 'red' exposures are 1.1, 1.6, and 1.9 micrometers, respectively.) At visible and near-infrared wavelengths, sunlight is reflected from hazes and clouds in the atmosphere of Uranus. However, in the near-infrared, absorption by gases in the Uranian atmosphere limits the view to different altitudes, causing intense contrasts and colors. In these images, the blue exposure probes the deepest atmospheric levels. A blue color indicates clear atmospheric conditions, prevalent at mid-latitudes near the center of the disk. The green exposure is sensitive to absorption by methane gas: a clear atmosphere appears dark, but in hazy atmospheric regions the green color is seen because sunlight is reflected back before it is absorbed. The green color around the south pole (marked by '+') shows a strong local haze. The red exposure reveals absorption by hydrogen, the most abundant gas in the atmosphere of Uranus. Most sunlight is absorbed at this wavelength, leaving visible only patches of haze high in the atmosphere. A red color near the limb (edge) of the disk indicates the presence of a high-altitude haze. The purple color to the right of the equator also suggests haze high in the atmosphere with a clear atmosphere below. The five clouds visible near the right limb rotated counterclockwise during the time between both images. They reach high into the atmosphere, as indicated by their red color. Features of such high contrast have never been seen before on Uranus. The clouds are almost as large as continents on Earth, such as Europe. Another cloud (which can barely be seen) rotated along the path shown by the black arrow. It is located at lower altitudes, as indicated by its green color. The rings of Uranus are extremely faint in visible light but quite prominent in the near infrared. The brightest ring, the epsilon ring, has a variable width around its circumference. Its widest and thus brightest part is at the top in this image. Two fainter, inner rings are visible next to the epsilon ring. Eight of the 10 small Uranian satellites discovered by Voyager 2 can be seen in both images. Their sizes range from about 25 miles (40 kilometers) for Bianca to 100 miles (150 kilometers) for Puck. The smallest of these satellites had not been detected since the departure of Voyager 2 from Uranus in 1986. These eight satellites revolve around Uranus in less than a day, the inner ones faster than the outer ones. Their motion in the 90 minutes between both images is marked in the right panel. The area outside the rings was slightly enhanced in brightness to improve the visibility of these faint satellites. The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
NASA Astrophysics Data System (ADS)
Wierzbicki, Damian; Fryskowska, Anna; Kedzierski, Michal; Wojtkowska, Michalina; Delis, Paulina
2018-01-01
Unmanned aerial vehicles are suited to various photogrammetry and remote sensing missions. Such platforms are equipped with various optoelectronic sensors imaging in the visible and infrared spectral ranges, as well as thermal sensors. Nowadays, near-infrared (NIR) images acquired from low altitudes are often used, among other things, for producing orthophoto maps for precision agriculture. One major problem results from the application of low-cost, compact custom NIR cameras with wide-angle lenses that introduce vignetting. In numerous cases, such cameras acquire images of low radiometric quality, depending on the lighting conditions. The paper presents a method of radiometric quality assessment of low-altitude NIR imagery data from a custom sensor. The method utilizes statistical analysis of the NIR images. The data used for the analyses were acquired from various altitudes in various weather and lighting conditions. An objective NIR imagery quality index was determined as a result of the research. The results obtained using this index enabled the classification of images into three categories: good, medium, and low radiometric quality. The classification makes it possible to determine the a priori error of the acquired images and to assess whether a rerun of the photogrammetric flight is necessary.
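The paper's actual quality index is not reproduced in the abstract; the sketch below only illustrates the kind of per-image statistics (contrast, histogram entropy, clipping fraction) such an index could combine, with invented weights and class thresholds.

```python
import numpy as np

def quality_index(nir):
    """Toy radiometric quality index in [0, 1] for a 2-D uint8 NIR image."""
    hist, _ = np.histogram(nir, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = -(hist * np.log2(hist)).sum() / 8.0   # normalized to [0, 1]
    contrast = nir.std() / 128.0                    # rough spread measure
    clipped = ((nir == 0) | (nir == 255)).mean()    # saturated/dark fraction
    return float(np.clip(0.5 * entropy + 0.5 * contrast - clipped, 0.0, 1.0))

def classify(index):
    """Invented thresholds for the three quality categories."""
    return "good" if index > 0.6 else "medium" if index > 0.3 else "low"
```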
NASA Technical Reports Server (NTRS)
Behar, Alberto; Carsey, Frank; Lane, Arthur; Engelhardt, Herman
2006-01-01
An instrumentation system has been developed for studying interactions between a glacier or ice sheet and the underlying rock and/or soil. Prior borehole imaging systems have been used in well-drilling and mineral-exploration applications and for studying relatively thin valley glaciers, but have not been used for studying thick ice sheets like those of Antarctica. The system includes a cylindrical imaging probe that is lowered into a hole that has been bored through the ice to the ice/bedrock interface by use of an established hot-water-jet technique. The images acquired by the cameras yield information on the movement of the ice relative to the bedrock and on visible features of the lower structure of the ice sheet, including ice layers formed at different times, bubbles, and mineralogical inclusions. At the time of reporting the information for this article, the system had just been deployed in two boreholes on the Amery Ice Shelf in East Antarctica, after successful 2000-2001 deployments in four boreholes at Ice Stream C, West Antarctica, and a 2002 deployment at Black Rapids Glacier, Alaska. The probe is designed to operate at temperatures from -40 to +40 C and to withstand the cold, wet, high-pressure [130-atm (13.2-MPa)] environment at the bottom of a water-filled borehole in ice as deep as 1.6 km. A current version is being outfitted to service 2.4-km-deep boreholes at the Rutford Ice Stream in West Antarctica. The probe (see figure) contains a side-looking charge-coupled-device (CCD) camera that generates both a real-time analog video signal and a sequence of still-image data, and contains a digital videotape recorder. The probe also contains a downward-looking CCD analog video camera, plus halogen lamps to illuminate the fields of view of both cameras. The analog video outputs of the cameras are converted to optical signals that are transmitted to a surface station via optical fibers in a cable. Electric power is supplied to the probe through wires in the cable at a potential of 170 VDC. A DC-to-DC converter steps the supply down to 12 VDC for the lights, cameras, and image-data-transmission circuitry. Heat generated by dissipation of electric power in the probe is removed simply by conduction through the probe housing to the adjacent water and ice.
Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury
NASA Astrophysics Data System (ADS)
Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele
2016-07-01
The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIOSYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide a global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution minimizes mass and volume while performing push-frame imaging acquisition. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backward with respect to the nadir direction, are covered with a panchromatic filter, supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m while performing global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration required the development of a new distortion map model. Additional considerations concern the detector, a Si-PIN hybrid CMOS, which is characterized by high fixed-pattern noise. This had a great impact on the pre-calibration phases, compelling the use of an uncommon approach to the definition of the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.
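As a rough illustration of distortion-map estimation, the sketch below fits a 2-D polynomial mapping measured spot centroids to their ideal pinhole positions by least squares; STC's actual off-axis distortion model and centroid-definition approach are more elaborate than this.

```python
import numpy as np

def fit_distortion(measured, ideal, order=3):
    """Fit per-axis 2-D polynomials mapping measured -> ideal centroids.

    measured, ideal: (N, 2) arrays of spot coordinates.
    Returns coefficient vectors (cx, cy) over all monomials x**i * y**j
    with i + j <= order.
    """
    x, y = measured[:, 0], measured[:, 1]
    terms = [x**i * y**j for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.stack(terms, axis=1)                        # (N, n_terms) design matrix
    cx, *_ = np.linalg.lstsq(A, ideal[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, ideal[:, 1], rcond=None)
    return cx, cy
```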
Dual-Modality PET/Ultrasound imaging of the Prostate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Jennifer S.; Moses, William W.; Pouliot, Jean
2005-11-11
Functional imaging with positron emission tomography (PET) will detect malignant tumors in the prostate and/or prostate bed, as well as possibly help determine tumor 'aggressiveness'. However, the relative uptake in a prostate tumor can be so great that few other anatomical landmarks are visible in a PET image. Ultrasound imaging with a transrectal probe provides anatomical detail in the prostate region that can be co-registered with the sensitive functional information from the PET imaging. Imaging the prostate with both PET and transrectal ultrasound (TRUS) will help determine the location of any cancer within the prostate region. This dual-modality imaging should help provide better detection and treatment of prostate cancer. LBNL has built a high-performance positron emission tomograph optimized to image the prostate. Compared to a standard whole-body PET camera, our prostate-optimized PET camera has the same sensitivity and resolution, lower backgrounds, and lower cost. We plan to develop the hardware and software tools needed for a validated dual PET/TRUS prostate imaging system. We also plan to develop dual prostate imaging with PET and external transabdominal ultrasound, in case the TRUS system is too uncomfortable for some patients. We present the design and intended clinical uses for these dual imaging systems.
2017-07-28
Cassini gazed toward high southern latitudes near Saturn's south pole to observe ghostly curtains of dancing light -- Saturn's southern auroras, or southern lights. These natural light displays at the planet's poles are created by charged particles raining down into the upper atmosphere, making gases there glow. The dark area at the top of this scene is Saturn's night side. The auroras rotate from left to right, curving around the planet as Saturn rotates over about 70 minutes, compressed here into a movie sequence of about five seconds. Background stars are seen sliding behind the planet. Cassini was moving around Saturn during the observation, keeping its gaze fixed on a particular spot on the planet, which causes a shift in the distant background over the course of the observation. Some of the stars seem to make a slight turn to the right just before disappearing. This effect is due to refraction -- the starlight gets bent as it passes through the atmosphere, which acts as a lens. Random bright specks and streaks appearing from frame to frame are due to charged particles and cosmic rays hitting the camera detector. The aim of this observation was to observe seasonal changes in the brightness of Saturn's auroras, and to compare with the simultaneous observations made by Cassini's infrared and ultraviolet imaging spectrometers. The original images in this movie sequence have a size of 256x256 pixels; both the original size and a version enlarged to 500x500 pixels are available here. The small image size is the result of a setting on the camera that allows for shorter exposure times than full-size (1024x1024 pixel) images. This enabled Cassini to take more frames in a short time and still capture enough photons from the auroras for them to be visible. The images were taken in visible light using the Cassini spacecraft narrow-angle camera on July 20, 2017, at a distance of about 620,000 miles (1 million kilometers) from Saturn. The views look toward 74 degrees south latitude on Saturn. Image scale is about 0.9 mile (1.4 kilometers) per pixel on Saturn. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21623
Yang, Hualei; Yang, Xi; Heskel, Mary; Sun, Shucun; Tang, Jianwu
2017-04-28
Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance in visible and near-infrared (NIR) bands with high spatial and temporal resolution, and found that the camera-based NDVI (camera-NDVI) agreed well with the leaf expansion process measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). We found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though a weaker relationship between camera-NDVI and LAI. We therefore recommend ground-based camera-NDVI as a powerful tool for long-term, near-surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
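Camera-NDVI is the usual normalized difference applied to the camera's co-registered red and NIR bands. A minimal sketch, with band extraction assumed to have happened upstream:

```python
import numpy as np

def camera_ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red) for co-registered bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)  # avoid divide-by-zero
```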
Thermal Texture Generation and 3d Model Reconstruction Using SFM and Gan
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Mizginov, V. A.
2018-05-01
Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach to the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations, such as day-to-night transformation and generation of imagery in a different spectral range. In this paper, we propose a novel method for the generation of realistic 3D models with thermal textures using the SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN network and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures proved that they are similar to the ground truth model in both thermal emissivity and geometrical shape.
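The visible-to-thermal branch can be pictured as an image-to-image generator. The stand-in below is a tiny PyTorch encoder-decoder with the right input/output shapes; it is not the authors' network and would need adversarial training on the paired dataset to produce useful thermal textures.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Minimal encoder-decoder: 3-channel visible in, 1-channel 'thermal' out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),            # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),           # 128 -> 64
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 128
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),   # 128 -> 256
        )

    def forward(self, visible):
        return self.net(visible)

# Shape check: one 256x256 visible frame -> one 256x256 pseudo-thermal frame
thermal = TinyGenerator()(torch.rand(1, 3, 256, 256))
print(thermal.shape)  # torch.Size([1, 1, 256, 256])
```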
1997-07-07
Tracks made by the Sojourner rover are visible in this image, taken by one of the cameras aboard Sojourner on Sol 3. The tracks represent the rover maneuvering towards the rock dubbed "Barnacle Bill." The rover, having exited the lander via the rear ramp, first traveled towards the right portion of the image, and then moved forward towards the left where Barnacle Bill sits. The fact that the rover was making defined tracks indicates that the soil is made up of particles on a micron scale. http://photojournal.jpl.nasa.gov/catalog/PIA00633
NASA Technical Reports Server (NTRS)
1998-01-01
Mars Orbiter Camera (MOC) image of a 10 km by 12 km area of Coprates Chasma (14.7 degrees S, 55.8 degrees W), a ridge with a flat upper surface in the center of Coprates Chasma, which is part of the 6000-km-long Valles Marineris. Rock layers are visible just below the ridge. The gray scale (4.8 m/pixel) MOC image was combined with a Viking Orbiter color view of the same area. The faults of a graben offset beds on the slope to the left.
Figure caption from Science Magazine.
Hubble Spots Northern Hemispheric Clouds on Uranus
NASA Technical Reports Server (NTRS)
1997-01-01
Using visible light, astronomers for the first time this century have detected clouds in the northern hemisphere of Uranus. The newest images, taken July 31 and Aug. 1, 1997 with NASA Hubble Space Telescope's Wide Field and Planetary Camera 2, show banded structure and multiple clouds. Using these images, Dr. Heidi Hammel (Massachusetts Institute of Technology) and colleagues Wes Lockwood (Lowell Observatory) and Kathy Rages (NASA Ames Research Center) plan to measure the wind speeds in the northern hemisphere for the first time.
Uranus is sometimes called the 'sideways' planet, because its rotation axis is tipped more than 90 degrees from the planet's orbit around the Sun. The 'year' on Uranus lasts 84 Earth years, which creates extremely long seasons - winter in the northern hemisphere has lasted for nearly 20 years. Uranus has also been called bland and boring, because no clouds have been detectable in ground-based images of the planet. Even to the cameras of the Voyager spacecraft in 1986, Uranus presented a nearly uniform blank disk, and discrete clouds were detectable only in the southern hemisphere. Voyager flew over the planet's cloud tops near the dead of northern winter (when the northern hemisphere was completely shrouded in darkness). Spring has finally come to the northern hemisphere of Uranus. The newest images, both the visible-wavelength ones described here and those taken a few days earlier with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) by Erich Karkoschka (University of Arizona), show a planet with banded structure and detectable clouds. Two images are shown here. The 'aqua' image (on the left) is taken at 5,470 Angstroms, which is near the human eye's peak response to wavelength. Color has been added to the image to show what a person on a spacecraft near Uranus might see. Little structure is evident at this wavelength, though with image-processing techniques, a small cloud can be seen near the planet's northern limb (rightmost edge). The 'red' image (on the right) is taken at 6,190 Angstroms, and is sensitive to absorption by methane molecules in the planet's atmosphere. The banded structure of Uranus is evident, and the small cloud near the northern limb is now visible. Scientists are expecting that the discrete clouds and banded structure may become even more pronounced as Uranus continues in its slow pace around the Sun. 'Some parts of Uranus haven't seen the Sun in decades,' says Dr. Hammel, 'and historical records suggest that we may see the development of more banded structure and patchy clouds as the planet's year progresses.' Some scientists have speculated that the winds of Uranus are not symmetric around the planet's equator, but no clouds were visible to test those theories. The new data will provide the opportunity to measure the northern winds. Hammel and colleagues expect to have results soon. The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
Infrared and visible cooperative vehicle identification markings
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.; Raven, Peter N.
2006-05-01
Airborne surveillance helicopters and aeroplanes used by security and defence forces around the world increasingly rely on their visible band and thermal infrared cameras to prosecute operations such as the co-ordination of police vehicles during the apprehension of a stolen car, or direction of all emergency services at a serious rail crash. To perform their function effectively, it is necessary for the airborne officers to unambiguously identify police and the other emergency service vehicles. In the visible band, identification is achieved by placing high contrast symbols and characters on the vehicle roof. However, at the wavelengths at which thermal imagers operate, the dark and light coloured materials have similar low reflectivity and the visible markings cannot be discerned. Hence there is a requirement for a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared, over a large range of viewing angles. In this paper we discuss the design, detailed angle-dependent spectroscopic characterisation and operation of novel visible and infrared vehicle marking materials, and present airborne IR and visible imagery of materials in use.
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures 1 and 2 removed for brevity, see original site] This image composite compares infrared and visible views of the famous Orion nebula and its surrounding cloud, an industrious star-making region located near the hunter constellation's sword. The infrared picture is from NASA's Spitzer Space Telescope, and the visible image is from the National Optical Astronomy Observatory, headquartered in Tucson, Ariz. In addition to Orion, two other nebulas can be seen in both pictures. The Orion nebula, or M42, is the largest and takes up the lower half of the images; the small nebula to the upper left of Orion is called M43; and the medium-sized nebula at the top is NGC 1977. Each nebula is marked by a ring of dust that stands out in the infrared view. These rings make up the walls of cavities that are being excavated by radiation and winds from massive stars. The visible view of the nebulas shows gas heated by ultraviolet radiation from the massive stars. Above the Orion nebula, where the massive stars have not yet ejected much of the obscuring dust, the visible image appears dark with only a faint glow. In contrast, the infrared view penetrates the dark lanes of dust, revealing bright swirling clouds and numerous developing stars that have shot out jets of gas (green). This is because infrared light can travel through dust, whereas visible light is stopped short by it. The infrared image shows light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomanowski, B. A., E-mail: b.a.lomanowski@durham.ac.uk; Sharples, R. M.; Meigs, A. G.
2014-11-15
The mirror-linked divertor spectroscopy diagnostic on JET has been upgraded with a new visible and near-infrared grating and filtered spectroscopy system. New capabilities include extended near-infrared coverage up to 1875 nm, capturing the hydrogen Paschen series, as well as a 2 kHz frame rate filtered imaging camera system for fast measurements of impurity (Be II) and deuterium Dα, Dβ, Dγ line emission in the outer divertor. The expanded system provides unique capabilities for studying spatially resolved divertor plasma dynamics at near-ELM resolved timescales as well as a test bed for feasibility assessment of near-infrared spectroscopy.
Medium resolution spectra of the shuttle glow in the visible region of the spectrum
NASA Technical Reports Server (NTRS)
Viereck, R. A.; Murad, E.; Pike, C. P.; Mende, S. B.; Swenson, G. R.; Culbertson, F. L.; Springer, B. C.
1992-01-01
Recent spectral measurements of the visible shuttle glow (lambda = 400 - 800 nm) at medium resolution (1 nm) reveal the same featureless continuum with a maximum near 680 nm that was reported previously. This is also in good agreement with recent laboratory experiments that attribute the glow to the emissions of NO2 formed by the recombination of O + NO. The data that are presented were taken from the aft flight deck with a hand-held spectrograph and from the shuttle bay with a low-light-level television camera. Shuttle glow images and spectra are presented and compared with laboratory data and theory.
Far-ultraviolet stellar photometry: A field in Orion
NASA Astrophysics Data System (ADS)
Schmidt, Edward G.; Carruthers, George R.
1993-12-01
Far-ultraviolet photometry for 625 objects in Orion is presented. These data were extracted from electrographic camera images obtained during sounding rocket flights in 1975 and 1982. The 1975 images were centered close to the belt of Orion while the 1982 images were centered approximately 9 deg further north. One hundred and fifty stars fell in the overlapping region and were observed with both cameras. Sixty-eight percent of the objects were tentatively identified with known stars using the SIMBAD database while another 24% are blends of objects too close together to separate with our resolution. As in previous studies, the majority of the identified ultraviolet sources are early-type stars. However, there are a significant number for which no such identification was possible, and we suggest that these are interesting objects which should be further investigated. Seven stars were found which were bright in the ultraviolet but faint in the visible. We suggest that some of these are nearby white dwarfs.
Schlieren imaging of loud sounds and weak shock waves in air near the limit of visibility
NASA Astrophysics Data System (ADS)
Hargather, Michael John; Settles, Gary S.; Madalis, Matthew J.
2010-02-01
A large schlieren system with exceptional sensitivity and a high-speed digital camera are used to visualize loud sounds and a variety of common phenomena that produce weak shock waves in the atmosphere. Frame rates varied from 10,000 to 30,000 frames/s with microsecond frame exposures. Sound waves become visible to this instrumentation at frequencies above 10 kHz and sound pressure levels in the 110 dB (6.3 Pa) range and above. The density gradient produced by a weak shock wave is examined and found to depend upon the profile and thickness of the shock as well as the density difference across it. Schlieren visualizations of weak shock waves from common phenomena include loud trumpet notes, various impact phenomena that compress a bubble of air, bursting a toy balloon, popping a champagne cork, snapping a wooden stick, and snapping a wet towel. The balloon burst, snapping a ruler on a table, and snapping the towel and a leather belt all produced readily visible shock-wave phenomena. In contrast, clapping the hands, snapping the stick, and the champagne cork all produced wave trains that were near the weak limit of visibility. Overall, with sensitive optics and a modern high-speed camera, many nonlinear acoustic phenomena in the air can be observed and studied.
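The quoted 110 dB (6.3 Pa) correspondence follows directly from the definition of sound pressure level with the standard 20 µPa reference:

```latex
\mathrm{SPL} = 20\log_{10}\!\left(\frac{p}{p_0}\right),\qquad p_0 = 20\,\mu\mathrm{Pa}
\;\;\Longrightarrow\;\;
p = p_0\,10^{\mathrm{SPL}/20} = 20\,\mu\mathrm{Pa}\times 10^{110/20} \approx 6.3\ \mathrm{Pa}.
```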
Photogrammetric mobile satellite service prediction
NASA Technical Reports Server (NTRS)
Akturan, Riza; Vogel, Wolfhard J.
1994-01-01
Photographic images of the sky were taken with a camera through a fisheye lens with a 180 deg field-of-view. The images of rural, suburban, and urban scenes were analyzed on a computer to derive quantitative information about the elevation angles at which the sky becomes visible. Such knowledge is needed by designers of mobile and personal satellite communications systems and is desired by customers of these systems. The 90th percentile elevation angle of the skyline was found to be 10 deg, 17 deg, and 51 deg in the three environments. At 8 deg, 75 percent, 75 percent, and 35 percent of the sky was visible, respectively. The elevation autocorrelation fell to zero with a 72 deg lag in the rural and urban environment and a 40 deg lag in the suburb. Mean estimation errors are below 4 deg.
NASA Astrophysics Data System (ADS)
Arvidson, R. E.; Squyres, S. W.; Baumgartner, E. T.; Schenker, P. S.; Niebur, C. S.; Larsen, K. W.; Seelos, F. P., IV; Snider, N. O.; Jolliff, B. L.
2002-08-01
The Field Integration Design and Operations (FIDO) prototype Mars rover was deployed and operated remotely for 2 weeks in May 2000 in the Black Rock Summit area of Nevada. The blind science operation trials were designed to evaluate the extent to which FIDO-class rovers can be used to conduct traverse science and collect samples. FIDO-based instruments included stereo cameras for navigation and imaging, an infrared point spectrometer, a color microscopic imager for characterization of rocks and soils, and a rock drill for core acquisition. Body-mounted ``belly'' cameras aided drill deployment, and front and rear hazard cameras enabled terrain hazard avoidance. Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, a high spatial resolution IKONOS orbital image, and a suite of descent images were used to provide regional- and local-scale terrain and rock type information, from which hypotheses were developed for testing during operations. The rover visited three sites, traversed 30 m, and acquired 1.3 gigabytes of data. The relatively small traverse distance resulted from a geologically rich site in which materials identified on a regional scale from remote-sensing data could be identified on a local scale using rover-based data. Results demonstrate the synergy of mapping terrain from orbit and during descent using imaging and spectroscopy, followed by a rover mission to test inferences and to make discoveries that can be accomplished only with surface mobility systems.
Portable widefield imaging device for ICG-detection of the sentinel lymph node
NASA Astrophysics Data System (ADS)
Govone, Angelo Biasi; Gómez-García, Pablo Aurelio; Carvalho, André Lopes; Capuzzo, Renato de Castro; Magalhães, Daniel Varela; Kurachi, Cristina
2015-06-01
Metastasis is one of the major cancer complications, occurring when malignant cells detach from the primary tumor and reach other organs or tissues. The sentinel lymph node (SLN) is the first lymphatic structure to be affected by the malignant cells, but locating it is still a great challenge for the medical team, because the lymph nodes lie between the muscle fibers, making their visualization difficult. Seeking to aid the surgeon in the detection of the SLN, the present study aims to develop a widefield fluorescence imaging device using indocyanine green (ICG) as the fluorescence marker. The system is basically composed of a 780-nm illumination unit, optical components for 810-nm fluorescence detection, two CCD cameras, a laptop, and dedicated software. The illumination unit has 16 diode lasers. A dichroic mirror and bandpass filters select and deliver the excitation light to the interrogated tissue, and select and deliver the fluorescence light to the camera. One camera is responsible for the acquisition of visible light and the other for the acquisition of the ICG fluorescence. The software, developed on the LabVIEW® platform, generates a real-time merged image in which it is possible to observe the fluorescence spots, related to the lymph nodes, superimposed on the white-light image. The system was tested in a mouse model, and a first patient with tongue cancer was imaged. Both results showed the potential of the presented fluorescence imaging system for sentinel lymph node detection.
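The real-time merge can be pictured as pseudo-coloring the NIR fluorescence frame and alpha-blending it onto the visible frame wherever the fluorescence exceeds a threshold. The OpenCV sketch below, with an invented threshold and colormap, is an illustration of that idea, not the LabVIEW implementation.

```python
import cv2
import numpy as np

def merge_frames(visible_bgr, fluorescence_gray, alpha=0.5, threshold=30):
    """Overlay an ICG fluorescence frame (uint8 grayscale) on a visible BGR frame."""
    colored = cv2.applyColorMap(fluorescence_gray, cv2.COLORMAP_JET)
    blended = cv2.addWeighted(visible_bgr, 1 - alpha, colored, alpha, 0)
    mask = fluorescence_gray > threshold     # only the bright fluorescence spots
    merged = visible_bgr.copy()
    merged[mask] = blended[mask]
    return merged
```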
Accurate attitude determination of the LACE satellite
NASA Technical Reports Server (NTRS)
Miglin, M. F.; Campion, R. E.; Lemos, P. J.; Tran, T.
1993-01-01
The Low-power Atmospheric Compensation Experiment (LACE) satellite, launched in February 1990 by the Naval Research Laboratory, uses a magnetic damper on a gravity-gradient boom and a momentum wheel with its axis perpendicular to the plane of the orbit to stabilize and maintain its attitude. Satellite attitude is determined using three types of sensors: a conical Earth scanner, a set of sun sensors, and a magnetometer. The Ultraviolet Plume Instrument (UVPI), on board LACE, consists of two intensified CCD cameras and a gimballed pointing mirror. The primary purpose of the UVPI is to image rocket plumes from space at ultraviolet and visible wavelengths. Secondary objectives include imaging stars, atmospheric phenomena, and ground targets. The problem facing the UVPI experimenters is that the sensitivity of the LACE satellite attitude sensors is not always adequate to correctly point the UVPI cameras. Our solution is to point the UVPI cameras at known targets and use the information thus gained to improve attitude measurements. This paper describes the three methods developed to determine improved attitude values using the UVPI, for both real-time operations and post-observation analysis.
NASA Astrophysics Data System (ADS)
de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.
2011-05-01
Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow field-of-view camera on a pan-tilt pedestal. For a narrow field-of-view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground-truth dataset was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques, including level sets, Kalman filters, and particle filters, were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques, including multi-scale tone mapping, interpolated local histogram equalisation, and several sharpening techniques, were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility, and sea clutter, such as whitecaps, is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
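Of the trackers mentioned, the Kalman filter is the simplest to sketch: a constant-velocity model over (x, y) target positions in panorama coordinates. The noise covariances below are illustrative, not tuned values from the paper.

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter over 2-D pixel positions."""
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])              # state: x, y, vx, vy
        self.P = np.eye(4) * 100.0                          # initial uncertainty
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01                           # process noise
        self.R = np.eye(2) * 4.0                            # measurement noise

    def step(self, measurement):
        # Predict forward one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new (x, y) detection.
        z = np.asarray(measurement, float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                   # filtered position

tracker = KalmanTracker(100.0, 200.0)
print(tracker.step((103.0, 198.0)))
```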
NASA Technical Reports Server (NTRS)
1989-01-01
This pair of Voyager 2 images (FDS 11446.21 and 11448.10), two 591-s exposures obtained through the clear filter of the wide angle camera, show the full ring system with the highest sensitivity. Visible in this figure are the bright, narrow N53 and N63 rings, the diffuse N42 ring, and (faintly) the plateau outside of the N53 ring (with its slight brightening near 57,500 km).
A Well-Traveled 'Eagle Crater' (left-eye)
NASA Technical Reports Server (NTRS)
2004-01-01
This is the left-eye version of the Mars Exploration Rover Opportunity's view on its 56th sol on Mars, before it left its landing-site crater. To the right, the rover tracks are visible at the original spot where the rover attempted unsuccessfully to exit the crater. After a one-sol delay, Opportunity took another route to the plains of Meridiani Planum. This image was taken by the rover's navigation camera.
Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai
2016-12-01
This paper addresses the problem of horizon detection, a fundamental step in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects long linear features consistent over multiple scales using multi-scale median filtering of the image, followed by a Radon transform on a weighted edge map and computation of the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, on 84 challenging maritime videos containing over 33,000 frames and captured using visible-range and near-infrared sensors mounted onboard ships, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos from visible and infrared cameras, for onshore and onboard ship camera placement.
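The Radon-transform stage can be sketched simply: the strongest straight line in an edge map shows up as a peak in Radon space, whose coordinates give the line's angle and offset. The sketch below omits MuSCoWERT's multi-scale median filtering and edge weighting, so it is only the core idea.

```python
import numpy as np
from skimage.filters import sobel
from skimage.transform import radon

def detect_horizon(gray):
    """Return (angle_deg, offset_px) of the dominant line in a 2-D float image."""
    edges = sobel(gray)                          # edge-magnitude map
    theta = np.linspace(0.0, 180.0, 181)
    sinogram = radon(edges, theta=theta, circle=False)
    row, col = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return theta[col], row                       # peak -> line parameters
```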
NASA Technical Reports Server (NTRS)
1978-01-01
NASA remote sensing technology is being employed in archeological studies of the Anasazi Indians, who lived in New Mexico one thousand years ago. Under contract with the National Park Service, NASA's Technology Applications Center at the University of New Mexico is interpreting multispectral scanner data and demonstrating how aerospace scanning techniques can uncover features of prehistoric ruins not visible in conventional aerial photographs. The Center's initial study focused on Chaco Canyon, a pre-Columbian Anasazi site in northwestern New Mexico. Chaco Canyon is a national monument and it has been well explored on the ground and by aerial photography. But the National Park Service was interested in the potential of multispectral scanning for producing evidence of prehistoric roads, field patterns, and dwelling areas not discernible in aerial photographs. The multispectral scanner produces imaging data in the invisible as well as the visible portions of the spectrum. These data are converted to pictures which bring out features not visible to the naked eye or to cameras. The Technology Applications Center joined forces with Bendix Aerospace Systems Division, Ann Arbor, Michigan, which provided a scanner-equipped airplane for mapping the Chaco Canyon area. The NASA group processed the scanner images and employed computerized image enhancement techniques to bring out additional detail.
NASA Astrophysics Data System (ADS)
Ehrhart, Matthias; Lienhart, Werner
2017-09-01
Automated prism tracking is becoming increasingly important with the rising automation of total station measurements in machine control, monitoring, and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search for a prism, to precisely aim at a prism, and to identify whether the correct prism is being tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera, which is available on many total stations today. In cases in which the total station's fine-aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach to the coarse target search is 4 to 10 times faster than conventional approaches.
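Under a simple pinhole assumption, recovering approximate pointing angles from wide-angle image coordinates reduces to two arctangents; the calibrated system described above would use the full interior and exterior orientation instead of this sketch.

```python
import math

def pixel_to_angles(u, v, cx, cy, focal_px):
    """Approximate angular offsets of a target from the camera axis.

    (u, v): target pixel; (cx, cy): principal point; focal_px: focal
    length in pixels. Returns (horizontal, vertical) offsets in degrees.
    """
    hz = math.degrees(math.atan2(u - cx, focal_px))
    vt = math.degrees(math.atan2(cy - v, focal_px))  # image v grows downward
    return hz, vt

# Example: target 150 px right of center with a 2000 px focal length
print(pixel_to_angles(1110, 540, 960, 540, 2000.0))  # ~ (4.3, 0.0) degrees
```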
Peering through the flames: imaging techniques for reacting aluminum powders
Zepper, Ethan T.; Pantoya, Michelle L.; Bhattacharya, Sukalyan; ...
2017-03-17
Combusting metals burn at high temperatures and emit high-intensity radiation in the visible spectrum which can over-saturate regular imaging sensors and obscure the field of view. Filtering the luminescence can result in limited information and hinder thorough combustion characterization. A method for "seeing through the flames" of a highly luminescent aluminum powder reaction is presented using copper vapor laser (CVL) illumination synchronized with a high-speed camera. A statistical comparison of combusting aluminum particle agglomerates between filtered halogen and CVL illumination shows the effectiveness of this diagnostic approach. When ignited by an electrically induced plasma, aluminum particles are entrained as solid agglomerates that rotate about their centers of mass and are surrounded by emitted, burning gases. The average agglomerate diameter appears to be 160 micrometers when viewed with standard illumination and a high-speed camera, but a significantly lower diameter of 50 micrometers is recorded when imaged with CVL illumination. Our results advocate that alternative imaging techniques are required to resolve the complexities of metal particle combustion.
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
A visible Synchrotron Light Monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile under various machine conditions. Thanks to excellent alignment, the SLM beamline was able to see the first visible light when the beam circulated the ring for the first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single-bunch purity.
Near-Infrared Hyperspectral Image Cubes of Mars during the 1999 Opposition
NASA Technical Reports Server (NTRS)
Hillman, John J.; Glenar, D.; Espenak, F.; Chanover, N.; Murphy, J.; Young, L.; Blass, W.
1999-01-01
We used the Goddard Space Flight Center Acousto-Optic Tunable Filter (AOTF) camera to obtain near-IR spectral image sets of Mars over the 1.6-3.6 micron region during the April 1999 opposition. A complete image set consists of 280 images with a spectral full-width-half-maximum of 10 wavenumbers (fixed in frequency): 90 images in H-band (1.55-1.80 micron), 115 images in K-band (1.95-2.50 micron), and 75 images in L-band (2.90-3.70 micron). The short-wavelength limit is set by the transmission of the AOTF cell and the long-wavelength limit is imposed by the sensitivity of the PICNIC 256x256 HgCdTe array detector. We will discuss the new array performance and provide preliminary interpretations of some of these results. These measurements were part of a 4-observatory coordinated effort whose overall objective was to assemble a photometrically calibrated, spectrally complete ground-based image cube over the visible and near-IR spectral region. To accomplish this, four observing teams conducted the investigations with instruments spanning 0.4 to 5.0 micron. The instruments and observing facilities were (a) the AOTF camera at Apache Point Observatory, 3.5 m, f/10, Nasmyth focus (this abstract); primary science targets included the 3 micron water-of-hydration feature and CO2 and H2O ice (polar regions and clouds); (b) a visible/NIR interference-filter (24 filters) camera at Lowell Observatory, 72" telescope, 430-1050 nm; science targets were Fe(2+) and Fe(3+) mineralogy and a coarse-grain hematite search; (c) NMSU Tortugas Mountain Observatory, 60 cm telescope, CCD photometry with the same filter set as Lowell; (d) the KPNO cryogenic grating/slit spectrometer (CRSP/SALLY) at the KPNO 2.1 m, f/15 Cassegrain focus (see abstract by D. Glenar et al., this meeting), at selected wavelengths in the 3-5 micron region (L, M band); science targets included the water-of-hydration feature (3-4 micron long-wave extension) and sulfate mineralogy. Observers participating in this campaign included Dave Glenar, John Hillman, Gordon Bjoraker and Fred Espenak from GSFC, Nancy Chanover, Jim Murphy and A. S. MurTell from NMSU, Leslie Young from BU, Diana Blaney from JPL and Dick Joyce from KPNO.
Overview of diagnostic implementation on Proto-MPEX at ORNL
NASA Astrophysics Data System (ADS)
Biewer, T. M.; Bigelow, T.; Caughman, J. B. O.; Fehling, D.; Goulding, R. H.; Gray, T. K.; Isler, R. C.; Martin, E. H.; Meitner, S.; Rapp, J.; Unterberg, E. A.; Dhaliwal, R. S.; Donovan, D.; Kafle, N.; Ray, H.; Shaw, G. C.; Showers, M.; Mosby, R.; Skeen, C.
2015-11-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) recently began operating with an expanded diagnostic set. Approximately 100 sightlines have been established, delivering the plasma light emission to a "patch panel" in the diagnostic room for distribution to a variety of instruments: narrow-band filter spectroscopy, Doppler spectroscopy, laser induced breakdown spectroscopy, optical emission spectroscopy, and Thomson scattering. Additional diagnostic systems include: IR camera imaging, in-vessel thermocouples, ex-vessel fluoroptic probes, fast pressure gauges, visible camera imaging, microwave interferometry, a retarding-field energy analyzer, rf-compensated and "double" Langmuir probes, and B-dot probes. A data collection and archival system has been initiated using the MDSplus format. This effort capitalizes on a combination of new and legacy diagnostic hardware at ORNL and was accomplished largely through student labor. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
2003-01-22
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. Monitors and infrared video cameras measure eye movements without encumbering the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records the movement of the subject's eyes, and researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible. Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that eye movement can predict human perceptual performance, that smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and that common factors can make both smooth pursuit and visual perception produce errors in motor responses.
High-Speed Imaging Optical Pyrometry for Study of Boron Nitride Nanotube Generation
NASA Technical Reports Server (NTRS)
Inman, Jennifer A.; Danehy, Paul M.; Jones, Stephen B.; Lee, Joseph W.
2014-01-01
A high-speed imaging optical pyrometry system is designed for making in-situ measurements of boron temperature during the boron nitride nanotube synthesis process. Spectrometer measurements show molten boron emission to be essentially graybody in nature, lacking spectral emission fine structure over the visible range of the electromagnetic spectrum. Camera calibration experiments are performed and compared with theoretical calculations to quantitatively establish the relationship between observed signal intensity and temperature. The one-color pyrometry technique described herein involves measuring temperature based upon the absolute signal intensity observed through a narrowband spectral filter, while the two-color technique uses the ratio of the signals through two spectrally separated filters. The present study calibrated both the one- and two-color techniques at temperatures between 1,173 K and 1,591 K using a pco.dimax HD CMOS-based camera along with three such filters having transmission peaks near 550 nm, 632.8 nm, and 800 nm.
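The two-color principle lends itself to a compact numerical illustration. The sketch below is a minimal example, not the paper's calibrated procedure: it inverts the ratio of two narrowband signals for temperature under the graybody assumption stated above. The filter wavelengths echo those named in the text, while the calibration constants k1 and k2 (lumping filter transmission, optics, and sensor gain) are hypothetical placeholders.

```python
# Two-color pyrometry sketch under a graybody assumption: emissivity
# cancels in the ratio of two narrowband signals, so the ratio depends
# on temperature alone. k1, k2 are hypothetical calibration constants.
import numpy as np
from scipy.optimize import brentq

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam_m, T):
    """Blackbody spectral radiance at wavelength lam_m (m), temperature T (K)."""
    return (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * T))

def two_color_temperature(s1, s2, lam1=550e-9, lam2=800e-9, k1=1.0, k2=1.0):
    """Invert the measured signal ratio s1/s2 for temperature."""
    r_meas = (s1 / k1) / (s2 / k2)
    f = lambda T: planck(lam1, T) / planck(lam2, T) - r_meas
    return brentq(f, 500.0, 5000.0)  # root-find within a bracket in K

# Example: synthesize signals at 1400 K, then recover the temperature.
T_true = 1400.0
s1, s2 = planck(550e-9, T_true), planck(800e-9, T_true)
print(two_color_temperature(s1, s2))  # ~1400.0
```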
The Far Ultra-Violet Imager on the Icon Mission
NASA Astrophysics Data System (ADS)
Mende, S. B.; Frey, H. U.; Rider, K.; Chou, C.; Harris, S. E.; Siegmund, O. H. W.; England, S. L.; Wilkins, C.; Craig, W.; Immel, T. J.; Turin, P.; Darling, N.; Loicq, J.; Blain, P.; Syrstad, E.; Thompson, B.; Burt, R.; Champagne, J.; Sevilla, P.; Ellis, S.
2017-10-01
The ICON Far UltraViolet (FUV) imager contributes to the ICON science objectives by providing remote sensing measurements of the daytime and nighttime atmosphere/ionosphere. During sunlit atmospheric conditions, ICON FUV images the limb altitude profile in the shortwave (SW) band at 135.6 nm and the longwave (LW) band at 157 nm perpendicular to the satellite motion to retrieve the atmospheric O/N2 ratio. In conditions of atmospheric darkness, ICON FUV measures the 135.6 nm recombination emission of O+ ions used to compute the nighttime ionospheric altitude distribution. The instrument is a Czerny-Turner-design spectrographic imager with two exit slits and corresponding back-imager cameras that produce two independent images in separate wavelength bands on two detectors. All observations will be processed as limb altitude profiles. In addition, the ionospheric 135.6 nm data will be processed as longitude-latitude spatial maps to obtain images of ion distributions around regions of equatorial spread F. The ICON FUV optic axis is pointed 20 degrees below local horizontal, and a steering mirror allows the field of view to be steered up to 30 degrees forward and aft to keep the local magnetic meridian in the field of view. The detectors are microchannel plate (MCP)-intensified FUV tubes with the phosphor fiber-optically coupled to charge-coupled devices (CCDs). The dual-stack MCPs amplify the photoelectron signals to overcome the CCD noise, and the rapidly scanned frames are co-added to digitally create 12-second integrated images. Digital on-board signal processing is used to compensate for geometric distortion and satellite motion and to achieve data compression. The instrument was originally aligned in visible light by using a special grating and visible cameras. Final alignment, functional and environmental testing, and calibration were performed in a large vacuum chamber with a UV source. The test and calibration program showed that ICON FUV meets its design requirements and is ready to be launched on the ICON spacecraft.
InfraCAM (trade mark): A Hand-Held Commercial Infrared Camera Modified for Spaceborne Applications
NASA Technical Reports Server (NTRS)
Manitakos, Daniel; Jones, Jeffrey; Melikian, Simon
1996-01-01
In 1994, Inframetrics introduced the InfraCAM(TM), a high-resolution hand-held thermal imager. As the world's smallest, lightest and lowest-power PtSi-based infrared camera, the InfraCAM is ideal for a wide range of industrial, nondestructive testing, surveillance and scientific applications. In addition to numerous commercial applications, the light weight and low power consumption of the InfraCAM make it extremely valuable for adaptation to spaceborne applications. Consequently, the InfraCAM has been selected by NASA Lewis Research Center (LeRC) in Cleveland, Ohio, for use as part of the DARTFire (Diffusive and Radiative Transport in Fires) spaceborne experiment. In this experiment, a solid fuel is ignited in a low-gravity environment. The combustion period is recorded by both visible and infrared cameras. The infrared camera measures the emission from polymethyl methacrylate (PMMA) and combustion products in six distinct narrow spectral bands. Four cameras successfully completed all qualification tests at Inframetrics and at NASA Lewis. They are presently being used for ground-based testing in preparation for space flight in the fall of 1995.
Performance evaluation of a quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Burcher, E. E.; Huck, F. O.; Wall, S. D.; Woehrle, S. B.
1977-01-01
Spatial resolutions achieved with cameras on lunar and planetary landers have been limited to about 1 mm, whereas microscopes of the type proposed for such landers could have obtained resolutions of about 1 um but were never accepted because of their complexity and weight. The quasi-microscope evaluated in this paper could provide intermediate resolutions of about 10 um with relatively simple optics that would augment a camera, such as the Viking lander camera, without imposing special design requirements on the camera or limiting its field of view of the terrain. Images of natural particulate samples taken in black and white and in color show that grain size, shape, and texture are made visible for unconsolidated materials in a 50- to 500-um size range. Such information may provide broad outlines of planetary surface mineralogy and allow inferences to be made of grain origin and evolution. The mineralogical descriptions of single grains would be aided by reflectance spectra that could, for example, be estimated from the six-channel multispectral data of the Viking lander camera.
AO WFS detector developments at ESO to prepare for the E-ELT
NASA Astrophysics Data System (ADS)
Downing, Mark; Casali, Mark; Finger, Gert; Lewis, Steffan; Marchetti, Enrico; Mehrgan, Leander; Ramsay, Suzanne; Reyes, Javier
2016-07-01
ESO has a very active ongoing AO WFS detector development program, not only to meet the needs of the current crop of instruments for the VLT, but also to gather requirements and to plan and develop detectors and controllers/cameras for the instruments in design and being proposed for the E-ELT. This paper provides an overall summary of the AO WFS detector requirements of the E-ELT instruments currently in design and of the telescope focal units. This is followed by a description of the many interesting detector, controller, and camera developments underway at ESO to meet these needs: (a) the rationale behind, and plan to upgrade, the 240x240-pixel, 2000 fps, "zero-noise" L3Vision CCD220 sensor-based AONGC camera; (b) the status of the LGSD/NGSD high-QE, 3 e- RoN, fast 700 fps, 1760x1680-pixel visible CMOS imager and camera development; (c) the status of and development plans for the Selex SAPHIRA NIR eAPD and controller. Most of the instruments and detector/camera developments are described in more detail in other papers at this conference.
NASA Astrophysics Data System (ADS)
Ewerlöf, Maria; Larsson, Marcus; Salerud, E. Göran
2017-02-01
Hyperspectral imaging (HSI) can estimate the spatial distribution of skin blood oxygenation, using visible to near-infrared light. HSI oximeters often use a liquid-crystal tunable filter, an acousto-optic tunable filter or mechanically adjustable filter wheels, whose response/switching times are too long to monitor tissue hemodynamics. This work aims to evaluate a multispectral snapshot imaging system to estimate skin blood volume and oxygen saturation with high temporal and spatial resolution. We use a snapshot imager, the xiSpec camera (MQ022HG-IM-SM4X4-VIS, XIMEA), having 16 wavelength-specific Fabry-Perot filters overlaid on a custom CMOS chip. The spectral distributions of the bands are, however, substantially overlapping, which needs to be taken into account for an accurate analysis. An inverse Monte Carlo analysis is performed using a two-layered skin tissue model, defined by epidermal thickness, haemoglobin concentration and oxygen saturation, melanin concentration and a spectrally dependent reduced-scattering coefficient, all parameters relevant for human skin. The analysis takes into account the spectral detector response of the xiSpec camera. At each spatial location in the field-of-view, we compare the simulated output to the detected diffusely backscattered spectra to find the best fit. The imager is evaluated for spatial and temporal variations during arterial and venous occlusion protocols applied to the forearm. Estimated blood-volume changes and oxygenation maps at 512x272 pixels show values that are comparable to reference measurements performed in contact with the skin tissue. We conclude that the snapshot xiSpec camera, paired with an inverse Monte Carlo algorithm, permits us to use this sensor for spatial and temporal measurement of varying physiological parameters, such as skin tissue blood volume and oxygenation.
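As an illustration of the fitting step, the sketch below matches measured band signals against a lookup table of forward-modeled spectra projected through the camera's overlapping band responses. The Gaussian band responses and the toy two-parameter reflectance model are assumptions standing in for the xiSpec's measured responses and the two-layer Monte Carlo model described above.

```python
# Lookup-table fitting sketch: precompute model spectra over a
# parameter grid, project each through 16 overlapping band responses,
# and match every pixel to the closest entry by least squares.
import numpy as np

wl = np.linspace(450e-9, 650e-9, 201)  # wavelength grid (m)
n_bands = 16

# Hypothetical overlapping Fabry-Perot band responses (Gaussians).
centers = np.linspace(470e-9, 630e-9, n_bands)
resp = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 15e-9) ** 2)
resp /= resp.sum(axis=1, keepdims=True)

def toy_model(blood_volume, oxy):
    """Placeholder forward model; in practice this is the two-layer
    Monte Carlo simulation described in the text."""
    absorber = blood_volume * (oxy * np.exp(-((wl - 576e-9) / 20e-9) ** 2)
                               + (1 - oxy) * np.exp(-((wl - 556e-9) / 20e-9) ** 2))
    return np.exp(-absorber)

# Build the lookup table over a (blood volume, oxygenation) grid.
bvs = np.linspace(0.1, 2.0, 40)
oxys = np.linspace(0.0, 1.0, 41)
table = np.array([[resp @ toy_model(bv, ox) for ox in oxys] for bv in bvs])

def fit_pixel(measured16):
    """Return (blood_volume, oxygenation) of the best-fitting entry."""
    err = ((table - measured16) ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return bvs[i], oxys[j]

# Example: recover parameters from a synthetic measurement.
print(fit_pixel(resp @ toy_model(1.2, 0.8)))  # ~(1.2, 0.8)
```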
WindCam and MSPI: two cloud and aerosol instrument concepts derived from Terra/MISR heritage
NASA Astrophysics Data System (ADS)
Diner, David J.; Mischna, Michael; Chipman, Russell A.; Davis, Ab; Cairns, Brian; Davies, Roger; Kahn, Ralph A.; Muller, Jan-Peter; Torres, Omar
2008-08-01
The Multi-angle Imaging SpectroRadiometer (MISR) has been acquiring global cloud and aerosol data from polar orbit since February 2000. MISR acquires moderately high-resolution imagery at nine view angles from nadir to 70.5°, in four visible/near-infrared spectral bands. Stereoscopic parallax, time lapse among the nine views, and the variation of radiance with angle and wavelength enable retrieval of geometric cloud and aerosol plume heights, height-resolved cloud-tracked winds, and aerosol optical depth and particle property information. Two instrument concepts based upon MISR heritage are in development. The Cloud Motion Vector Camera, or WindCam, is a simplified version comprised of a lightweight, compact, wide-angle camera to acquire multiangle stereo imagery at a single visible wavelength. A constellation of three WindCam instruments in polar Earth orbit would obtain height-resolved cloud-motion winds with daily global coverage, making it a low-cost complement to a spaceborne lidar wind measurement system. The Multiangle SpectroPolarimetric Imager (MSPI) is aimed at aerosol and cloud microphysical properties, and is a candidate for the National Research Council Decadal Survey's Aerosol-Cloud-Ecosystem (ACE) mission. MSPI combines the capabilities of MISR with those of other aerosol sensors, extending the spectral coverage to the ultraviolet and shortwave infrared and incorporating high-accuracy polarimetric imaging. Based on requirements for the nonimaging Aerosol Polarimeter Sensor on NASA's Glory mission, a degree of linear polarization uncertainty of 0.5% is specified within a subset of the MSPI bands. We are developing a polarization imaging approach using photoelastic modulators (PEMs) to accomplish this objective.
NASA Technical Reports Server (NTRS)
2001-01-01
Surface brightness contrasts accentuated by a thin layer of snow enable a network of rivers, roads, and farmland boundaries to stand out clearly in these MISR images of southeastern Saskatchewan and southwestern Manitoba. The lefthand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The righthand image is a multi-angle false-color view made from the red band data of the 60-degree aftward camera, the nadir camera, and the 60-degree forward camera. In each image, the selected channels are displayed as red, green, and blue, respectively. The data were acquired April 17, 2001 during Terra orbit 7083, and cover an area measuring about 285 kilometers x 400 kilometers. North is at the top.
The junction of the Assiniboine and Qu'Apelle Rivers in the bottom part of the images is just east of the Saskatchewan-Manitoba border. During the growing season, the rich, fertile soils in this area support numerous fields of wheat, canola, barley, flaxseed, and rye. Beef cattle are raised in fenced pastures. To the north, the terrain becomes more rocky and forested. Many frozen lakes are visible as white patches in the top right. The narrow linear, north-south trending patterns about a third of the way down from the upper right corner are snow-filled depressions alternating with vegetated ridges, most probably carved by glacial flow. In the lefthand image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the righthand image, several forested regions are clearly visible in green hues. Since this is a multi-angle composite, the green arises not from the color of the leaves but from the architecture of the surface cover. Progressing southeastward along the Manitoba Escarpment, the forested areas include the Pasquia Hills, the Porcupine Hills, Duck Mountain Provincial Park, and Riding Mountain National Park. The forests are brighter in the nadir than at the oblique angles, probably because more of the snow-covered surface is visible in the gaps between the trees. In contrast, the valley between the Pasquia and Porcupine Hills near the top of the images appears bright red in the lefthand image (indicating high vegetation abundance) but shows a mauve color in the multi-angle view. This means that it is darker in the nadir than at the oblique angles. Examination of imagery acquired after the snow has melted should establish whether this difference is related to the amount of snow on the surface or is indicative of a different type of vegetation structure. Saskatchewan and Manitoba are believed to derive their names from the Cree words for the winding and swift-flowing waters of the Saskatchewan River and for a narrows on Lake Manitoba where the roaring sound of wind and water evoked the voice of the Great Spirit. They are two of Canada's Prairie Provinces; Alberta is the third. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
The Two-faced Whirlpool Galaxy
2011-01-13
NASA image release January 13, 2011 These images by NASA's Hubble Space Telescope show off two dramatically different face-on views of the spiral galaxy M51, dubbed the Whirlpool Galaxy. The image above, taken in visible light, highlights the attributes of a typical spiral galaxy, including graceful, curving arms, pink star-forming regions, and brilliant blue strands of star clusters. In the image here, most of the starlight has been removed, revealing the Whirlpool's skeletal dust structure, as seen in near-infrared light. This new image is the sharpest view of the dense dust in M51. The narrow lanes of dust revealed by Hubble reflect the galaxy's moniker, the Whirlpool Galaxy, as if they were swirling toward the galaxy's core. To map the galaxy's dust structure, researchers collected the galaxy's starlight by combining images taken in visible and near-infrared light. The visible-light image captured only some of the light; the rest was obscured by dust. The near-infrared view, however, revealed more starlight because near-infrared light penetrates dust. The researchers then subtracted the total amount of starlight from both images to see the galaxy's dust structure. The red color in the near-infrared image traces the dust, which is punctuated by hundreds of tiny clumps of stars, each about 65 light-years wide. These stars have never been seen before. The star clusters cannot be seen in visible light because dense dust enshrouds them. The image reveals details as small as 35 light-years across. Astronomers expected to see large dust clouds, ranging from about 100 light-years to more than 300 light-years wide. Instead, most of the dust is tied up in smooth and diffuse dust lanes. An encounter with another galaxy may have prevented giant clouds from forming. Probing a galaxy's dust structure serves as an important diagnostic tool for astronomers, providing invaluable information on how the gas and dust collapse to form stars. Although Hubble is providing incisive views of the internal structure of galaxies such as M51, the planned James Webb Space Telescope (JWST) is expected to produce even crisper images. Researchers constructed the image by combining visible-light exposures from Jan. 18 to 22, 2005, with the Advanced Camera for Surveys (ACS), and near-infrared light pictures taken in December 2005 with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Credit: NASA, ESA, S. Beckwith (STScI), and the Hubble Heritage Team (STScI/AURA) The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.
NASA Astrophysics Data System (ADS)
Liu, Z. X.; Xu, X. Q.; Gao, X.; Xia, T. Y.; Joseph, I.; Meyer, W. H.; Liu, S. C.; Xu, G. S.; Shao, L. M.; Ding, S. Y.; Li, G. Q.; Li, J. G.
2014-09-01
Experimental measurements of edge localized modes (ELMs) observed on the EAST experiment are compared to linear and nonlinear theoretical simulations of peeling-ballooning modes using the BOUT++ code. Simulations predict that the dominant toroidal mode number of the ELM instability becomes larger for lower current, which is consistent with the mode structure captured with visible light using an optical CCD camera. The poloidal mode number of the simulated pressure perturbation shows good agreement with the filamentary structure observed by the camera. The nonlinear simulation is also consistent with the experimentally measured energy loss during an ELM crash and with the radial speed of ELM effluxes measured using a gas puffing imaging diagnostic.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
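A minimal sketch of this fusion scheme is given below, with stated substitutions: PyWavelets' standard DWT stands in for the lifting wavelet transform, plain averaging of the low-frequency band stands in for the robust-PCA step, and the high-frequency rule follows the regional-variance idea described above.

```python
# Wavelet-domain fusion sketch: low-frequency bands are averaged
# (stand-in for robust PCA); each high-frequency coefficient is taken
# from whichever image has the larger local variance.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variance(x, size=5):
    m = uniform_filter(x, size)
    return uniform_filter(x * x, size) - m * m

def fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]              # low-frequency: average
    for subs_a, subs_b in zip(ca[1:], cb[1:]):   # high-frequency: variance rule
        fused.append(tuple(
            np.where(local_variance(a) >= local_variance(b), a, b)
            for a, b in zip(subs_a, subs_b)))
    return pywt.waverec2(fused, wavelet)

# Example: fuse two differently blurred copies of a random test image.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
out = fuse(uniform_filter(img, 7), img)
```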
NASA Astrophysics Data System (ADS)
Hosono, Satsuki; Kawashima, Natsumi; Wollherr, Dirk; Ishimaru, Ichiro
2016-05-01
Distributed networks that collect information on chemical components using high-mobility platforms, such as drones or smartphones, would work effectively for investigating, clarifying and predicting unexpected local terrorism and disasters such as localized torrential downpours. We previously proposed and reported a spectroscopic line-imager for smartphones at this conference. In this paper, we address wide-area spectroscopic-image construction by estimating 6 DOF (degrees of freedom: translations x, y, z and rotations θx, θy, θz) from line data, in order to observe and analyze the surrounding chemical environment. Recently, smartphone movies shot by people who happened to be at the scene have worked effectively for analyzing what kind of phenomenon occurred there. But when a gas tank suddenly blew up, visible-light RGB cameras could not identify which chemical gas components were polluting the surrounding atmosphere. Fourier spectroscopy is well established for chemical-component analysis in laboratory use, but volatile gases should be analyzed promptly at accident sites. Moreover, because humidity absorption in the near- and mid-infrared is highly sensitive, humidity in the sky could be detected from wide-field spectroscopic images. Recently, 6-DOF sensors have also become readily available for estimating the position and attitude of a UAV (unmanned aerial vehicle) or smartphone. For observing long-distance views, however, the accuracy of these angle measurements is insufficient to merge line data, because small angular errors are amplified with distance (the lever-arm effect). Thus, by searching for corresponding pixels between line spectroscopic images, we are trying to estimate the 6 DOF with high accuracy.
2015-08-14
Bursts of pink and red, dark lanes of mottled cosmic dust, and a bright scattering of stars — this NASA/ESA Hubble Space Telescope image shows part of a messy barred spiral galaxy known as NGC 428. It lies approximately 48 million light-years away from Earth in the constellation of Cetus (The Sea Monster). Although a spiral shape is still just about visible in this close-up shot, overall NGC 428’s spiral structure appears to be quite distorted and warped, thought to be a result of a collision between two galaxies. There also appears to be a substantial amount of star formation occurring within NGC 428 — another telltale sign of a merger. When galaxies collide their clouds of gas can merge, creating intense shocks and hot pockets of gas, and often triggering new waves of star formation. NGC 428 was discovered by William Herschel in December 1786. More recently a type of supernova designated SN2013ct was discovered within the galaxy by Stuart Parker of the BOSS (Backyard Observatory Supernova Search) project in Australia and New Zealand, although it is unfortunately not visible in this image. This image was captured by Hubble’s Advanced Camera for Surveys (ACS) and Wide Field and Planetary Camera 2 (WFPC2). Image credit: ESA/Hubble and NASA and S. Smartt (Queen's University Belfast), Acknowledgements: Nick Rose and Flickr user pennine cloud NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the proposal, realization and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing and quantization noises. The method applies to the still-camera automatic working regime and uses a static two-dimensional spatially continuous light-reflecting random target with white-noise properties. The theoretical basis of this random-target method is developed using a simulation model of the linear optical-intensity response, which allows the resultant MTF to be expressed as a normalized and smoothed ratio of the measurable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples and other measurement results demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
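The PSD-ratio idea can be illustrated compactly. The sketch below is a simplified 1-D row-wise version of the paper's 2-D treatment: it recovers an MTF estimate from a white-noise target and its blurred image, with a Gaussian blur merely standing in for a camera's imaging chain.

```python
# Random-target MTF sketch: for a white-noise target, the system MTF
# follows from the square root of the output/input PSD ratio,
# normalized to unity at low frequency.
import numpy as np
from scipy.signal import welch
from scipy.ndimage import gaussian_filter

def mtf_from_random_target(target, image, fs=1.0):
    f, psd_in = welch(target, fs=fs, axis=1)    # row-wise Welch estimates
    _, psd_out = welch(image, fs=fs, axis=1)
    ratio = np.sqrt(psd_out.mean(axis=0) / psd_in.mean(axis=0))
    return f, ratio / ratio[1]                  # normalize near zero frequency

# Example: a Gaussian blur standing in for the camera's imaging chain.
rng = np.random.default_rng(1)
target = rng.standard_normal((256, 1024))
image = gaussian_filter(target, sigma=(0, 2.0))  # blur along rows only
f, mtf = mtf_from_random_target(target, image)
```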
Overview of the Multi-Spectral Imager on the NEAR spacecraft
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1996-07-01
The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft to determine morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9 ° × 2.25 ° field of view. The CCD is passively cooled and the 537×244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced aperture viewport permits full field of view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented and design parameters and trade-offs are discussed.
Monocular depth perception using image processing and machine learning
NASA Astrophysics Data System (ADS)
Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek
2011-10-01
This paper exploits some of the more obscure but inherent properties of camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method involves the use of a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. To achieve this, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, by using a set of derived spatial geometrical relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image-processing steps, followed by exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth-calculation function is refined with a supervised learning algorithm, giving consistent improvement in results. The system thus uses past experience to optimize successive runs. A series of experiments and trials was carried out using the above procedure to prove the concept and its efficacy.
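The pixel-to-real-space mapping at the heart of the method can be sketched as a ray-plane intersection. In the example below the intrinsics, camera pose, and ground-plane equation are hypothetical placeholders; the paper estimates the plane and the relative camera distance in its calibration step rather than assuming them.

```python
# Back-project a pixel through assumed intrinsics into a viewing ray
# and intersect it with a known ground plane.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # hypothetical intrinsics (pixels)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
cam_pos = np.array([0.0, 0.0, 1.5])   # camera 1.5 m above the ground
n, d = np.array([0.0, 0.0, 1.0]), 0.0 # ground plane: n.X + d = 0

def pixel_to_ground(u, v, R):
    """Intersect the viewing ray of pixel (u, v) with the ground plane.
    R is the camera-to-world rotation."""
    ray = R @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    t = -(n @ cam_pos + d) / (n @ ray)  # ray parameter at the plane
    return cam_pos + t * ray            # 3-D point on the terrain

# Example: camera pitched 30 degrees below horizontal, looking along +y
# in a world frame with z up (columns are world images of camera axes).
a = np.deg2rad(30)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -np.sin(a), np.cos(a)],
              [0.0, -np.cos(a), -np.sin(a)]])
print(pixel_to_ground(320, 300, R))  # ~[0.0, 2.2, 0.0]
```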
Real-time implementation of camera positioning algorithm based on FPGA & SOPC
NASA Astrophysics Data System (ADS)
Yang, Mingcao; Qiu, Yuehong
2014-09-01
In recent years, advances in positioning algorithms and FPGAs have made real-time camera positioning achievable with speed and accuracy. Based on an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The work comprises: (1) using a CMOS sensor, driven by FPGA hardware, to capture the pixels of the targets, with visible-light LEDs serving as the target points of the instrument; (2) median-filtering the image prior to feature-point extraction, to suppress noise that the physical properties of the platform introduce into the system; (3) extracting the marker coordinates in an FPGA hardware circuit, using a new iterative threshold-selection method to segment the image; the binarized image is then labeled, and the feature-point coordinates are computed by the center-of-gravity method; and (4) applying the direct linear transformation (DLT) and epipolar-constraint methods to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system. A system-on-a-programmable-chip (SOPC) with dual computing cores performs the matching and coordinate operations separately, thus increasing processing speed.
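A software sketch of steps (2) and (3) follows. On the actual system these run in FPGA fabric; NumPy/SciPy here only illustrate the arithmetic, and the simple iterative threshold is a stand-in for the thesis's new threshold-selection method.

```python
# Median filter, iterative threshold, and center-of-gravity extraction
# of marker blob coordinates.
import numpy as np
from scipy.ndimage import median_filter, label

def iterative_threshold(img, eps=0.5):
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def marker_centroids(img):
    img = median_filter(img, size=3)           # step (2): denoise
    binary = img > iterative_threshold(img)    # step (3): segment
    labels, n = label(binary)
    cents = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        w = img[ys, xs].astype(float)          # intensity-weighted centroid
        cents.append((np.sum(w * xs) / w.sum(), np.sum(w * ys) / w.sum()))
    return cents

# Example: one bright 'LED' spot on a dark background.
frame = np.zeros((64, 64))
frame[30:33, 40:43] = 200.0
print(marker_centroids(frame))  # ~[(41.0, 31.0)]
```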
Noisy Ocular Recognition Based on Three Convolutional Neural Networks
Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung
2017-01-01
In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user’s eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods. PMID:29258217
NASA Technical Reports Server (NTRS)
1981-01-01
A vortex, or large atmospheric storm, is visible at 74° north latitude in this color composite of Voyager 2 Saturn images obtained Aug. 25 from a range of 1 million kilometers (620,000 miles). Three wide-angle-camera images taken through green, orange and blue filters were used. This particular storm system seems to be one of the few large-scale structures in Saturn's polar region, which otherwise is dominated by much smaller-scale features suggesting convection. The darker, bluish structure (upper right) oriented east to west strongly suggests the presence of a jet stream at these high latitudes. The appearance of a strong east-west flow in the polar region could have a major influence on models of Saturn's atmospheric circulation, if the existence of such a flow can be substantiated in time sequences of Voyager images. The smallest features visible in this photograph are about 20 km. (12 mi.) across. The Voyager project is managed for NASA by the Jet Propulsion Laboratory, Pasadena, Calif.
2015-09-14
The night sides of Saturn and Tethys are dark places indeed. We know that shadows are darker areas than sunlit areas, and in space, with no air to scatter the light, shadows can appear almost totally black. Tethys (660 miles or 1,062 kilometers across) is just barely seen in the lower left quadrant of this image below the ring plane and has been brightened by a factor of three to increase its visibility. The wavy outline of Saturn's polar hexagon is visible at top center. This view looks toward the sunlit side of the rings from about 10 degrees above the ring plane. The image was taken with the Cassini spacecraft wide-angle camera on Jan. 15, 2015 using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 752 nanometers. The view was obtained at a distance of approximately 1.5 million miles (2.4 million kilometers) from Saturn. Image scale is 88 miles (141 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18333
2015-01-05
What's that bright point of light in the outer A ring? It's a star, bright enough to be visible through the ring! Quick, make a wish! This star -- seen in the lower right quadrant of the image -- was not captured by coincidence, it was part of a stellar occultation. By monitoring the brightness of stars as they pass behind the rings, scientists using this powerful observation technique can inspect detailed structures within the rings and how they vary with location. This view looks toward the sunlit side of the rings from about 44 degrees above the ringplane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Oct. 8, 2013. The view was acquired at a distance of approximately 1.1 million miles (1.8 million kilometers) from the rings and at a Sun-Rings-Spacecraft, or phase, angle of 96 degrees. Image scale is 6.8 miles (11 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18297
Thermal-to-visible face recognition using partial least squares.
Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson
2015-03-01
Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
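The one-vs-all PLS model building stage can be sketched briefly. The example below trains one PLSRegression model per gallery subject and scores a probe against each; the feature extraction and modality-gap preprocessing stages are abstracted into plain vectors, and the synthetic data are placeholders.

```python
# One-vs-all PLS gallery: one regression model per subject, trained to
# separate that subject's feature vectors from everyone else's.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def build_gallery(features, subject_ids, n_components=10):
    models = {}
    for sid in np.unique(subject_ids):
        y = np.where(subject_ids == sid, 1.0, -1.0)   # one-vs-all labels
        m = PLSRegression(n_components=n_components)
        m.fit(features, y)
        models[sid] = m
    return models

def identify(models, probe_feature):
    scores = {sid: m.predict(probe_feature[None, :]).item()
              for sid, m in models.items()}
    return max(scores, key=scores.get)                # highest-scoring subject

# Example with synthetic features (one cluster per subject).
rng = np.random.default_rng(2)
ids = np.repeat(np.arange(5), 20)
feats = rng.standard_normal((100, 64)) + ids[:, None]
models = build_gallery(feats, ids)
print(identify(models, feats[3] + 0.1))  # -> 0
```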
2004-03-19
Bands and spots in Saturn's atmosphere, including a dark band south of the equator with a scalloped border, are visible in this image from the Cassini-Huygens spacecraft. The narrow angle camera took the image in blue light on Feb. 29, 2004. The distance to Saturn was 59.9 million kilometers (37.2 million miles). The image scale is 359 kilometers (223 miles) per pixel. Three of Saturn's moons are seen in the image: Enceladus (499 kilometers, or 310 miles across) at left; Mimas (398 kilometers, or 247 miles across) left of Saturn's south pole; and Rhea (1,528 kilometers, or 949 miles across) at lower right. The imaging team enhanced the brightness of the moons to aid visibility. The BL1 broadband spectral filter (centered at 451 nanometers) allows Cassini to "see" light in a part of the spectrum visible as the color blue to human eyes. Scientists can combine images made with this filter with those taken with red and green filters to create full-color composites. Scientists can also assess cloud heights by combining images from the blue filter with images taken in other spectral regions. For example, the bright clouds that form the equatorial zone are the highest in altitude and have pressures at their tops of about one quarter of Earth's atmospheric pressure at sea level. The cloud tops at middle latitudes are lower in altitude and have higher pressures of about half that found at sea level. Analysis of Saturn images like this one will be extremely useful to researchers assessing cloud altitudes during the Cassini-Huygens mission. http://photojournal.jpl.nasa.gov/catalog/PIA05383
NASA Technical Reports Server (NTRS)
1978-01-01
In public and private archives throughout the world there are many historically important documents that have become illegible with the passage of time. They have faded, been erased, acquired mold, water and dirt stains, suffered blotting or lost readability in other ways. While ultraviolet and infrared photography are widely used to enhance deteriorated legibility, these methods are more limited in their effectiveness than the space-derived image enhancement technique. The aim of the JPL effort with Caltech and others is to better define the requirements for a system to restore illegible information for study at a low page-cost with simple operating procedures. The investigators' principal tools are a vidicon camera and an image processing computer program, the same equipment used to produce sharp space pictures. The camera is the same type as those on NASA's Mariner spacecraft which returned to Earth thousands of images of Mars, Venus and Mercury. Space imagery works something like television. The vidicon camera does not take a photograph in the ordinary sense; rather it "scans" a scene, recording different light and shade values which are reproduced as a pattern of dots, hundreds of dots to a line, hundreds of lines in the total picture. The dots are transmitted to an Earth receiver, where they are assembled line by line to form a picture like that on the home TV screen.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image-processing chain, just before compressing a given raster image. The second is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide a more than two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
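The scaling-factor mechanism can be illustrated with a short sketch: the default JPEG luminance quantization table is multiplied by a scalar driven by a blind noise estimate, so stronger noise permits coarser quantization. The MAD-based estimator and the noise-to-scale rule below are hypothetical placeholders, not the mapping derived in the paper.

```python
# Scale the default JPEG luminance quantization table by a factor
# chosen from a crude blind noise estimate.
import numpy as np

# Default JPEG luminance quantization table (Annex K of the standard).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def estimate_noise_sigma(img):
    """Blind noise estimate from the median absolute deviation of
    horizontal differences (a stand-in for the paper's estimator)."""
    d = np.diff(img.astype(float), axis=1)
    return 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2)

def adaptive_table(img, base=1.0, gain=0.25):
    scale = base + gain * estimate_noise_sigma(img)   # hypothetical rule
    return np.clip(np.round(Q50 * scale), 1, 255)
```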
Atmospheric imaging results from the Mars Exploration Rovers
NASA Astrophysics Data System (ADS)
Lemmon, M.; Athena Science Team
The Athena science payload of the Spirit and Opportunity Mars Exploration Rovers contains instruments capable of measuring radiometric properties of the Martian atmosphere in the visible and the thermal infrared. Remote sensing instruments include Pancam, a color panoramic camera covering 0.4-1.0 microns, and Mini-TES, a thermal infrared spectrometer covering 5-29 microns. Results from atmospheric imaging by Pancam will be covered here. Visible and near-infrared aerosol opacity is monitored by direct solar imaging. Early results show dust opacity near 1 when both rovers landed. Both Spirit and Opportunity have seen dust opacity fall with time, somewhat faster at Spirit's Gusev crater landing site. Diurnal variations are also being monitored at both sites. There is no direct probe of the dust's vertical distribution, but images of the Sun near the horizon and of the twilight will provide constraints on the dust distribution. Dust optical properties and a cross-section weighted aerosol size will be estimated from Pancam images of the sky at varying geometries and times of day. A series of sky imaging sequences has been run with varying illumination geometry. The observations are similar to those reported for Mars Pathfinder.
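The direct solar imaging measurement reduces to Beer-Lambert extinction: the optical depth follows from the ratio of the measured solar flux to the expected top-of-atmosphere flux, divided by the airmass along the slant path. The sketch below assumes a simple plane-parallel airmass model; the expected flux F0 and the model are assumptions of this sketch, not the mission's calibrated pipeline.

```python
# Aerosol optical depth from direct solar imaging via Beer-Lambert.
import numpy as np

def optical_depth(flux_measured, flux_toa, solar_zenith_deg):
    airmass = 1.0 / np.cos(np.deg2rad(solar_zenith_deg))  # plane-parallel
    return -np.log(flux_measured / flux_toa) / airmass

# Example: 40% transmission at 30 degrees zenith angle -> tau ~ 0.79.
print(optical_depth(0.4, 1.0, 30.0))
```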
Automatic Detection of Diseased Tomato Plants Using Thermal and Stereo Visible Light Images
Raza, Shan-e-Ahmed; Prince, Gillian; Clarkson, John P.; Rajpoot, Nasir M.
2015-01-01
Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a plant affected by disease is known to be influenced by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible-light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission. PMID:25861025
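A hedged sketch of such a pipeline is shown below: per-region statistics from co-registered thermal, visible, and depth data form a feature vector, and a conventional classifier is trained on labeled regions. The specific statistics, the SVM, the global-mean constant, and the synthetic data are illustrative stand-ins for the paper's novel feature set and experiments.

```python
# Per-region statistics from thermal, visible, and depth data feed a
# conventional classifier.
import numpy as np
from sklearn.svm import SVC

THERMAL_GLOBAL_MEAN = 22.0  # hypothetical scene-wide mean temperature (C)

def region_features(thermal, gray, depth):
    """Local + global statistics for one plant region."""
    return np.array([
        thermal.mean(), thermal.std(),         # local thermal profile
        thermal.mean() - THERMAL_GLOBAL_MEAN,  # global thermal contrast
        gray.mean(), gray.std(),               # visible-light statistics
        depth.mean(), depth.std(),             # canopy depth statistics
    ])

# Synthetic placeholder regions: healthy vs. diseased.
rng = np.random.default_rng(3)
healthy = [region_features(22 + rng.standard_normal(50), rng.random(50),
                           1 + 0.1 * rng.standard_normal(50)) for _ in range(40)]
diseased = [region_features(24 + 2 * rng.standard_normal(50), rng.random(50),
                            1 + 0.3 * rng.standard_normal(50)) for _ in range(40)]
X = np.vstack(healthy + diseased)
y = np.array([0] * 40 + [1] * 40)
clf = SVC(kernel="rbf").fit(X, y)
```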
Early forest fire detection using principal component analysis of infrared video
NASA Astrophysics Data System (ADS)
Saghri, John A.; Radjabi, Ryan; Jacobs, John T.
2011-09-01
A land-based early forest fire detection scheme which exploits the infrared (IR) temporal signature of a fire plume is described. Unlike common land-based and/or satellite-based techniques, which rely on measurement and discrimination of the fire plume directly from its infrared and/or visible reflectance imagery, this scheme is based on exploitation of the fire plume's temporal signature, i.e., temperature fluctuations over the observation period. The method is simple and relatively inexpensive to implement, and the false alarm rate is expected to be lower than that of existing methods. Land-based infrared (IR) cameras are installed in a step-stare-mode configuration in potential fire-prone areas. The sequence of IR video frames from each camera is digitally processed to determine if there is a fire within the camera's field of view (FOV). The process involves applying a principal component transformation (PCT) to each nonoverlapping sequence of video frames from the camera to produce a corresponding sequence of temporally uncorrelated principal component (PC) images. Since pixels that form a fire plume exhibit statistically similar temporal variation (i.e., have a unique temporal signature), PCT conveniently renders the footprint/trace of the fire plume in low-order PC images. The PC image which best reveals the trace of the fire plume is then selected and spatially filtered via simple threshold and median filter operations to remove background clutter, such as traces of tree branches moving in the wind.
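The processing chain can be sketched in a few lines: frames are unfolded to a (frames x pixels) matrix, a principal component transform decorrelates the temporal dimension, and a PC image is thresholded and median-filtered. Taking the leading PC below is a simplification; the paper selects whichever PC best reveals the plume trace.

```python
# PCT over a stare-period frame stack, followed by threshold and
# median-filter clutter removal.
import numpy as np
from scipy.ndimage import median_filter

def fire_trace(frames):
    """frames: array of shape (n_frames, H, W) from one stare period."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    X -= X.mean(axis=0)                        # remove per-pixel temporal mean
    # PCT via SVD: rows of Vt are temporally uncorrelated PC images.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pc = Vt[0].reshape(h, w)                   # leading PC image (simplified)
    mask = np.abs(pc) > 3 * np.abs(pc).std()   # simple threshold
    return median_filter(mask.astype(np.uint8), size=3)

# Example: a flickering hot spot embedded in a static noisy background.
rng = np.random.default_rng(4)
frames = 300 + rng.standard_normal((32, 64, 64))
frames[:, 20:24, 30:34] += 20 * rng.random(32)[:, None, None]
mask = fire_trace(frames)
```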
Hydrogen Flame Imaging System Soars to New, Different Heights
NASA Technical Reports Server (NTRS)
2002-01-01
When Judy and Dave Duncan of Auburn, Calif.-based Duncan Technologies Inc. (DTI) developed their color hydrogen flame imaging system in the early 1990s, their market prospects were limited. 'We talked about commercializing the technology in the hydrogen community, but we also looked at commercialization on a much broader aspect. While there were some hydrogen applications, the market was not large enough to support an entire company; also, safety issues were a concern,' said Judy Duncan, owner and CEO of Duncan Technologies. Using the basic technology developed under the Small Business Innovation Research (SBIR) Program, DTI conducted market research, identified other applications, formulated a plan for next-generation development, and implemented a far-reaching marketing strategy. 'We took that technology, reinvested our own funds and energy into a second-generation design of the overall camera electronics, and deployed that basic technology initially in a series of what we call multi-spectral cameras; cameras that could image in both the visible range and the infrared,' explains Duncan. 'The SBIR program allowed us to develop the technology to do a 3CCD camera, which very few companies in the world do, particularly not small companies. Because we designed our own prism and specified the coating as we had for the hydrogen application, we were able to create a custom spectral configuration which could support varying types of research and applications.' As a result, Duncan Technologies Inc. of Auburn, Calif., has achieved a milestone $1 million in sales.
Chew, Avenell L.; Sampson, Danuta M.; Kashani, Irwin; Chen, Fred K.
2017-01-01
Purpose We compared cone density measurements derived from the center of gaze-directed single images with reconstructed wide-field montages using the rtx1 adaptive optics (AO) retinal camera. Methods A total of 29 eyes from 29 healthy subjects were imaged with the rtx1 camera. Of 20 overlapping AO images acquired, 12 (at 3.2°, 5°, and 7°) were used for calculating gaze-directed cone densities. Wide-field AO montages were reconstructed and cone densities were measured at the corresponding 12 loci as determined by field projection relative to the foveal center aligned to the foveal dip on optical coherence tomography. Limits of agreement in cone density measurement between single AO images and wide-field AO montages were calculated. Results Cone density measurements failed in 1 or more gaze directions or retinal loci in up to 58% and 33% of the subjects using single AO images or wide-field AO montage, respectively. Although there were no significant overall differences between cone densities derived from single AO images and wide-field AO montages at any of the 12 gazes and locations (P = 0.01–0.65), the limits of agreement between the two methods ranged from as narrow as −2200 to +2600, to as wide as −4200 to +3800 cones/mm2. Conclusions Cone density measurement using the rtx1 AO camera is feasible using both methods. Local variation in image quality and altered visibility of cones after generating montages may contribute to the discrepancies. Translational Relevance Cone densities from single AO images are not interchangeable with wide-field montage-derived measurements. PMID:29285417
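The agreement analysis referenced here is the standard Bland-Altman construction, sketched below with synthetic placeholder values: the limits of agreement are the mean difference plus or minus 1.96 standard deviations of the paired differences.

```python
# Bland-Altman limits of agreement between two measurement methods.
import numpy as np

def limits_of_agreement(a, b):
    d = np.asarray(a, float) - np.asarray(b, float)
    s = d.std(ddof=1)
    return d.mean() - 1.96 * s, d.mean() + 1.96 * s

# Synthetic example: 29 paired cone densities (cones/mm^2) at one locus.
rng = np.random.default_rng(5)
single = 16000 + 1500 * rng.standard_normal(29)    # single-image values
montage = single + 200 + 1200 * rng.standard_normal(29)
print(limits_of_agreement(single, montage))
```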
A study on a portable fluorescence imaging system
NASA Astrophysics Data System (ADS)
Chang, Han-Chao; Wu, Wen-Hong; Chang, Chun-Li; Huang, Kuo-Cheng; Chang, Chung-Hsing; Chiu, Shang-Chen
2011-09-01
The fluorescent reaction occurs when an organism or dye, excited by UV light (200-405 nm), emits light at a specific frequency, usually in the visible or near-infrared range (405-900 nm). During UV irradiation, the photosensitive agent is induced to start the photochemical reaction. Fluorescence images can be used for fluorescence diagnosis, after which photodynamic therapy can be given for dental diseases and skin cancer; this has become a useful tool for providing scientific evidence in many biomedical studies. However, most methods of acquiring fluorescent biological traces remain at a primitive stage, relying on the naked eye and the researcher's subjective judgment. This article presents a portable camera that records the fluorescence image and compensates for the deficits of observer competence and subjective judgment. Furthermore, the portable camera offers a 375 nm UV-LED exciting light source so that the user can record fluorescence images, turning the recorded image into persuasive scientific evidence. In addition, to raise the signal-to-noise ratio, the signal-processing module not only amplifies the fluorescence signal by up to 70% but also significantly decreases the noise from environmental light in banknote and nude-mouse testing.
Research on a solid state-streak camera based on an electro-optic crystal
NASA Astrophysics Data System (ADS)
Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang
2006-06-01
With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely utilized in measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and making measurements of laser-induced discharges. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. The current streak camera is based on a sweep electric pulse and an image-converting tube with a wavelength-sensitive photocathode ranging from the x-ray to the near-infrared region. This kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new-style streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera accomplishes time resolution by direct photon-beam deflection using the electro-optic effect, and can replace the current streak camera from the visible to the near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10^-12 m/V), and an optimized optical system may lead to a time resolution better than 1 ns.
Yang, Hualei; Yang, Xi; Heskel, Mary; ...
2017-04-28
Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared camera-based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). Here we found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near-surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
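The camera-NDVI used here follows the standard NDVI definition computed per pixel from two co-registered bands. A minimal sketch, assuming two reflectance arrays from the visible and NIR channels (the function name and the divide-by-zero guard are illustrative choices, not the authors' processing chain):

```python
import numpy as np

def camera_ndvi(nir, vis):
    """Standard NDVI = (NIR - VIS) / (NIR + VIS) from co-registered bands.

    The small epsilon guarding against division by zero is an
    illustrative choice.
    """
    nir = nir.astype(np.float64)
    vis = vis.astype(np.float64)
    return (nir - vis) / (nir + vis + 1e-12)
```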
NASA Technical Reports Server (NTRS)
2005-01-01
These views, taken two hours apart, demonstrate the dramatic variability in the structure of Saturn's intriguing F ring. In the image at the left, ringlets in the F ring and Encke Gap display distinctive kinks, and there is a bright patch of material on the F ring's inner edge. Saturn's moon Janus (181 kilometers, or 113 miles across) is shown here, partly illuminated by reflected light from the planet. At the right, Prometheus (102 kilometers, or 63 miles across) orbits ahead of the radial striations in the F ring, called 'drapes' by scientists. The drapes appear to be caused by successive passes of Prometheus as it reaches the greatest distance (apoapse) in its orbit of Saturn. Also in this image, the outermost ringlet visible in the Encke Gap displays distinctive bright patches. These views were obtained from about three degrees below the ring plane. The images were taken in visible light with the Cassini spacecraft narrow-angle camera on June 29, 2005, when Cassini was about 1.5 million kilometers (900,000 miles) from Saturn. The image scale is about 9 kilometers (6 miles) per pixel.
2004-12-20
Three sizeable impact craters, including one with a marked central peak, lie along the line that divides day and night on the Saturnian moon, Dione (dee-OH-nee), which is 1,118 kilometers, or 695 miles across. The low angle of the Sun along the terminator, as this dividing line is called, brings details like these craters into sharp relief. This view shows principally the leading hemisphere of Dione. Some of this moon's bright, wispy streaks can be seen curling around its eastern limb. Cassini imaged the wispy terrain at high resolution during its first Dione flyby on Dec. 14, 2004. This image was taken in visible light with the Cassini spacecraft narrow angle camera on Nov. 1, 2004, at a distance of 2.4 million kilometers (1.5 million miles) from Dione and at a Sun-Dione-spacecraft, or phase, angle of 106 degrees. North is up. The image scale is 14 kilometers (8.7 miles) per pixel. The image has been magnified by a factor of two and contrast-enhanced to aid visibility of surface features. http://photojournal.jpl.nasa.gov/catalog/PIA06542
2017-12-08
Spiral galaxy NGC 3274 is a relatively faint galaxy located over 20 million light-years away in the constellation of Leo (The Lion). This NASA/ESA Hubble Space Telescope image comes courtesy of Hubble's Wide Field Camera 3 (WFC3), whose multi-color vision allows astronomers to study a wide range of targets, from nearby star formation to galaxies in the most remote regions of the cosmos. This image combines observations gathered in five different filters, bringing together ultraviolet, visible and infrared light to show off NGC 3274 in all its glory. NGC 3274 was discovered by Wilhelm Herschel in 1783. The galaxy PGC 213714 is also visible on the upper right of the frame, located much farther away from Earth. Image Credit: ESA/Hubble & NASA, D. Calzetti
NASA Astrophysics Data System (ADS)
Gilmore, Mark; Hsu, Scott
2015-11-01
The goal of the Plasma Liner eXperiment (PLX-α) at Los Alamos National Laboratory is to establish the viability of creating a spherically imploding plasma liner for MIF and HED applications, using a spherical array of supersonic plasma jets launched by innovative contoured-gap coaxial plasma guns. PLX-α experiments will focus in particular on establishing the ram pressure and uniformity scalings of partial and fully spherical plasma liners. In order to characterize these parameters experimentally, a suite of diagnostics is planned, including multi-camera fast imaging, a 16-channel visible interferometer (upgraded from 8 channels) with a reconfigurable, fiber-coupled front end, and visible and VUV high-resolution and survey spectroscopy. Tomographic reconstruction and data fusion techniques will be used in conjunction with interferometry, imaging, and synthetic diagnostics from modeling to characterize liner uniformity in 3D. Diagnostic and data analysis design, implementation, and status will be presented. Supported by the Advanced Research Projects Agency–Energy, U.S. Department of Energy.
2016-10-17
Pandora is seen here, in isolation beside Saturn's kinked and constantly changing F ring. Pandora (near upper right) is 50 miles (81 kilometers) wide. The moon has an elongated, potato-like shape (see PIA07632). Two faint ringlets are visible within the Encke Gap, near lower left. The gap is about 202 miles (325 kilometers) wide. The much narrower Keeler Gap, which lies outside the Encke Gap, is maintained by the diminutive moon Daphnis (not seen here). This view looks toward the sunlit side of the rings from about 23 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Aug. 12, 2016. The view was acquired at a distance of approximately 907,000 miles (1.46 million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 113 degrees. Image scale is 6 miles (9 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20504
View of STS-100 orbiter Endeavour approaching for docking
2001-04-21
ISS002-E-5876 (21 April 2001) --- A distant view of the Space Shuttle Endeavour preparing to dock with the International Space Station (ISS) during the STS-100 mission. The STS-100 crewmembers are delivering the Canadarm2, or Space Station Remote Manipulator System (SSRMS), and equipment stowed in the Multipurpose Logistics Module (MPLM) Raffaello to the ISS, both of which are visible in Endeavour's payload bay. The image was taken with a digital still camera.
View of STS-100 orbiter Endeavour approaching for docking
2001-04-21
ISS002-E-5887 (21 April 2001) --- A view of the Space Shuttle Endeavour preparing to dock with the International Space Station (ISS) during the STS-100 mission. The STS-100 crewmembers are delivering the Canadarm2, or Space Station Remote Manipulator System (SSRMS), and equipment stowed in the Multipurpose Logistics Module (MPLM) Raffaello to the ISS, both of which are visible in Endeavour's payload bay. The image was taken with a digital still camera.
2016-09-19
Pan may be small as satellites go, but like many of Saturn's ring moons, it has a very visible effect on the rings. Pan (17 miles or 28 kilometers across, left of center) holds open the Encke Gap and shapes the ever-changing ringlets within the gap (some of which can be seen here). In addition to raising waves in the A and B rings, other moons help shape the F ring and the outer edge of the A ring, and hold open the Keeler Gap. This view looks toward the sunlit side of the rings from about 8 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 2, 2016. The view was acquired at a distance of approximately 840,000 miles (1.4 million kilometers) from Saturn and at a sun-Saturn-spacecraft, or phase, angle of 128 degrees. Image scale is 5 miles (8 kilometers) per pixel. Pan has been brightened by a factor of two to enhance its visibility. http://photojournal.jpl.nasa.gov/catalog/PIA20499
Single Pixel Black Phosphorus Photodetector for Near-Infrared Imaging.
Miao, Jinshui; Song, Bo; Xu, Zhihao; Cai, Le; Zhang, Suoming; Dong, Lixin; Wang, Chuan
2018-01-01
Infrared imaging systems have a wide range of military and civil applications, and 2D nanomaterials have recently emerged as potential sensing materials that may outperform conventional ones such as HgCdTe, InGaAs, and InSb. As an example, 2D black phosphorus (BP) thin film has a thickness-dependent direct bandgap with low shot noise and noncryogenic operation for visible to mid-infrared photodetection. In this paper, the use of a single-pixel photodetector made with few-layer BP thin film for near-infrared imaging applications is demonstrated. The imaging is achieved by combining the photodetector with a digital micromirror device to encode and subsequently reconstruct the image based on a compressive sensing algorithm. Stationary images of a near-infrared laser spot (λ = 830 nm) with up to 64 × 64 pixels are captured using this single-pixel BP camera with 2000 measurements, which is only half the total number of pixels. The imaging platform demonstrated in this work circumvents the grand challenge of scalable BP material growth for photodetector array fabrication and shows the efficacy of utilizing the outstanding performance of the BP photodetector for future high-speed infrared camera applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
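To make the single-pixel acquisition scheme concrete, here is a toy sketch under stated assumptions: random binary DMD mask patterns form the measurement matrix, the "bucket" detector yields one reading per pattern, and a ridge-regularized least-squares solve stands in for the paper's compressive-sensing reconstruction (sizes, seed, and solver are illustrative, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32 * 32                    # pixels in the scene
m = n // 2                     # half as many measurements as pixels
x_true = np.zeros(n)
x_true[300:340] = 1.0          # synthetic "laser spot" scene

A = rng.integers(0, 2, size=(m, n)).astype(float)  # binary DMD mask patterns
y = A @ x_true                 # one single-pixel ("bucket") reading per pattern

lam = 1e-2                     # ridge regularization; a stand-in for the
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)  # CS solver
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```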
About Jupiter's Reflectance Function in JunoCam Images
NASA Astrophysics Data System (ADS)
Eichstaedt, G.; Orton, G. S.; Momary, T.; Hansen, C. J.; Caplinger, M.
2017-09-01
NASA's Juno spacecraft has successfully completed several perijove passes. JunoCam is Juno's visible light and infrared camera. It was added to the instrument complement to investigate Jupiter's polar regions, and for education and public outreach purposes. Images of Jupiter taken by JunoCam have been revealing effects that can be interpreted as caused by a haze layer. This presumed haze layer appears to be structured, and it partially obscures Jupiter's cloud top. With empirical investigation of Jupiter's reflectance function we intend to separate light contributed by haze from light reflected off Jupiter's cloud tops, enabling both layers to be investigated separately.
Spirit Beside 'Home Plate,' Sol 1809
NASA Technical Reports Server (NTRS)
2009-01-01
NASA Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical projection with geometric seam correction.
The IHW island network [International Halley Watch]
NASA Technical Reports Server (NTRS)
Niedner, Malcolm B., Jr.; Liller, William
1987-01-01
Early astronomical photography of comets at perihelion encouraged the establishment of an International Halley Watch (IHW) Team for regularly photographing the Comet. The February 1986 period was particularly troublesome due to the limitations of cometary visibility in the Southern Hemisphere. Schmidt cameras were placed on Tahiti, Easter Island, Faraday Station on the Antarctic Peninsula, Reunion Island and in South Africa. Blue- and red-filter B/W images were obtained every night and color prints were occasionally shot. Each night's images were examined before the next night's photography. Several interesting anecdotes are recounted from shipping, manning and operation of the telescopes.
Two-dimensional vacuum ultraviolet images in different MHD events on the EAST tokamak
NASA Astrophysics Data System (ADS)
Zhijun, WANG; Xiang, GAO; Tingfeng, MING; Yumin, WANG; Fan, ZHOU; Feifei, LONG; Qing, ZHUANG; EAST Team
2018-02-01
A high-speed vacuum ultraviolet (VUV) imaging telescope system has been developed to measure the edge plasma emission (including the pedestal region) in the Experimental Advanced Superconducting Tokamak (EAST). The key optics of the high-speed VUV imaging system consists of three parts: an inverse Schwarzschild-type telescope, a micro-channel plate (MCP) and a visible imaging high-speed camera. The VUV imaging system has been operated routinely in the 2016 EAST experiment campaign. The dynamics of two-dimensional (2D) images of magnetohydrodynamic (MHD) instabilities, such as edge localized modes (ELMs), tearing-like modes and disruptions, have been observed using this system. The related VUV images are presented in this paper, indicating that the VUV imaging system is a promising tool that can be applied successfully under various plasma conditions.
2018-01-15
In this view, individual layers of haze can be distinguished in the upper atmosphere of Titan, Saturn's largest moon. Titan's atmosphere features a rich and complex chemistry originating from methane and nitrogen and evolving into complex molecules, eventually forming the smog that surrounds the moon. This natural color image was taken in visible light with the Cassini spacecraft wide-angle camera on March 31, 2005, at a distance of approximately 20,556 miles (33,083 kilometers) from Titan. The view looks toward the north polar region on the moon's night side. Part of Titan's sunlit crescent is visible at right. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21902
Variable field-of-view visible and near-infrared polarization compound-eye endoscope.
Kagawa, K; Shogenji, R; Tanaka, E; Yamada, K; Kawahito, S; Tanida, J
2012-01-01
A multi-functional compound-eye endoscope enabling variable field-of-view and polarization imaging as well as extremely deep focus is presented, based on a compact compound-eye camera called TOMBO (thin observation module by bound optics). Fixed and movable mirrors are introduced to control the field of view. A metal-wire-grid polarizer thin film, applicable to both visible and near-infrared light, is attached to the lenses in TOMBO and to the light sources. Controlling the field of view, polarization, and wavelength of the illumination enables several observation modes, such as three-dimensional shape measurement, wide-field-of-view imaging, and close-up observation of superficial tissues and structures beneath the skin.
Robust Vision-Based Pose Estimation Algorithm for an UAV with Known Gravity Vector
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness from the external orientation estimation algorithm. The accuracy of the solution depends strongly on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In that case a solution can be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation, which is subject to large errors for complex reference-point configurations. This paper is focused on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. Experimental evaluation proved the algorithm's computational efficiency and robustness against errors in reference point positions and complex configurations.
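The role of the known gravity vector can be illustrated with a short sketch: measuring the gravity direction in the camera frame fixes roll and pitch, leaving only yaw and translation to be recovered from the 2 reference points. A minimal sketch of that alignment rotation via Rodrigues' formula (names are illustrative; this is not the paper's full solver):

```python
import numpy as np

def gravity_alignment(g_cam):
    """Rotation taking the measured gravity direction (camera frame) onto
    the world down axis, fixing roll and pitch.  Rodrigues' formula for
    rotating one unit vector onto another."""
    g = np.asarray(g_cam, float)
    g = g / np.linalg.norm(g)
    down = np.array([0.0, 0.0, -1.0])
    v = np.cross(g, down)                 # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), g @ down    # sin and cos of the rotation angle
    if s < 1e-12:                         # already aligned or exactly opposite
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])     # cross-product matrix of v
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)
```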
Pulsed laser linescanner for a backscatter absorption gas imaging system
Kulp, Thomas J.; Reichardt, Thomas A.; Schmitt, Randal L.; Bambha, Ray P.
2004-02-10
An active (laser-illuminated) imaging system is described that is suitable for use in backscatter absorption gas imaging (BAGI). A BAGI imager operates by imaging a scene as it is illuminated with radiation that is absorbed by the gas to be detected. Gases become "visible" in the image when they attenuate the illumination, creating a shadow in the image. This disclosure describes a BAGI imager that operates in a linescanned manner using a high repetition rate pulsed laser as its illumination source. The format of this system allows differential imaging, in which the scene is illuminated with light at two or more wavelengths--one or more absorbed by the gas and one or more not absorbed. The system is designed to accomplish imaging in a manner that is insensitive to motion of the camera, so that it can be held in the hand of an operator or operated from a moving vehicle.
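The two-wavelength differential scheme can be illustrated with a short sketch based on the Beer-Lambert law; function and argument names are assumptions, not part of the patented system:

```python
import numpy as np

def differential_absorption(on_img, off_img):
    """Differential BAGI image from on- and off-resonant illumination.

    By Beer-Lambert, the gas attenuates only the absorbed wavelength, so
    tau ~ ln(I_off / I_on) highlights the plume while common reflectance
    and illumination terms cancel.
    """
    on = np.clip(on_img.astype(np.float64), 1e-6, None)   # avoid log(0)
    off = np.clip(off_img.astype(np.float64), 1e-6, None)
    return np.log(off / on)
```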
NASA Technical Reports Server (NTRS)
1988-01-01
Papers concerning remote sensing applications for exploration geology are presented, covering topics such as remote sensing technology, data availability, frontier exploration, and exploration in mature basins. Other topics include offshore applications, geobotany, mineral exploration, engineering and environmental applications, image processing, and prospects for future developments in remote sensing for exploration geology. Consideration is given to the use of data from Landsat, MSS, TM, SAR, short wavelength IR, the Geophysical Environmental Research Airborne Scanner, gas chromatography, sonar imaging, the Airborne Visible-IR Imaging Spectrometer, field spectrometry, airborne thermal IR scanners, SPOT, AVHRR, SIR, the Large Format Camera, and multitemporal satellite photographs.
NASA Technical Reports Server (NTRS)
2005-01-01
17 August 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows kidney bean-shaped pits, and other pits, formed by erosion in a landscape of frozen carbon dioxide. This image shows one of about a dozen different patterns that are common in various locations across the martian south polar residual cap, an area that has been receiving intense scrutiny from the MGS MOC this year because it is visible on every orbit and in daylight for most of 2005. Location near: 86.9°S, 6.9°W. Image width: 3 km (1.9 mi). Illumination from: upper left. Season: Southern Spring
NASA Technical Reports Server (NTRS)
2006-01-01
This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows several small, dark sand dunes and a small crater (about 1 kilometer in diameter) within a much larger crater (not visible in this image). The floor of the larger crater is rough and has been eroded with time. The floor of the smaller crater contains windblown ripples. The steep faces of the dunes point to the east (right), indicating that the dominant winds blew from the west (left). This scene is located near 38.5°S, 347.1°W, and covers an area approximately 3 km (1.9 mi) wide. Sunlight illuminates the landscape from the upper left. This southern autumn image was acquired on 1 July 2006.
Calibration of imaging parameters for space-borne airglow photography using city light positions
NASA Astrophysics Data System (ADS)
Hozumi, Yuta; Saito, Akinori; Ejiri, Mitsumu K.
2016-09-01
A new method for calibrating imaging parameters of photographs taken from the International Space Station (ISS) is presented in this report. Airglow in the mesosphere and the F-region ionosphere was captured on the limb of the Earth with a digital single-lens reflex camera from the ISS by astronauts. To utilize the photographs as scientific data, imaging parameters, such as the angle of view, exact position, and orientation of the camera, should be determined because they are not measured at the time of imaging. A new calibration method using city light positions shown in the photographs was developed to determine these imaging parameters with high accuracy suitable for airglow study. Applying the pinhole camera model, the apparent city light positions on the photograph are matched with the actual city light locations on Earth, which are derived from the global nighttime stable light map data obtained by the Defense Meteorological Satellite Program satellite. The correct imaging parameters are determined in an iterative process by matching the apparent positions on the image with the actual city light locations. We applied this calibration method to photographs taken on August 26, 2014, and confirmed that the result is correct. The precision of the calibration was evaluated by comparing the results from six different photographs with the same imaging parameters. The precisions in determining the camera position and orientation are estimated to be ±2.2 km and ±0.08°, respectively. The 0.08° difference in the orientation yields a 2.9-km difference at a tangential point of 90 km in altitude. The airglow structures in the photographs were mapped to geographical points using the calibrated imaging parameters and compared with a simultaneous observation by the Visible and near-Infrared Spectral Imager of the Ionosphere, Mesosphere, Upper Atmosphere, and Plasmasphere mapping mission installed on the ISS. The comparison shows good agreements and supports the validity of the calibration. This calibration technique makes it possible to utilize photographs taken on low-Earth-orbit satellites in the nighttime as a reference for the airglow and aurora structures.
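A minimal sketch of the iterative matching step, assuming a simple pinhole model with a single focal-length parameter and city-light coordinates already expressed in a camera-centered world frame (the variable names, Euler-angle parameterization, and least-squares refinement are illustrative choices, not the authors' exact procedure):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, city_xyz):
    # Pinhole projection of known city-light locations; params are three
    # Euler angles, the camera position, and a focal length in pixels.
    rx, ry, rz, tx, ty, tz, f = params
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    cam = (city_xyz - np.array([tx, ty, tz])) @ R.T   # world -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]               # perspective divide

def residuals(params, city_xyz, pixel_uv):
    # Mismatch between projected and observed city-light image positions.
    return (project(params, city_xyz) - pixel_uv).ravel()

# Iterative refinement, given matched 3D city locations and 2D image points:
# fit = least_squares(residuals, initial_guess, args=(city_xyz, pixel_uv))
```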
Near-IR and CP-OCT Imaging of Suspected Occlusal Caries Lesions
Simon, Jacob C.; Kang, Hobin; Staninec, Michal; Jang, Andrew T.; Chan, Kenneth H.; Darling, Cynthia L.; Lee, Robert C.; Fried, Daniel
2017-01-01
Introduction Radiographic methods have poor sensitivity for occlusal lesions, and by the time the lesions are radiolucent they have typically progressed deep into the dentin. New, more sensitive imaging methods are needed to detect occlusal lesions. In this study, cross-polarization optical coherence tomography (CP-OCT) and near-IR imaging were used to image questionable occlusal lesions (QOCs) that were not visible on radiographs but had been scheduled for restoration in 30 test subjects. Methods Near-IR reflectance and transillumination probes incorporating a high-definition InGaAs camera and near-IR broadband light sources were used to acquire images of the lesions before restoration. The reflectance probe utilized cross-polarization and operated at wavelengths from 1500–1700 nm, where there is an increase in water absorption for higher contrast. The transillumination probe was operated at 1300 nm, where the transparency of enamel is highest. Tomographic images (6×6×7 mm³) of the lesions were acquired using a high-speed swept-source CP-OCT system operating at 1300 nm before and after removal of the suspected lesion. Results Near-IR reflectance imaging at 1500–1700 nm yielded significantly higher contrast (p<0.05) of the demineralization in the occlusal grooves compared with visible reflectance imaging. Stains in the occlusal grooves greatly reduced the lesion contrast in the visible range, yielding negative values. Only half of the 26 lesions analyzed showed the characteristic surface demineralization and increased reflectivity below the dentinal-enamel junction (DEJ) in 3D OCT images indicative of penetration of the lesion into the dentin. Conclusion This study demonstrates that near-IR imaging methods have great potential for improving the early diagnosis of occlusal lesions. PMID:28339115
NASA Astrophysics Data System (ADS)
Mens, Alain; Alozy, Eric; Aubert, Damien; Benier, Jacky; Bourgade, Jean-Luc; Boutin, Jean-Yves; Brunel, Patrick; Charles, Gilbert; Chollet, Clement; Desbat, Laurent; Gontier, Dominique; Jacquet, Henri-Patrick; Jasmin, Serge; Le Breton, Jean-Pierre; Marchet, Bruno; Masclet-Gobin, Isabelle; Mercier, Patrick; Millier, Philippe; Missault, Carole; Negre, Jean-Paul; Paul, Serge; Rosol, Rodolphe; Sommerlinck, Thierry; Veaux, Jacqueline; Veron, Laurent; Vincent de Araujo, Manuel; Jaanimagi, Paul; Pien, Greg
2003-07-01
This paper gives an overview of work undertaken at CEA/DIF in high-speed cinematography, optoelectronic imaging, and ultrafast photonics for the needs of the CEA/DAM experimental programs. We have developed a new multichannel velocimeter and a new probe for shock breakout timing measurements in detonics experiments; a brief description is given and their main performance figures are recalled. We have implemented three new optoelectronic imaging systems for observing dynamic scenes in the 50-100 keV and 4 MeV ranges. These systems are described, and their main specifications and performances are given. We then describe our contribution to the ICF program: after recalling the specifications of the LIL plasma diagnostics, we describe the features and performances of visible streak tubes, X-ray streak tubes, visible and X-ray framing cameras, and the associated systems developed to match these specifications. Finally, we introduce the subject of component and system vulnerability in the LMJ target area, the principles identified to mitigate this problem, and the first results of studies (image relay, response of streak tube phosphors, MCP image intensifiers, and CCDs to fusion neutrons) related to this subject. Results obtained so far are presented.
Sub-micrometer resolution proximity X-ray microscope with digital image registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chkhalo, N. I.; Salashchenko, N. N.; Sherbakov, A. V., E-mail: SherbakovAV@ipm.sci-nnov.ru
A compact laboratory proximity soft X-ray microscope providing submicrometer spatial resolution and digital image registration is described. The microscope consists of a laser-plasma soft X-ray radiation source, a Schwarzschild objective to illuminate the test sample, and a two-coordinate detector for image registration. Radiation, which passes through the sample under study, generates an absorption image on the front surface of the detector. Optical ceramic YAG:Ce was used to convert the X-rays into visible light. An image was transferred from the scintillator to a charge-coupled device camera with a Mitutoyo Plan Apo series lens. The detector's design allows the use of lenses with numerical apertures of NA = 0.14, 0.28, and 0.55 without changing the dimensions and arrangement of the elements of the device. This design allows one to change the magnification, spatial resolution, and field of view of the X-ray microscope. A spatial resolution better than 0.7 μm and an energy conversion efficiency of the X-ray radiation with a wavelength of 13.5 nm into visible light collected by the detector of 7.2% were achieved with the largest aperture lens.
The Propeller Belts in Saturn's A Ring
2017-01-30
This image from NASA's Cassini mission shows a region in Saturn's A ring. The level of detail is twice as high as this part of the rings has ever been seen before. The view contains many small, bright blemishes due to cosmic rays and charged-particle radiation near the planet. The view shows a section of the A ring known to researchers for hosting belts of propellers -- bright, narrow, propeller-shaped disturbances in the ring produced by the gravity of unseen embedded moonlets. Several small propellers are visible in this view. These are on the order of 10 times smaller than the large, bright propellers whose orbits scientists have routinely tracked (and which are given nicknames for famous aviators). This image is a lightly processed version, with minimal enhancement, preserving all original details present in the image. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 18, 2016. The view was obtained at a distance of approximately 33,000 miles (54,000 kilometers) from the rings and looks toward the unilluminated side of the rings. Image scale is about a quarter-mile (330 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21059
Near-infrared imaging of developmental defects in dental enamel.
Hirasuna, Krista; Fried, Daniel; Darling, Cynthia L
2008-01-01
Polarization-sensitive optical coherence tomography (PS-OCT) and near-infrared (NIR) imaging are promising new technologies under development for monitoring early carious lesions. Fluorosis is a growing problem in the United States, and the more prevalent mild fluorosis can be visually mistaken for early enamel demineralization. Unfortunately, there is little quantitative information available regarding the differences in optical properties of sound enamel, enamel developmental defects, and caries. Thirty extracted human teeth with various degrees of suspected fluorosis were imaged using PS-OCT and NIR. An InGaAs camera and a NIR diode laser were used to measure the optical attenuation through transverse tooth sections (approximately 200 µm). A digital microradiography system was used to quantify the enamel defect severity by measurement of the relative mineral loss for comparison with optical scattering measurements. Developmental defects were clearly visible in the polarization-resolved OCT images, demonstrating that PS-OCT can be used to nondestructively measure the depth and possible severity of the defects. Enamel defects on whole teeth that could be imaged with high contrast with visible light were transparent in the NIR. This study suggests that PS-OCT and NIR methods may potentially be used as tools to assess the severity and extent of enamel defects.
Early Results from the Odyssey THEMIS Investigation
NASA Technical Reports Server (NTRS)
Christensen, Philip R.; Bandfield, Joshua L.; Bell, James F., III; Hamilton, Victoria E.; Ivanov, Anton; Jakosky, Bruce M.; Kieffer, Hugh H.; Lane, Melissa D.; Malin, Michael C.; McConnochie, Timothy
2003-01-01
The Thermal Emission Imaging System (THEMIS) began studying the surface and atmosphere of Mars in February 2002, using thermal infrared (IR) multi-spectral imaging between 6.5 and 15 μm, and visible/near-IR images from 450 to 850 nm. The infrared observations continue a long series of spacecraft observations of Mars, including the Mariner 6/7 Infrared Spectrometer, the Mariner 9 Infrared Interferometer Spectrometer (IRIS), the Viking Infrared Thermal Mapper (IRTM) investigations, the Phobos Termoskan, and the Mars Global Surveyor Thermal Emission Spectrometer (MGS TES). The THEMIS investigation's specific objectives are to: (1) determine the mineralogy of localized deposits associated with hydrothermal or sub-aqueous environments, and to identify future landing sites likely to represent these environments; (2) search for thermal anomalies associated with active sub-surface hydrothermal systems; (3) study small-scale geologic processes and landing site characteristics using morphologic and thermophysical properties; (4) investigate polar cap processes at all seasons; and (5) provide a high spatial resolution link to the global hyperspectral mineral mapping from the TES investigation. THEMIS provides substantially higher spatial resolution IR multi-spectral images to complement TES hyperspectral (143-band) global mapping, and regional visible imaging at scales intermediate between the Viking and MGS cameras.
MISR Scans the Texas-Oklahoma Border
NASA Technical Reports Server (NTRS)
2000-01-01
These MISR images of Oklahoma and north Texas were acquired on March 12, 2000 during Terra orbit 1243. The three images on the left, from top to bottom, are from the 70-degree forward viewing camera, the vertical-viewing (nadir) camera, and the 70-degree aftward viewing camera. The higher brightness, bluer tinge, and reduced contrast of the oblique views result primarily from scattering of sunlight in the Earth's atmosphere, though some color and brightness variations are also due to differences in surface reflection at the different angles. The longer slant path through the atmosphere at the oblique angles also accentuates the appearance of thin, high-altitude cirrus clouds. On the right, two areas from the nadir camera image are shown in more detail, along with notations highlighting major geographic features. The south bank of the Red River marks the boundary between Texas and Oklahoma. Traversing brush-covered and grassy plains, rolling hills, and prairies, the Red River and the Canadian River are important resources for farming, ranching, public drinking water, hydroelectric power, and recreation. Both originate in New Mexico and flow eastward, their waters eventually discharging into the Mississippi River. A smoke plume to the north of the Ouachita Mountains and east of Lake Eufaula is visible in the detailed nadir imagery. The plume is also very obvious at the 70-degree forward view angle, to the right of center and about one-fourth of the way down from the top of the image. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
First THEMIS Infrared and Visible Images of Mars
NASA Technical Reports Server (NTRS)
2001-01-01
This picture shows both a visible and a thermal infrared image taken by the thermal emission imaging system on NASA's 2001 Mars Odyssey spacecraft on November 2, 2001. The images were taken as part of the ongoing calibration and testing of the camera system as the spacecraft orbited Mars on its 13th revolution of the planet.
The visible wavelength image, shown on the right in black and white, was obtained using one of the instrument's five visible filters. The spacecraft was approximately 22,000 kilometers (about 13,600 miles) above Mars looking down toward the south pole when this image was acquired. It is late spring in the martian southern hemisphere. The thermal infrared image, center, shows the temperature of the surface in color. The circular feature seen in blue is the extremely cold martian south polar carbon dioxide ice cap. The instrument has measured a temperature of minus 120 degrees Celsius (minus 184 degrees Fahrenheit) on the south polar ice cap. The polar cap is more than 900 kilometers (540 miles) in diameter at this time. The visible image shows additional details along the edge of the ice cap, as well as atmospheric hazes near the cap. The view of the surface appears hazy due to dust that still remains in the martian atmosphere from the massive martian dust storms that have occurred over the past several months. The infrared image covers a length of over 6,500 kilometers (3,900 miles), spanning the planet from limb to limb, with a resolution of approximately 5.5 kilometers per picture element, or pixel (3.4 miles per pixel), at the point directly beneath the spacecraft. The visible image has a resolution of approximately 1 kilometer per pixel (0.6 miles per pixel) and covers an area roughly the size of the states of Arizona and New Mexico combined. NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The thermal-emission imaging system was developed at Arizona State University, Tempe, with Raytheon Santa Barbara Remote Sensing, Santa Barbara, Calif. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2006-01-01
At least three different kinds of rocks await scientific analysis at the place where NASA's Mars Exploration Rover Spirit will likely spend several months of Martian winter. They are visible in this picture, which the panoramic camera on Spirit acquired during the rover's 809th sol, or Martian day, of exploring Mars (April 12, 2006). Paper-thin layers of light-toned, jagged-edged rocks protrude horizontally from beneath small sand drifts; a light gray rock with smooth, rounded edges sits atop the sand drifts; and several dark gray to black, angular rocks with vesicles (small holes) typical of hardened lava lie scattered across the sand. This view is an approximately true-color rendering that combines images taken through the panoramic camera's 753-nanometer, 535-nanometer, and 432-nanometer filters.
1986-01-14
Range: 12.9 million kilometers (8.0 million miles) P-29468C This false color Voyager photograph of Uranus shows a discrete cloud seen as a bright streak near the planet's limb. The cloud visible here is the most prominent feature seen in a series of Voyager images designed to track atmospheric motions. The occasional donut-shaped features, including one at the bottom, are shadows cast by dust on the camera optics. The picture is a highly processed composite of three images. The processing necessary to bring out the faint features on the planet also brings out these camera blemishes. The three separate images used were shot through violet, blue, and orange filters. Each color image showed the cloud to a different degree; because they were not exposed at the same time, the images were processed to provide a good spatial match. In a true color image, the cloud would be barely discernible; the false color helps to bring out additional details. The different colors imply variations in vertical structure, but as of yet it is not possible to be specific about such differences. One possibility is that the uranian atmosphere may contain smog-like constituents, in which case some color differences may represent differences in how these molecules are distributed.
NASA Astrophysics Data System (ADS)
Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pål
2014-02-01
One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap in the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. One is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other synchronization challenge is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system with a particular focus on the camera settings. One of the main challenges in building such a system is that there is no common area of the pitch visible to all the cameras that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided us with acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams. However, there still existed some disparity between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
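A minimal sketch of that pilot-camera scheme, with a hypothetical Camera interface standing in for whatever capture API actually drives the array:

```python
# One camera meters the scene in auto-exposure mode, and its settings are
# pushed to the rest of the array between trigger pulses.  The Camera
# interface below is hypothetical, not a real driver API.
class Camera:
    def metered_exposure(self):
        """Return (shutter_us, gain_db) chosen by auto-exposure (pilot only)."""
        ...

    def set_exposure(self, shutter_us, gain_db):
        """Apply fixed exposure settings before the next trigger pulse."""
        ...

def sync_exposures(pilot, followers):
    # Distribute the pilot's metered values so every camera in the array
    # exposes identically for the next frame.
    shutter_us, gain_db = pilot.metered_exposure()
    for cam in followers:
        cam.set_exposure(shutter_us, gain_db)
```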
Confocal non-line-of-sight imaging based on the light-cone transform.
O'Toole, Matthew; Lindell, David B; Wetzstein, Gordon
2018-03-15
How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
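After the light-cone change of variables makes the confocal measurement model shift-invariant, the core solve reduces to a 3D deconvolution. A minimal frequency-domain Wiener-filter sketch of that solve (it omits the paper's resampling and attenuation-correction steps; the psf and snr arguments are assumed inputs):

```python
import numpy as np

def wiener_deconv3d(transients, psf, snr=100.0):
    # Frequency-domain Wiener filter: the shift-invariant solve once the
    # light-cone change of variables has been applied.  psf is the (assumed
    # known) kernel of the transformed model; snr is a tuning constant.
    H = np.fft.fftn(psf, s=transients.shape)
    G = np.fft.fftn(transients)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifftn(W * G))
```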
Passive radiation detection using optically active CMOS sensors
NASA Astrophysics Data System (ADS)
Dosiek, Luke; Schalk, Patrick D.
2013-05-01
Recently, there have been a number of small-scale and hobbyist successes in employing commodity CMOS-based camera sensors for radiation detection. For example, several smartphone applications initially developed for use in areas near the Fukushima nuclear disaster are capable of detecting radiation using a cell phone camera, provided opaque tape is placed over the lens. In all current useful implementations, it is required that the sensor not be exposed to visible light. We seek to build a system that does not have this restriction. While building such a system would require sophisticated signal processing, it would nevertheless provide great benefits. In addition to fulfilling their primary function of image capture, cameras would also be able to detect unknown radiation sources even when the danger is considered to be low or non-existent. By experimentally profiling the image artifacts generated by gamma ray and β particle impacts, algorithms are developed to identify the unique features of radiation exposure, while discarding optical interaction and thermal noise effects. Preliminary results focus on achieving this goal in a laboratory setting, without regard to integration time or computational complexity. However, future work will seek to address these additional issues.
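One plausible way to separate such impacts from normal image content, sketched under stated assumptions: particle hits appear as isolated bright outliers uncorrelated across frames, so a temporal median models the optical scene and robust thresholding flags candidates (the threshold and noise model are illustrative, not the authors' algorithm):

```python
import numpy as np

def radiation_hits(frames, k=8.0):
    """Flag pixels whose brightness jumps far above a temporal-median
    background model; k and the MAD noise estimate are illustrative."""
    stack = np.stack(frames).astype(np.float64)    # (t, h, w) burst of frames
    background = np.median(stack, axis=0)          # optical scene + fixed noise
    residual = stack - background
    sigma = 1.4826 * np.median(np.abs(residual), axis=0) + 1e-6  # robust noise
    return residual > k * sigma                    # per-frame mask of hits
```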
Calibration and verification of thermographic cameras for geometric measurements
NASA Astrophysics Data System (ADS)
Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.
2011-03-01
Infrared thermography is a technique with a growing degree of development and a widening range of applications. Quality assessment of measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire both temperature and geometric information, although calibration and verification procedures are usual only for the thermal data; black bodies are used for these purposes. The geometric information, however, is important for many fields such as architecture, civil engineering, and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability, and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five Delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was chosen as the target property because both thermographic and visible cameras are able to detect it. Two thermographic cameras from the Flir and Nec manufacturers, and one visible camera from Jai, are calibrated, verified, and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher for the visible camera, and the geometric comparison between the thermographic cameras shows slightly better results for the Nec camera.
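A minimal sketch of how detected lamp centers could feed a standard photogrammetric calibration routine (grid dimensions, spacing, and variable names are assumptions; the paper's actual processing chain may differ):

```python
import numpy as np
import cv2

# Lamp grid geometry (rows, columns, spacing) -- assumed values.
grid_rows, grid_cols, spacing_mm = 5, 7, 100.0
object_points = np.zeros((grid_rows * grid_cols, 3), np.float32)
object_points[:, :2] = (np.mgrid[0:grid_cols, 0:grid_rows]
                        .T.reshape(-1, 2) * spacing_mm)

# image_points: one (N, 1, 2) float32 array of detected lamp centroids per
# thermal image; image_size: (width, height) of the sensor.
# rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
#     [object_points] * len(image_points), image_points, image_size, None, None)
```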
Icebergs Adrift in the Amundsen Sea
2002-03-27
The Thwaites Ice Tongue is a large sheet of glacial ice extending from the West Antarctic mainland into the southern Amundsen Sea. A large crack in the Thwaites Tongue was discovered in imagery from Terra's Moderate Resolution Imaging SpectroRadiometer (MODIS). Subsequent widening of the crack led to the calving of a large iceberg. The development of this berg, designated B-22 by the National Ice Center, can be observed in these images from the Multi-angle Imaging SpectroRadiometer, also aboard Terra. The two views were acquired by MISR's nadir (vertical-viewing) camera on March 10 and 24, 2002. The B-22 iceberg, located below and to the left of image center, measures approximately 82 kilometers long x 62 kilometers wide. Comparison of the two images shows the berg to have drifted away from the ice shelf edge. The breakup of ice near the shelf edge, in the area surrounding B-22, is also visible in the later image. These natural-color images were acquired during Terra orbits 11843 and 12047, respectively. At the right-hand edge is Pine Island Bay, where the calving of another large iceberg (B-21) occurred in November 2001. B-21 subsequently split into two smaller bergs, both of which are visible to the right of B-22. http://photojournal.jpl.nasa.gov/catalog/PIA03700