Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have tried to solve this issue by feeding both the visible light and the FIR camera images into the CNN. This, however, takes longer to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects the more appropriate of the two pedestrian candidate images from the visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors, using visible light and FIR cameras. The results showed that the proposed method performs better than previously reported methods.
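The adaptive selection step described in this abstract can be illustrated with a toy fuzzy inference system. The sketch below is an assumption-laden illustration, not the authors' FIS: the input features (global contrast, edge strength), the triangular membership breakpoints, and the two rules are all hypothetical choices made only to show the select-then-verify structure.

```python
# Toy sketch of FIS-based candidate selection (illustrative only; not the paper's exact system).
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def candidate_score(gray):
    # Hypothetical image-quality inputs: global contrast and mean horizontal edge strength.
    contrast = float(gray.std()) / 255.0
    edges = float(np.abs(np.diff(gray.astype(float), axis=1)).mean()) / 255.0
    c_hi, c_lo = tri(contrast, 0.10, 0.30, 0.60), tri(contrast, -0.10, 0.0, 0.15)
    e_hi = tri(edges, 0.02, 0.10, 0.30)
    good, poor = min(c_hi, e_hi), c_lo          # two toy rules
    # Defuzzify over singleton outputs (good -> 1, poor -> 0).
    return (good * 1.0 + poor * 0.0) / (good + poor + 1e-9)

def select_candidate(vis_roi, fir_roi):
    # Keep whichever candidate region the FIS scores higher; a CNN would then verify it.
    return vis_roi if candidate_score(vis_roi) >= candidate_score(fir_roi) else fir_roi
```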
Characterization of a thinned back illuminated MIMOSA V sensor as a visible light camera
NASA Astrophysics Data System (ADS)
Bulgheroni, Antonio; Bianda, Michele; Caccia, Massimo; Cappellini, Chiara; Mozzanica, Aldo; Ramelli, Renzo; Risigo, Fabio
2006-09-01
This paper reports the measurements that have been performed both in the Silicon Detector Laboratory at the University of Insubria (Como, Italy) and at the Istituto Ricerche Solari Locarno (IRSOL) to characterize a CMOS pixel particle detector as a visible light camera. The CMOS sensor has been studied in terms of quantum efficiency in the visible spectrum, image blooming, and reset inefficiency under saturation conditions. The main goal of these measurements is to prove that this kind of particle detector can also be used as an ultra-fast, 100% fill factor visible light camera in solar physics experiments.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
A visible synchrotron light monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile under various machine conditions. Thanks to excellent alignment, the SLM beamline saw first visible light while the beam was circulating the ring on its first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single bunch purity.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators have limitations in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras are still expensive, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in indoor environments, or has used video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show that the method achieves high-accuracy human detection in a variety of environments and exhibits excellent performance compared to existing methods.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of photon counting camera for fast and low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (a photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons, the improvement coming from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or photomultiplier tubes, alternatively) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung
2018-05-24
Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO to extract trained features from an input image and predict a marker's location using the visible light camera sensor on the drone. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
The use of near-infrared photography to image fired bullets and cartridge cases.
Stein, Darrell; Yu, Jorn Chi Chung
2013-09-01
An imaging technique that is capable of reducing glare, reflection, and shadows can greatly assist the process of toolmark comparison. In this work, a camera with near-infrared (near-IR) photographic capabilities was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. Under visual comparison, near-IR images and visible light images were comparable. The use of near-IR photography did not reveal more details and could not effectively eliminate the reflections and glare associated with visible light photography. Near-IR photography showed little advantage over visible light (regular) photography in the manual examination of fired evidence. © 2013 American Academy of Forensic Sciences.
ERIC Educational Resources Information Center
Fisher, Diane K.; Novati, Alexander
2009-01-01
On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…
Multi-spectral imaging with infrared sensitive organic light emitting diode
NASA Astrophysics Data System (ADS)
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-08-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive, epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive in the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images, which are then recorded by a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika
2015-09-01
In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using a smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.
2005-10-01
As thermal imaging technology matures and ownership costs decrease, there is a trend to equip a greater proportion of airborne surveillance vehicles used by security and defence forces with both visible band and thermal infrared cameras. These cameras are used for tracking vehicles on the ground, to aid in the pursuit of villains in vehicles and on foot, while also assisting in the direction and co-ordination of emergency service vehicles as the occasion arises. These functions rely on unambiguous identification of police and other emergency service vehicles. In the visible band this is achieved by dark markings on high-contrast (light) backgrounds on the roofs of vehicles. When there is no ambient lighting, for example at night, thermal imaging is used to track both vehicles and people. In the thermal IR, the visible markings are not obvious: at the wavelengths thermal imagers operate, either 3-5 microns or 8-12 microns, the dark and light coloured materials have similarly low reflectivity. To maximise the usefulness of IR airborne surveillance, a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared is needed. In this paper we discuss the design, application and operation of some vehicle and personnel marking materials and show airborne IR and visible imagery of the materials in use.
Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.
Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung
2018-03-19
We demonstrate a non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary-metal-oxide-semiconductor (CMOS) camera. Experimental bit-error rate (BER) measurements under different camera ISO values and different transmission distances are evaluated. Here, we also experimentally reveal that the rolling shutter effect- (RSE) based VLC system cannot work at long distance transmission, and the under-sampled modulation- (USM) based VLC system is a good choice.
Robust Behavior Recognition in Intelligent Surveillance Environments.
Batchuluun, Ganbayar; Kim, Yeong Gon; Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2016-06-30
Intelligent surveillance systems have been studied by many researchers. These systems should operate in both daytime and nighttime, but objects are invisible in images captured by visible light cameras during the night. Therefore, near-infrared (NIR) cameras and thermal cameras (based on medium-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) light) have been considered as alternatives for nighttime use. Because the system must operate during both daytime and nighttime, and because NIR cameras require an additional NIR illuminator that must illuminate a wide area over a great distance, a dual system of visible light and thermal cameras is used in our research, and we propose a new behavior recognition method for intelligent surveillance environments. Twelve datasets were compiled by collecting data in various environments, and they were used to obtain experimental results. The recognition accuracy of our method was found to be 97.6%, thereby confirming the ability of our method to outperform previous methods.
Perez-Mendez, Victor
1997-01-01
A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a scintillator crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer, and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer, and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.
CubeSat Nighttime Earth Observations
NASA Astrophysics Data System (ADS)
Pack, D. W.; Hardy, B. S.; Longcore, T.
2017-12-01
Satellite monitoring of visible emissions at night has been established as a useful capability for environmental monitoring and mapping the global human footprint. Pioneering work using Defense Meteorological Satellite Program (DMSP) sensors has been followed by new work using the more capable Visible Infrared Imaging Radiometer Suite (VIIRS). Beginning in 2014, we have been investigating the ability of small visible light cameras on CubeSats to contribute to nighttime Earth science studies via point-and-stare imaging. This paper summarizes our recent research using a common suite of simple visible cameras on several AeroCube satellites to carry out nighttime observations of urban areas and natural gas flares, nighttime weather (including lightning), and fishing fleet lights. Example results include urban image examples, the utility of color imagery, urban lighting change detection, and multi-frame sequences imaging nighttime weather and large ocean areas with extensive fishing vessel lights. Our results show the potential for CubeSat sensors to improve monitoring of urban growth, light pollution, energy usage, the urban-wildland interface, the improvement of electrical power grids in developing countries, light-induced fisheries, and oil industry flare activity. In addition to orbital results, the nighttime imaging capabilities of new CubeSat sensors scheduled for launch in October 2017 are discussed.
[Evaluation of Iris Morphology Viewed through Stromal Edematous Corneas by Infrared Camera].
Kobayashi, Masaaki; Morishige, Naoyuki; Morita, Yukiko; Yamada, Naoyuki; Kobayashi, Motomi; Sonoda, Koh-Hei
2016-02-01
We previously reported that an infrared camera enables observation of iris morphology through edematous corneas in Peters' anomaly. The purpose of this study was to observe iris morphology through edematous corneas in bullous keratopathy or failed grafts with an infrared camera. Eleven subjects with bullous keratopathy or failed grafts (6 men and 5 women; mean age ± SD, 72.7 ± 13.0 years) were enrolled in this study. The iris morphology was observed using the visible light mode and the near-infrared light mode of an infrared camera (MeibomPen). The detectability of pupil shape, iris pattern, and presence of iridectomy was evaluated. Infrared mode observation enabled us to detect the pupil shape in 11 of 11 cases, the iris pattern in 3 of 11 cases, and the presence of iridectomy in 9 of 11 cases, whereas visible light mode observation could not detect any iris morphological changes. Applying infrared optics was valuable for observing iris morphology through stromal edematous corneas.
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multiplexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394-fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering the wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 µm, with the dispersed light being imaged in each channel by an f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung
2018-05-10
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Accordingly, accurate iris recognition is now much needed in unconstrained scenarios. Such environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) illumination, iris recognition in visible light environments makes iris segmentation challenging because of visible light noise. Deep learning with convolutional neural networks (CNN) has brought considerable breakthroughs in various applications. To address iris segmentation in challenging situations with visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets from visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.
NASA Astrophysics Data System (ADS)
Do, Trong Hop; Yoo, Myungsik
2018-01-01
This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
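The rolling-shutter compensation described above can be sketched in a few lines: each image row is exposed at a slightly later time, so a vehicle moving during readout shifts every LED detection by the image-plane velocity times the row delay. The sketch below is a minimal illustration under assumed parameter values (row time, velocity), not the paper's algorithm.

```python
# Minimal rolling-shutter compensation sketch (assumed parameters, not the authors' values).
import numpy as np

def compensate_rolling_shutter(led_px, v_img_px_s, t_row_s, r_ref=0):
    """led_px: (N, 2) array of (u, v) LED detections, where v is the row index.
    v_img_px_s: estimated image-plane velocity (du/dt, dv/dt) in px/s.
    t_row_s: row readout time in seconds; r_ref: reference row to warp onto."""
    led_px = np.asarray(led_px, dtype=float)
    dt = (led_px[:, 1] - r_ref) * t_row_s              # per-detection readout delay
    return led_px - dt[:, None] * np.asarray(v_img_px_s, dtype=float)

# Example: 30 us per row, vehicle motion of 1500 px/s horizontally in the image.
corrected = compensate_rolling_shutter([[320, 100], [340, 700]], (1500.0, 0.0), 30e-6)
```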
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-27
Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition, as demonstrated by a comparison of recognition rates with conventional systems.
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
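One way the superposition described in this patent abstract could be realized, purely as a hedged illustration (the abstract does not specify an implementation), is alpha blending a pseudo-colored diagnostic image onto the co-registered visible-light frame:

```python
# Illustrative overlay of a diagnostic image on a visible-light image (assumed approach).
import cv2

def overlay(visible_bgr, diagnostic_gray, alpha=0.4):
    # Pseudo-color the 8-bit diagnostic channel so regions of significance stand out.
    heat = cv2.applyColorMap(diagnostic_gray, cv2.COLORMAP_JET)
    # Blend onto the visible image; alpha controls the prominence of the diagnostic layer.
    return cv2.addWeighted(heat, alpha, visible_bgr, 1.0 - alpha, 0.0)
```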
Fast visible imaging of turbulent plasma in TORPEX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iraji, D.; Diallo, A.; Fasoli, A.
2008-10-15
Fast framing cameras constitute an important recent diagnostic development aimed at monitoring light emission from magnetically confined plasmas, and are now commonly used to study turbulence in plasmas. In the TORPEX toroidal device [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], low frequency electrostatic fluctuations associated with drift-interchange waves are routinely measured by means of extensive sets of Langmuir probes. A Photron Ultima APX-RS fast framing camera has recently been acquired to complement the Langmuir probe measurements, which allows comparing the statistical and spectral properties of visible light and electrostatic fluctuations. A direct imaging system has been developed, which allows viewing the light emitted from microwave-produced plasmas tangentially and perpendicularly to the toroidal direction. The comparison of the probability density function, power spectral density, and autoconditional average of the camera data to those obtained using a multiple-head electrostatic probe covering the plasma cross section shows reasonable agreement in the case of the perpendicular view and in the plasma region where interchange modes dominate.
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
A high resolution IR/visible imaging system for the W7-X limiter
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.
2016-11-01
A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and observed surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m²) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure the equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.
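The between-shot calorimetry quoted above follows from a one-line energy balance: for an inertially cooled tile, the deposited energy is E = m·c_p·ΔT, where ΔT is the equilibrated bulk temperature rise. The numbers below are illustrative assumptions (a generic graphite tile mass and heat capacity), chosen only to show that the ~30 kJ/tile scale is plausible:

```python
# Back-of-envelope tile calorimetry, E = m * c_p * dT (assumed, illustrative values).
def tile_energy_kj(mass_kg=1.5, cp_j_per_kg_k=710.0, delta_t_k=28.0):
    return mass_kg * cp_j_per_kg_k * delta_t_k / 1e3

print(f"deposited energy = {tile_energy_kj():.0f} kJ")  # ~30 kJ, the scale quoted above
```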
NASA Astrophysics Data System (ADS)
Oldenbürger, S.; Brandt, C.; Brochard, F.; Lemoine, N.; Bonhomme, G.
2010-06-01
Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
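The light-versus-density comparison described above amounts to correlating two fluctuation time series. A minimal sketch, with placeholder signal names (a camera pixel's intensity trace and a Langmuir probe density signal), might look like this:

```python
# Normalized cross-correlation of light and density fluctuation time series (sketch).
import numpy as np

def normalized_xcorr(light, density):
    a = (light - light.mean()) / (light.std() * len(light))
    b = (density - density.mean()) / density.std()
    corr = np.correlate(a, b, mode="full")             # values in [-1, 1]
    lags = np.arange(-len(light) + 1, len(light))      # sample lag for each value
    return lags, corr

# A peak near zero lag with amplitude close to 1 would reflect the high
# light-density correlation reported for the Mirabelle plasma.
```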
Performance analysis and enhancement for visible light communication using CMOS sensors
NASA Astrophysics Data System (ADS)
Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Fang, Liangtao; Liu, Xiaowei; Chen, Yingcong
2018-03-01
Complementary metal-oxide-semiconductor (CMOS) sensors are widely used in mobile phones and cameras. Hence, it is attractive if these cameras can be used as the receivers of visible light communication (VLC). Using the rolling shutter mechanism can increase the data rate of VLC based on a CMOS camera, and different techniques have been proposed to improve the demodulation of the rolling shutter mechanism. However, these techniques are too complex. In this work, we demonstrate and analyze, for the first time to our knowledge, the performance of a VLC link using a CMOS camera with different LED luminaires. Experimental evaluations comparing their bit-error-rate (BER) performances and demodulation are also performed. It can be summarized that simply changing to an LED luminaire with more uniform light output removes the blooming effect, which not only reduces the complexity of the demodulation but also enhances the communication quality. In addition, we propose and demonstrate the use of contrast limited adaptive histogram equalization to extend the transmission distance and mitigate the influence of background noise. The experimental results show that the BER can be decreased by an order of magnitude using the proposed method.
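The contrast limited adaptive histogram equalization (CLAHE) step proposed above is available in OpenCV, so the noise mitigation can be sketched directly; the clip limit, tile size, and Otsu thresholding below are assumptions for illustration, not the authors' settings:

```python
# CLAHE-based stripe enhancement before rolling-shutter demodulation (assumed parameters).
import cv2

def enhance_stripes(gray_frame):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray_frame)        # boost local stripe contrast on an 8-bit frame
    # Binarize for demodulation; Otsu picks the bright/dark stripe threshold automatically.
    _, bits = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return bits
```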
Phase Curves of Nix and Hydra from the New Horizons Imaging Cameras
NASA Astrophysics Data System (ADS)
Verbiscer, Anne J.; Porter, Simon B.; Buratti, Bonnie J.; Weaver, Harold A.; Spencer, John R.; Showalter, Mark R.; Buie, Marc W.; Hofgartner, Jason D.; Hicks, Michael D.; Ennico-Smith, Kimberly; Olkin, Catherine B.; Stern, S. Alan; Young, Leslie A.; Cheng, Andrew; The New Horizons Team
2018-01-01
NASA’s New Horizons spacecraft’s voyage through the Pluto system centered on 2015 July 14 provided images of Pluto’s small satellites Nix and Hydra at viewing angles unattainable from Earth. Here, we present solar phase curves of the two largest of Pluto’s small moons, Nix and Hydra, observed by the New Horizons LOng Range Reconnaissance Imager and Multi-spectral Visible Imaging Camera, which reveal the scattering properties of their icy surfaces in visible light. Construction of these solar phase curves enables comparisons between the photometric properties of Pluto’s small moons and those of other icy satellites in the outer solar system. Nix and Hydra have higher visible albedos than those of other resonant Kuiper Belt objects and irregular satellites of the giant planets, but not as high as small satellites of Saturn interior to Titan. Both Nix and Hydra appear to scatter visible light preferentially in the forward direction, unlike most icy satellites in the outer solar system, which are typically backscattering.
Use of cameras for monitoring visibility impairment
NASA Astrophysics Data System (ADS)
Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie
2018-02-01
Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
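A contrast-type index of the kind favored above is simple to compute; the sketch below uses RMS contrast of a fixed scene region and averages it over a restricted daily window. The region choice and the averaging window are assumptions, but the key property from the paper carries through: unlike gradient operators, this index does not depend on pixel density.

```python
# RMS-contrast visibility index with temporal averaging (illustrative sketch).
import numpy as np

def rms_contrast(gray_roi):
    g = gray_roi.astype(float)
    return g.std() / (g.mean() + 1e-9)      # resolution-insensitive, unlike gradient indexes

def daily_index(rois):
    # rois: the same scene region sampled at several times within the allowed window;
    # averaging suppresses cloud and lighting variability.
    return float(np.mean([rms_contrast(r) for r in rois]))
```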
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion approaches require image registration before fusion because they use two separate cameras, and the performance of registration techniques still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam splitter prism, the coaxial light entering a single lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
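Because the beam splitter yields co-registered frames, the fusion stage needs no registration; as a stand-in for the pipeline simulated above, the sketch below fuses by weighted averaging and scores the result with histogram entropy, one common quality evaluation index. The weight and the choice of entropy as the index are assumptions, not the paper's specification:

```python
# Toy fusion of co-registered IR/visible frames plus an entropy quality index (sketch).
import numpy as np

def fuse(ir, vis, w=0.5):
    # Pixel-wise weighted average; valid here because the prism co-registers the frames.
    return (w * ir.astype(float) + (1.0 - w) * vis.astype(float)).astype(np.uint8)

def entropy(img):
    # Shannon entropy of the grey-level histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```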
Blood pulsation measurement using cameras operating in visible light: limitations.
Koprowski, Robert
2016-10-03
The paper presents an automatic method for the analysis and processing of images from a camera operating in visible light. The analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing consists of three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, fast Fourier transform (FFT) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate has the following advantages: (1) it allows for non-contact and non-invasive measurement; (2) it can be carried out using almost any camera, including webcams; (3) it can track the object in the scene, which allows for measurement of the heart rate when the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
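Stage (3) of the method above reduces to locating the dominant spectral peak of the skin-brightness signal. A minimal sketch, assuming a 30 fps camera and a typical physiological band of 0.7-4 Hz (42-240 beats per minute):

```python
# Pulse-rate estimation from the mean brightness of segmented skin pixels (sketch).
import numpy as np

def pulse_bpm(mean_brightness, fps=30.0, band=(0.7, 4.0)):
    x = mean_brightness - np.mean(mean_brightness)      # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])      # keep the physiological band
    return 60.0 * freqs[mask][np.argmax(spectrum[mask])]
```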
NASA Technical Reports Server (NTRS)
Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)
1998-01-01
A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses an 800 nm long-pass filter, which under overcast conditions blocks enough background light that the hydrogen flame appears brighter than the background; the second CCD camera uses an 1100 nm long-pass filter, which blocks the solar background under full-sunshine conditions so that the hydrogen flame appears brighter than the solar background. Two electronic viewfinders convert the signals from the cameras into visible images. The operator can select the appropriate filtered camera depending on the current light conditions. In addition, a narrow band-pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if it detects a flame, providing additional flame detection so the operator does not overlook a small flame.
Making 3D movies of Northern Lights
NASA Astrophysics Data System (ADS)
Hivon, Eric; Mouette, Jean; Legault, Thierry
2017-10-01
We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d
Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging
NASA Astrophysics Data System (ADS)
Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.
2010-04-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third "alignment" camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.
Kopp, O; Markert, S; Tornow, R P
2002-01-01
To develop and test a procedure to measure and compare the light sensitivity, linearity, and step response of electronic cameras. The pixel value (PV) of digitized images was measured as a function of light intensity (I). The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired while a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the procedure of integration and readout.
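A minimal numpy sketch of the evaluation procedure described here: sensitivity from the slope of the PV(I) function and linearity from its correlation coefficient. The sample data are invented for illustration.

```python
import numpy as np

I  = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])           # relative intensity
PV = np.array([2.0, 52.0, 103.0, 148.0, 201.0, 250.0])  # mean pixel value

slope, offset = np.polyfit(I, PV, 1)   # sensitivity = slope of PV(I)
r = np.corrcoef(I, PV)[0, 1]           # linearity ~ correlation coefficient
print(f"sensitivity ~ {slope:.1f} PV per unit intensity, r = {r:.4f}")
```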
A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA MONTAGE
NASA Technical Reports Server (NTRS)
2002-01-01
This picture, taken in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2), represents a sweeping view of the 30 Doradus Nebula. But Hubble's infrared camera - the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) - has probed deeper into smaller regions of this nebula to unveil the stormy birth of massive stars. The montages of images in the upper left and upper right represent this deeper view. Each square in the montages is 15.5 light-years (19 arcseconds) across. The brilliant cluster R136, containing dozens of very massive stars, is at the center of this image. The infrared and visible-light views reveal several dust pillars that point toward R136, some with bright stars at their tips. One of them, at left in the visible-light image, resembles a fist with an extended index finger pointing directly at R136. The energetic radiation and high-speed material emitted by the massive stars in R136 are responsible for shaping the pillars and causing the heads of some of them to collapse, forming new stars. The infrared montage at upper left is enlarged in an accompanying image. Credits for NICMOS montages: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barbá (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)
Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera
NASA Astrophysics Data System (ADS)
Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu
2016-09-01
We perform an experiment of achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted to the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under the conditions of visible and infrared light. As a result, the effect of the removal of the chromatic aberration of the WFC system was successfully determined. Moreover, we fabricated a demonstration set assuming the use of a night vision camera in an automobile and showed the effect of the WFC system.
2001 Mars Odyssey Images Earth (Visible and Infrared)
NASA Technical Reports Server (NTRS)
2001-01-01
2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) acquired these images of the Earth using its visible and infrared cameras as it left the Earth. The visible image shows the thin crescent viewed from Odyssey's perspective. The infrared image was acquired at exactly the same time, but shows the entire Earth using the infrared's 'night-vision' capability. In visible light the instrument sees only reflected sunlight and therefore sees nothing on the night side of the planet. In infrared light the camera observes the light emitted by all regions of the Earth. The coldest ground temperatures seen correspond to the nighttime regions of Antarctica; the warmest temperatures occur in Australia. The low temperature in Antarctica is minus 50 degrees Celsius (minus 58 degrees Fahrenheit); the high temperature at night in Australia is 9 degrees Celsius (48.2 degrees Fahrenheit). These temperatures agree remarkably well with observed temperatures of minus 63 degrees Celsius at Vostok Station in Antarctica, and 10 degrees Celsius in Australia. The images were taken at a distance of 3,563,735 kilometers (more than 2 million miles) on April 19, 2001 as the Odyssey spacecraft left Earth.
Fluorescent image tracking velocimeter
Shaffer, Franklin D.
1994-01-01
A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.
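A short sketch of how trajectory, direction, and velocity could be recovered from the centroids of one particle in a multiply exposed FITV image; the magnification, pulse interval, and centroid values below are hypothetical.

```python
import numpy as np

def track(centroids_px, pulse_dt_s, m_per_px):
    """Velocity and heading between successive laser-pulse exposures."""
    p = np.asarray(centroids_px, dtype=float) * m_per_px   # pixels -> metres
    v = np.diff(p, axis=0) / pulse_dt_s                    # m/s per interval
    speed = np.linalg.norm(v, axis=1)
    heading_deg = np.degrees(np.arctan2(v[:, 1], v[:, 0]))
    return speed, heading_deg

speed, heading = track([(100, 100), (112, 105), (124, 110)],
                       pulse_dt_s=1e-3, m_per_px=1e-5)
print(speed, heading)   # ~0.13 m/s at ~22.6 degrees for both intervals
```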
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10^20 m^-3 and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
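Since the abstract mentions Python scripts that combine the dual-camera data into Balmer intensity ratios, a hedged sketch of that final step follows; the arrays and calibration factors are placeholders, not the experiment's data.

```python
import numpy as np

rng = np.random.default_rng(0)
rgb_frame  = rng.uniform(0.1, 1.0, (480, 640, 3))  # registered color frame
mono_frame = rng.uniform(0.1, 1.0, (480, 640))     # registered mono frame
cal_a = cal_b = cal_g = 1.0   # absolute-intensity calibration factors
eps = 1e-9                    # guard against division by zero in dark regions

d_alpha = cal_a * rgb_frame[..., 0]   # red channel   -> D_alpha, 656 nm
d_beta  = cal_b * rgb_frame[..., 2]   # blue channel  -> D_beta, 486 nm
d_gamma = cal_g * mono_frame          # 434 nm narrow-bandpass camera

ratio_ab = d_alpha / (d_beta + eps)   # per-pixel D_alpha / D_beta map
ratio_bg = d_beta / (d_gamma + eps)   # per-pixel D_beta / D_gamma map
print(ratio_ab.mean(), ratio_bg.mean())
```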
HUBBLE FINDS A BARE BLACK HOLE POURING OUT LIGHT
NASA Technical Reports Server (NTRS)
2002-01-01
NASA's Hubble Space Telescope has provided a never-before-seen view of a warped disk flooded with a torrent of ultraviolet light from hot gas trapped around a suspected massive black hole. [Right] This composite image of the core of the galaxy was constructed by combining a visible light image taken with Hubble's Wide Field Planetary Camera 2 (WFPC2), with a separate image taken in ultraviolet light with the Faint Object Camera (FOC). While the visible light image shows a dark dust disk, the ultraviolet image (color-coded blue) shows a bright feature along one side of the disk. Because Hubble sees ultraviolet light reflected from only one side of the disk, astronomers conclude the disk must be warped like the brim of a hat. The bright white spot at the image's center is light from the vicinity of the black hole which is illuminating the disk. [Left] A ground-based telescopic view of the core of the elliptical galaxy NGC 6251. The inset box shows Hubble Space Telescope's field of view. The galaxy is 300 million light-years away in the constellation Ursa Minor. Photo Credit: Philippe Crane (European Southern Observatory), and NASA
Smartphone Based Platform for Colorimetric Sensing of Dyes
NASA Astrophysics Data System (ADS)
Dutta, Sibasish; Nath, Pabitra
We demonstrate the working of a smartphone-based optical sensor for measuring the absorption band of coloured dyes. By integrating simple laboratory optical components with the camera unit of the smartphone, we have converted it into a visible spectrometer with a pixel resolution of 0.345 nm/pixel. Light from a broadband optical source is allowed to transmit through a specific dye solution. The transmitted light signal is captured by the camera of the smartphone. The present sensor is inexpensive, portable, and lightweight, making it a handy sensor suitable for on-field sensing applications.
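A tiny sketch of the pixel-to-wavelength conversion implied by the quoted 0.345 nm/pixel dispersion; the reference pixel/wavelength pair is an assumption that would come from calibrating against a known spectral line.

```python
import math

NM_PER_PIXEL = 0.345          # dispersion quoted in the abstract
REF_PIXEL = 512               # hypothetical calibration point:
REF_WAVELENGTH_NM = 546.1     # e.g., the mercury green line

def pixel_to_wavelength(pixel: int) -> float:
    return REF_WAVELENGTH_NM + NM_PER_PIXEL * (pixel - REF_PIXEL)

def absorbance(i_sample: float, i_reference: float) -> float:
    """Dye absorbance at one camera column, from sample vs. blank intensity."""
    return -math.log10(i_sample / i_reference)

print(pixel_to_wavelength(600))   # -> 576.46 nm
print(absorbance(0.4, 0.9))       # -> ~0.35
```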
Impact of New Camera Technologies on Discoveries in Cell Biology.
Stuurman, Nico; Vale, Ronald D
2016-08-01
New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.
Performance Analysis of Visible Light Communication Using CMOS Sensors.
Do, Trong-Hop; Yoo, Myungsik
2016-02-29
This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis.
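An illustrative end-to-end toy of rolling-shutter reception: each image row acts as one time sample of an on-off-keyed LED, so averaging rows and thresholding recovers the bits. The synthetic frame and threshold rule are assumptions; the paper's SINR analysis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
bits_tx = rng.integers(0, 2, 60)                  # transmitted OOK bits
rows_per_bit = 8                                  # shutter rows per symbol
signal = np.repeat(bits_tx, rows_per_bit).astype(float)
frame = signal[:, None] + 0.2 * rng.standard_normal((signal.size, 100))

row_mean = frame.mean(axis=1)                     # collapse columns -> time
threshold = row_mean.mean()                       # simple adaptive threshold
bits_rx = (row_mean.reshape(-1, rows_per_bit).mean(axis=1)
           > threshold).astype(int)
print("bit errors:", int(np.sum(bits_rx != bits_tx)))  # 0 at this noise level
```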
Development of Flight Slit-Jaw Optics for Chromospheric Lyman-Alpha SpectroPolarimeter
NASA Technical Reports Server (NTRS)
Kubo, Masahito; Suematsu, Yoshinori; Kano, Ryohei; Bando, Takamasa; Hara, Hirohisa; Narukage, Noriyuki; Katsukawa, Yukio; Ishikawa, Ryoko; Ishikawa, Shin-nosuke; Kobiki, Toshihiko;
2015-01-01
In the CLASP sounding rocket experiment, a mirror-finished slit is placed near the focal point of the telescope. Light reflected by the mirror surface surrounding the slit is re-imaged by the slit-jaw optical system to form a secondary image in the Lyman-alpha line. This image is used not only in real time during flight to select the rocket pointing direction, but also as scientific data showing the spatial structure of the Lyman-alpha emission-line intensity distribution in the solar chromosphere around the region observed by the spectropolarimeter. The slit-jaw optical system consists of a unit of two off-axis mirrors (a parabolic mirror and a folding mirror), a Lyman-alpha transmission filter, and a camera, forming an optical system with 1x magnification. The camera was supplied by the United States; all other fabrication and testing were carried out on the Japanese side. Because the slit-jaw optics are difficult to access within the structure and must be installed in a space with little clearance, the optical elements that influence optical performance and require fine adjustment are gathered into a single mirror unit. On the other hand, for alignment of the solar sensor at the US launch site, the holder containing the Lyman-alpha transmission filter must be removable independently of the mirror unit. To keep the structure simple, stray-light countermeasures are concentrated around the Lyman-alpha transmission filter. To overcome the difficulty of performing optical alignment at the Lyman-alpha wavelength, which is absorbed by the atmosphere, the following four steps were planned to reduce alignment time. 1: Measure in advance the refractive index of the Lyman-alpha transmission filter at the Lyman-alpha wavelength (121.567 nm), and prepare a visible-light filter having the same optical path length at a visible wavelength (630 nm). 2: Before mounting the mirror unit on the CLASP structure, place a dummy slit and camera at their prescribed positions in a jig frame and complete the internal alignment adjustment. 3: Attach the mirror unit and the visible-light filter to the CLASP structure, and adjust the position of the flight camera in visible light until it is in focus. 4: Replace the visible-light filter with the Lyman-alpha transmission filter and confirm, at the Lyman-alpha wavelength (under vacuum), that the required optical performance is achieved. Currently, steps up to 3 have been completed, and it was confirmed in visible light that the optical performance sufficiently satisfies the required values. In addition, by feeding sunlight through the CLASP telescope into the slit-jaw optical system, it was also confirmed that there is no vignetting in the field of view and that the stray-light rejection meets the requirements.
LIFTING THE VEIL OF DUST TO REVEAL THE SECRETS OF SPIRAL GALAXIES
NASA Technical Reports Server (NTRS)
2002-01-01
Astronomers have combined information from the NASA Hubble Space Telescope's visible- and infrared-light cameras to show the hearts of four spiral galaxies peppered with ancient populations of stars. The top row of pictures, taken by a ground-based telescope, represents complete views of each galaxy. The blue boxes outline the regions observed by the Hubble telescope. The bottom row represents composite pictures from Hubble's visible- and infrared-light cameras, the Wide Field and Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Astronomers combined views from both cameras to obtain the true ages of the stars surrounding each galaxy's bulge. The Hubble telescope's sharper resolution allows astronomers to study the intricate structure of a galaxy's core. The galaxies are ordered by the size of their bulges. NGC 5838, an 'S0' galaxy, is dominated by a large bulge and has no visible spiral arms; NGC 7537, an 'Sbc' galaxy, has a small bulge and loosely wound spiral arms. Astronomers think that the structure of NGC 7537 is very similar to our Milky Way. The galaxy images are composites made from WFPC2 images taken with blue (4445 Angstroms) and red (8269 Angstroms) filters, and NICMOS images taken in the infrared (16,000 Angstroms). They were taken in June, July, and August of 1997. Credits for the ground-based images: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for WFPC2 and NICMOS composites: NASA, ESA, and Reynier Peletier (University of Nottingham, United Kingdom)
Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras
NASA Astrophysics Data System (ADS)
Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi
1997-04-01
(Pb0.91La0.09)(Zr0.65Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. The shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. Applied as the diaphragm of a video camera, it could protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolving power. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10^3. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 microseconds. When the PLZT shutter was placed in front of the visible video-camera lens, the MTF was reduced by only 12 percent at a spatial frequency of 38 cycles/mm, which is the sensor resolution of the video camera. Moreover, we acquired visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was completely blocked in the 'OFF' state. From these tests, it has been found that the PLZT shutter is useful as a diaphragm for visible video cameras. The measured optical transmittance of the PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.
Sundaramoorthy, Sriramkumar; Badaracco, Adrian Garcia; Hirsch, Sophia M.; Park, Jun Hong; Davies, Tim; Dumont, Julien; Shirasu-Hiza, Mimi; Kummel, Andrew C.; Canman, Julie C.
2017-01-01
The combination of near infrared (NIR) and visible wavelengths in light microscopy for biological studies is increasingly common. For example, many fields of biology are developing the use of NIR for optogenetics, in which an NIR laser induces a change in gene expression and/or protein function. One major technical barrier in working with both NIR and visible light on an optical microscope is obtaining their precise coalignment at the imaging plane position. Photon upconverting particles (UCPs) can bridge this gap as they are excited by NIR light but emit in the visible range via an anti-Stokes luminescence mechanism. Here, two different UCPs have been identified, high-efficiency micro540-UCPs and lower efficiency nano545-UCPs, that respond to NIR light and emit visible light with high photostability even at very high NIR power densities (>25,000 Suns). Both of these UCPs can be rapidly and reversibly excited by visible and NIR light and emit light at visible wavelengths detectable with standard emission settings used for Green Fluorescent Protein (GFP), a commonly used genetically-encoded fluorophore. However, the high efficiency micro540-UCPs were suboptimal for NIR and visible light coalignment, due to their larger size and spatial broadening from particle-to-particle energy transfer consistent with a long lived excited state and saturated power dependence. In contrast, the lower efficiency nano-UCPs were superior for precise coalignment of the NIR beam with the visible light path (~2 µm versus ~8 µm beam broadening respectively) consistent with limited particle-to-particle energy transfer, superlinear power dependence for emission, and much smaller particle size. Furthermore, the nano-UCPs were superior to a traditional two-camera method for NIR and visible light path alignment in an in vivo Infrared-Laser-Evoked Gene Operator (IR-LEGO) optogenetics assay in the budding yeast S. cerevisiae. In summary, nano-UCPs are powerful new tools for coaligning NIR and visible light paths on a light microscope. PMID:28221018
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
Confocal retinal imaging using a digital light projector with a near infrared VCSEL source
NASA Astrophysics Data System (ADS)
Muller, Matthew S.; Elsner, Ann E.
2018-02-01
A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.
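A hedged sketch of the synchronization idea behind the DLO: the DMD presents illumination line k while the CMOS rolling shutter exposes a small band of rows, and an offset between the two selects confocal versus multiply scattered light. The timing granularity and names are illustrative, not the instrument's firmware.

```python
def rows_exposed(dmd_line: int, aperture_rows: int = 5, offset: int = 0):
    """Detector rows read in step with the DMD line currently displayed.

    offset = 0  -> rows centred on the illuminated line (confocal detection)
    offset != 0 -> rows displaced from the line (multiply scattered light)
    """
    centre = dmd_line + offset
    half = aperture_rows // 2
    return list(range(centre - half, centre + half + 1))

print(rows_exposed(100))             # [98, 99, 100, 101, 102] (confocal)
print(rows_exposed(100, offset=6))   # [104, ..., 108] (scattered-light mode)
```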
Orbital docking system centerline color television camera system test
NASA Technical Reports Server (NTRS)
Mongan, Philip T.
1993-01-01
A series of tests was run to verify that the design of the centerline color television camera (CTVC) system is adequate optically for the STS-71 Space Shuttle Orbiter docking mission with the Mir space station. In each test, a mockup of the Mir consisting of hatch, docking mechanism, and docking target was positioned above the Johnson Space Center's full fuselage trainer, which simulated the Orbiter with a mockup of the external airlock and docking adapter. Test subjects viewed the docking target through the CTVC under 30 different lighting conditions and evaluated target resolution, field of view, light levels, light placement, and methods of target alignment. Test results indicate that the proposed design will provide adequate visibility through the centerline camera for a successful docking, even with a reasonable number of light failures. It is recommended that the flight deck crew have individual switching capability for docking lights to provide maximum shadow management and that centerline lights be retained to deal with light failures and user preferences. Procedures for light management should be developed and target alignment aids should be selected during simulated docking runs.
Night vision imaging system design, integration and verification in spacecraft vacuum thermal test
NASA Astrophysics Data System (ADS)
Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing
2015-08-01
The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in its cruise configuration and to allow for early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Because the infrared cage and electric heaters do not emit visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate under the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so supplementary lighting cannot be used during the test. To improve the ability to closely monitor the spacecraft and to document test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified ICCD camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection technology is adopted for high-quality image recognition during the test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electric heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate in a vacuum thermal environment of 1.33×10^-3 Pa and 100 K shroud temperature in the space environment simulator, with its working temperature maintained at 5 °C during the two-day test. The night vision imaging system achieved a video resolving power of 60 lp/mm.
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
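An illustrative extraction of single-photon events from a synthetic frame: each intensified photon splash sits far above the read-noise floor, so a simple global threshold recovers the event coordinates. The noise level, gain, and threshold rule are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(2)
frame = rng.normal(100.0, 3.0, (256, 256))       # read-noise background
photons = rng.integers(5, 250, (20, 2))          # 20 photon event positions
for y, x in photons:
    frame[y, x] += 400.0                         # intensified photon splash

threshold = frame.mean() + 10.0 * frame.std()    # crude global threshold
events = np.argwhere(frame > threshold)          # (row, col) per event
print(len(events), "photon events detected")     # ~20
```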
Prasad, Ankush; Pospíšil, Pavel
2012-08-01
Solar radiation that reaches Earth's surface can have severe negative consequences for organisms. Both visible light and ultraviolet A (UVA) radiation are known to initiate the formation of reactive oxygen species (ROS) in human skin by photosensitization reactions (types I and II). In the present study, we investigated the role of visible light and UVA radiation in the generation of ROS on the dorsal and the palmar side of a hand. The ROS are known to oxidize biomolecules such as lipids, proteins, and nucleic acids to form electronically excited species, finally leading to ultraweak photon emission. We have employed a highly sensitive charge coupled device camera and a low-noise photomultiplier tube for detection of two-dimensional and one-dimensional ultraweak photon emission, respectively. Our experimental results show that oxidative stress is generated by the exposure of human skin to visible light and UVA radiation. The oxidative stress generated by UVA radiation was significantly higher than that generated by visible light. Two-dimensional photon imaging can serve as a potential tool for monitoring the oxidative stress in the human skin induced by various stress factors irrespective of its physical or chemical nature.
2009-11-03
Bright sunlight on Rhea shows off the cratered surface of Saturn's second largest moon in this image captured by NASA's Cassini orbiter. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 21, 2009.
Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung
2017-01-01
Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
Frangioni, John V
2013-06-25
A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparoscopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.
2015-04-13
… and receiver optimal lighting configuration should be determined and evaluated in dusk, twilight, and full-dark lunar illumination periods. Degraded conditions should consist of sunset to nautical twilight; these conditions provide poor illumination for visible cameras, but high illumination for IR ones. Night conditions …
Vacuum compatible miniature CCD camera head
Conder, Alan D.
2000-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina
Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng
2010-01-01
A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743
Gas Analysis Using Auroral Spectroscopy.
NASA Astrophysics Data System (ADS)
Alozie, M.; Thomas, G.; Medillin, M.
2017-12-01
As part of the Undergraduate Student Instrumentation Project at the University of Houston, an auroral spectroscope was designed and built. This visible light spectroscope was constructed out of carbon fiber, aluminum, and 3D printed parts. The spectroscope was designed to calculate the wavelengths of the spectral lines and analyze the emitted light spectrum of the gases. The spectroscope contains a primary parabolic 6" mirror and a smaller secondary 2.46" mirror. The light captured through these mirrors is guided to an optical train consisting of five lenses (1" in diameter and focal length), a slit, and a visible transmission grating. The light is then led to a Sony Alpha A6000 camera to take images of the spectral lines.
Enhancing swimming pool safety by the use of range-imaging cameras
NASA Astrophysics Data System (ADS)
Geerardyn, D.; Boulanger, S.; Kuijk, M.
2015-05-01
Drowning is the cause of death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization. Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing us to distinguish swimmers at the surface from drowning people underwater, while keeping a large field of view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera, and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of the reflections of a partially reflecting surface. The combination of a post-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve on the current commercial cameras. As a result, we can conclude that low-cost range imagers can increase swimming pool safety, by inserting a post-processing filter and using another light source.
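A rough sketch of the depth correction implied above: a pulsed ToF camera measures optical path length, and below the surface light travels slower by roughly the index of water, so the raw range overstates submersion depth; temporal averaging over frames suppresses ripple-induced jitter. Ray bending by refraction is ignored, and all inputs are synthetic assumptions.

```python
import numpy as np

N_WATER = 1.33   # approximate group index of water near 785 nm

def corrected_depth(raw_range_m: float, surface_range_m: float) -> float:
    """Convert a raw ToF range through water into geometric depth."""
    underwater_optical = max(raw_range_m - surface_range_m, 0.0)
    return surface_range_m + underwater_optical / N_WATER

# Many frames of a rippled reading, then temporal averaging:
frames = np.random.default_rng(3).normal(3.50, 0.05, 200)
print(corrected_depth(frames.mean(), surface_range_m=2.0))  # ~3.13 m
```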
Increasing of visibility on the pedestrian crossing by the additional lighting systems
NASA Astrophysics Data System (ADS)
Baleja, Richard; Bos, Petr; Novak, Tomas; Sokansky, Karel; Hanusek, Tomas
2017-09-01
Pedestrian crossings are critical places for road accidents between pedestrians and motor vehicles. For this reason, it is very important to pay increased attention when pedestrian crossings are designed, and it is necessary to take into account all factors that may contribute to higher safety. Additional lighting systems for pedestrian crossings are one of them, and such lighting systems must fulfil the requirements for higher visibility from the point of view of car drivers approaching from both directions. This paper describes the criteria for a suitable additional lighting system on pedestrian crossings. Generally, this means the vertical illuminance on the pedestrian crossing from the driver's view, the horizontal illuminance on the crossing, and the horizontal illuminance both in front of and behind the crossing on the road, together with their acceptable ratios. The article also describes the choice of the colour of the light (correlated colour temperature) and its influence on visibility. The article further presents case designs of additional lighting systems for pedestrian crossings, along with measurements of realized additional lighting systems made with luxmeters and luminance cameras, and their evaluation.
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full-frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
HandSight: Supporting Everyday Activities through Touch-Vision
2015-10-01
… switches between IR and RGB
o Large, low resolution, and fixed focal length > 1 ft
• Raspberry Pi NoIR: https://www.raspberrypi.org/products/pi-noir-camera/
o Raspberry Pi NoIR camera with external visible light filters
o Good image quality, manually adjustable focal length, small, programmable
A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-01-01
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
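A minimal PyTorch sketch of a small residual CNN for two-class eye-state images, in the spirit of the approach described; the actual network depth, input size, and training setup in the paper differ, and all names here are invented.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # residual connection

model = nn.Sequential(
    nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
    ResBlock(32), ResBlock(32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2))                       # logits: open vs. closed

logits = model(torch.randn(1, 3, 64, 64))
print(logits.shape)                         # torch.Size([1, 2])
```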
2016-11-21
Surface features are visible on Saturn's moon Prometheus in this view from NASA's Cassini spacecraft. Most of Cassini's images of Prometheus are too distant to resolve individual craters, making views like this a rare treat. Saturn's narrow F ring, which makes a diagonal line beginning at top center, appears bright and bold in some Cassini views, but not here. Since the sun is nearly behind Cassini in this image, most of the light hitting the F ring is being scattered away from the camera, making it appear dim. Light-scattering behavior like this is typical of rings comprised of small particles, such as the F ring. This view looks toward the unilluminated side of the rings from about 14 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 24, 2016. The view was acquired at a distance of approximately 226,000 miles (364,000 kilometers) from Prometheus and at a sun-Prometheus-spacecraft, or phase, angle of 51 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20508
The PALM-3000 high-order adaptive optics system for Palomar Observatory
NASA Astrophysics Data System (ADS)
Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa
2008-07-01
Deployed as a multi-user shared facility on the 5.1 meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368 active actuator woofer and a 349 active actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and a visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible light imager.
Augmented reality in laser laboratories
NASA Astrophysics Data System (ADS)
Quercioli, Franco
2018-05-01
Laser safety glasses block visibility of the laser light. This is a big nuisance when a clear view of the beam path is required. A headset made up of a smartphone and a viewer can overcome this problem. The user looks at the image of the real world on the cellphone display, captured by its rear camera. An unimpeded and safe sight of the laser beam is then achieved. If the infrared blocking filter of the smartphone camera is removed, the spectral sensitivity of the CMOS image sensor extends in the near infrared region up to 1100 nm. This substantial improvement widens the usability of the device to many laser systems for industrial and medical applications, which are located in this spectral region. The paper describes this modification of a phone camera to extend its sensitivity beyond the visible and make a true augmented reality laser viewer.
Mask-to-wafer alignment system
Sweatt, William C.; Tichenor, Daniel A.; Haney, Steven J.
2003-11-04
A modified beam splitter that has a hole pattern that is symmetric in one axis and anti-symmetric in the other can be employed in a mask-to-wafer alignment device. The device is particularly suited for rough alignment using visible light. The modified beam splitter transmits and reflects light from a source of electromagnetic radiation, and it includes a substrate that has a first surface facing the source of electromagnetic radiation and a second surface that is reflective of said electromagnetic radiation. The substrate defines a hole pattern about a central line of the substrate. In operation, an input beam from a camera is directed toward the modified beam splitter, and the light from the camera that passes through the holes illuminates the reticle on the wafer. The light beam from the camera also projects an image of a corresponding reticle pattern formed on the mask surface that is positioned downstream from the camera. Alignment can be accomplished by detecting the radiation that is reflected from the second surface of the modified beam splitter, since the reflected radiation contains both the image of the pattern from the mask and a corresponding pattern on the wafer.
X-ray ‘ghost images’ could cut radiation doses
NASA Astrophysics Data System (ADS)
Chen, Sophia
2018-03-01
On its own, a single-pixel camera captures pictures that are pretty dull: squares that are completely black, completely white, or some shade of gray in between. All it does, after all, is detect brightness. Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.
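A toy of the statistical reconstruction named in the piece, under the usual computational ghost-imaging assumptions (known random illumination patterns, a single bucket detector): the image is the covariance between the bucket signal and the patterns.

```python
import numpy as np

rng = np.random.default_rng(4)
obj = np.zeros((32, 32)); obj[8:24, 12:20] = 1.0   # hidden transmission map

n_patterns = 4000
patterns = rng.random((n_patterns, 32, 32))         # structured light
bucket = (patterns * obj).sum(axis=(1, 2))          # single-pixel readings

# G(x, y) = <B * P(x, y)> - <B><P(x, y)>, proportional to obj up to noise
G = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)
print(G[16, 16] > G[0, 0])   # interior pixel correlates more strongly: True
# More patterns -> cleaner image; fewer patterns -> noisier reconstruction.
```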
Printed circuit board for a CCD camera head
Conder, Alan D.
2002-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
NASA Astrophysics Data System (ADS)
Le, Nam-Tuan
2017-05-01
Copyright protection and information security are two of the most important issues for digital data, following the development of the internet and computer networks. As an important solution for protection, watermarking technology has become one of the key research challenges in industry and academia. Watermarking technology can be classified into two categories: visible watermarking and invisible watermarking. The invisible technique has an advantage in user interaction because of its invisibility. Applying watermarking to communication is a challenge and a new direction for communication technology. In this paper we propose new research on communication technology using optical camera communications (OCC) based on invisible watermarking. Besides analyzing the performance of the proposed system, we also suggest the frame structure of the PHY and MAC layers for the IEEE 802.15.7r1 specification, which is a revision of the visible light communication (VLC) standardization.
1970-01-01
This 1970 photograph shows the flight unit for Skylab's White Light Coronagraph, an Apollo Telescope Mount (ATM) facility that photographed the solar corona in the visible light spectrum. A TV camera in the instrument provided real-time pictures of the occulted Sun to the astronauts at the control console and also transmitted the images to the ground. The Marshall Space Flight Center had program management responsibility for the development of Skylab hardware and experiments.
Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.
Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung
2017-10-28
Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
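A generic sketch of a small fuzzy inference step of the kind such lane detectors use to adapt a processing parameter to scene illumination (triangular memberships, weighted-average defuzzification); the rule base and all numbers are invented for illustration and are not the paper's.

```python
import numpy as np

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def adapt_threshold(mean_brightness: float) -> float:
    """Map mean image brightness (0-255) to an edge/binarization threshold."""
    dark   = tri(mean_brightness,  -1.0,   0.0,  90.0)
    normal = tri(mean_brightness,  60.0, 128.0, 200.0)
    bright = tri(mean_brightness, 170.0, 255.0, 256.0)
    weights     = np.array([dark, normal, bright])
    consequents = np.array([40.0, 100.0, 160.0])   # low / mid / high threshold
    return float((weights * consequents).sum() / (weights.sum() + 1e-9))

print(adapt_threshold(75.0))   # heavily shadowed scene -> lower threshold
```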
K2 and Herschel/PACS light curve of the Centaur 2060 Chiron
NASA Astrophysics Data System (ADS)
Marton, G.; Kiss, C.; Müller, T. G.; Lellouch, E.; Pál, A.; Molnár, L.
2017-09-01
Recently 2060 Chiron was identified to harbor a ring system (Ortiz et al. 2015), similar to the other Centaur, 10199 Chariklo (Braga-Ribas et al. 2014). We observed 2060 Chiron in the visible range in Campaign 12 of the Kepler/K2 mission, which lasted from December 15, 2016 to March 4, 2017. We obtained the thermal light curve with the PACS photometer camera of the Herschel Space Observatory as a "Must Do Observation", taken at 70 and 160 μm on December 25, 2012. The presence of the ring affects the rotational light curve both in the visible range and in the thermal infrared. With our new observations we can disentangle the contributions of the main body and the ring material.
NASA Astrophysics Data System (ADS)
Yasuda, Hideki; Matsuno, Ryo; Koito, Naoki; Hosoda, Hidemasa; Tani, Takeharu; Naya, Masayuki
2017-12-01
Suppression of visible-light reflection from material surfaces is an important technology for many applications such as flat-panel displays, camera lenses, and solar panels. In this study, we developed an anti-reflective coating design based on a silver nanodisc metasurface. The effective refractive index of a 10-nm-thick monolayer of silver nanodiscs was less than 1.0, which enabled strong suppression of reflection from the underlying substrate. The nanodisc structure was easy to fabricate using a conventional roll-to-roll wet-coating method. The anti-reflective structure was fabricated over a large area.
The Visible Imaging System (VIS) for the Polar Spacecraft
NASA Technical Reports Server (NTRS)
Frank, L. A.; Sigwarth, J. B.; Craven, J. D.; Cravens, J. P.; Dolan, J. S.; Dvorsky, M. R.; Hardebeck, P. K.; Harvey, J. D.; Muller, D. W.
1995-01-01
The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program, which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to the sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions, about 10 and 20 km from a spacecraft altitude of 8 Earth radii (Re). The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.
GETTING TO THE HEART OF A GALAXY
NASA Technical Reports Server (NTRS)
2002-01-01
This collage of images in visible and infrared light reveals how the barred spiral galaxy NGC 1365 is feeding material into its central region, igniting massive star birth and probably causing its bulge of stars to grow. The material also is fueling a black hole in the galaxy's core. A galaxy's bulge is a central, football-shaped structure composed of stars, gas, and dust. The black-and-white image in the center, taken by a ground-based telescope, displays the entire galaxy. But the telescope's resolution is not powerful enough to reveal the flurry of activity in the galaxy's hub. The blue box in the galaxy's central region outlines the area observed by the NASA Hubble Space Telescope's visible-light camera, the Wide Field and Planetary Camera 2 (WFPC2). The red box pinpoints a narrower view taken by the Hubble telescope's infrared camera, the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). A barred spiral is characterized by a lane of stars, gas, and dust slashing across a galaxy's central region. It has a small bulge that is dominated by a disk of material. The spiral arms begin at both ends of the bar. The bar is funneling material into the hub, which triggers star formation and feeds the bulge. The visible-light picture at upper left is a close-up view of the galaxy's hub. The bright yellow orb is the nucleus. The dark material surrounding the orb is gas and dust that is being funneled into the central region by the bar. The blue regions pinpoint young star clusters. In the infrared image at lower right, the Hubble telescope penetrates the dust seen in the WFPC2 picture to reveal more clusters of young stars. The bright blue dots represent young star clusters; the brightest of the red dots are young star clusters enshrouded in dust and visible only in the infrared image. The fainter red dots are older star clusters. The WFPC2 image is a composite of three filters: near-ultraviolet (3327 Angstroms), visible (5552 Angstroms), and near-infrared (8269 Angstroms). The NICMOS image, taken at a wavelength of 16,000 Angstroms, was combined with the visible and near-infrared wavelengths taken by WFPC2. The WFPC2 image was taken in January 1996; the NICMOS data were taken in April 1998. Credits for the ground-based image: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for the WFPC2 image: NASA and John Trauger (Jet Propulsion Laboratory) Credits for the NICMOS image: NASA, ESA, and C. Marcella Carollo (Columbia University)
Unusual Light in Dark Space Revealed by Los Alamos, NASA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smidt, Joseph
2018-01-16
By looking at the dark spaces between visible galaxies and stars the NASA/JPL CIBER sounding rocket experiment has produced data that could redefine what constitutes a galaxy. CIBER, the Cosmic Infrared Background Experiment, is designed to understand the physics going on between visible stars and galaxies. The relatively small, sub-orbital rocket unloads a camera that snaps pictures of the night sky in near-infrared wavelengths, between 1.2 and 1.6 millionth of a meter. Scientists take the data and remove all the known visible stars and galaxies and quantify what is left.
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
NASA Astrophysics Data System (ADS)
Chen, Shih-Hao; Chow, Chi-Wai
2015-01-01
The multiple-input multiple-output (MIMO) scheme can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses a mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n×n Red-Green-Blue (RGB) LED array is desirable. The key step in decoding this signal is to detect the signal direction. If the LED transmitter (Tx) is rotated, the Rx may not recognize the rotation, and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme which can reduce the computational complexity of rotation detection in an LED array VLC system. We use the n×n RGB LED array as the MIMO Tx. In our study, a novel two-dimensional Hadamard coding scheme is proposed. By using the different LED color layers to indicate the rotation, a low-complexity rotation detection method can be used to improve the quality of the received signal. The detection correction rate is above 95% at indoor usage distances. Experimental results confirm the feasibility of the proposed scheme.
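The abstract does not give the exact coding construction; the following is only a minimal sketch of the general idea, showing how a 2D Hadamard basis pattern (built by outer products of 1D Hadamard rows) correlates differently with its rotated copy, so the best-matching code index reveals the array orientation:

    import numpy as np

    def hadamard(n):
        # Sylvester construction; n must be a power of two.
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    # A 2D Hadamard basis pattern for a 4x4 LED array:
    # outer product of two rows of the 1D Hadamard matrix.
    H = hadamard(4)
    pattern = np.outer(H[1], H[2])   # entries are +1/-1 (e.g., LED on/off)

    # Rotating the array changes which basis function the received
    # pattern correlates with, which reveals the rotation.
    def best_match(observed, H):
        scores = {}
        for i in range(H.shape[0]):
            for j in range(H.shape[0]):
                basis = np.outer(H[i], H[j])
                scores[(i, j)] = abs((observed * basis).sum()) / observed.size
        return max(scores, key=scores.get)

    print(best_match(pattern, H))            # (1, 2): the transmitted code
    print(best_match(np.rot90(pattern), H))  # (2, 1): a swapped pair flags rotation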
Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing
NASA Technical Reports Server (NTRS)
Crooke, Julie A.
2003-01-01
The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure the azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method for a Class II visible wavelength laser is not acceptable to even consider attempting for a Class IV laser, and not applicable for an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera are power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. Again, the additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.
Spectral survey of helium lines in a linear plasma device for use in HELIOS imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, H. B., E-mail: rayhb@ornl.gov; Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831; Biewer, T. M.
2016-11-15
Fast visible cameras and a filterscope are used to examine the visible light emission from Oak Ridge National Laboratory's Proto-MPEX. The filterscope has been configured to perform helium line ratio measurements using emission lines at 667.9, 728.1, and 706.5 nm. The measured lines should be mathematically inverted and the ratios compared to a collisional radiative model (CRM) to determine Te and ne. Increasing the number of measurement chords through the plasma improves the inversion calculation and subsequent Te and ne localization. For the filterscope, one spatial chord measurement requires three photomultiplier tubes (PMTs) connected to pellicle beam splitters. Multiple, fast visible cameras with narrowband filters are an alternate technique for performing these measurements with superior spatial resolution. Each camera contains millions of pixels; each pixel is analogous to one filterscope PMT. The data can then be inverted and the ratios compared to the CRM to determine 2-dimensional "images" of Te and ne in the plasma. An assessment is made in this paper of the candidate He I emission lines for an imaging technique.
Production application of injection-molded diffractive elements
NASA Astrophysics Data System (ADS)
Clark, Peter P.; Chao, Yvonne Y.; Hines, Kevin P.
1995-12-01
We demonstrate that transmission kinoforms for visible light applications can be injection molded in acrylic in production volumes. A camera is described that employs molded Fresnel lenses to change the convergence of a projection ranging system. Kinoform surfaces are used in the projection system to achromatize the Fresnel lenses.
C-RED one: ultra-high speed wavefront sensing in the infrared made possible
NASA Astrophysics Data System (ADS)
Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian
2016-07-01
First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. We show the performance of the camera and its main features, and compare them to other high-performance wavefront sensing cameras such as OCAM2, in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes and capable of a spatial resolution of >20 km. The optics and filters are emphasized.
NASA Technical Reports Server (NTRS)
2005-01-01
[Figure 1 (Temperature Map) removed for brevity, see original site] This image composite shows comet Tempel 1 in visible (left) and infrared (right) light (figure 1). The infrared picture highlights the warm, or sunlit, side of the comet, where NASA's Deep Impact probe later hit. These data were acquired about six minutes before impact. The visible image was taken by the medium-resolution camera on the mission's flyby spacecraft, and the infrared data were acquired by the flyby craft's infrared spectrometer.
Opto-mechanical system design of test system for near-infrared and visible target
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Zhu, Guodong; Wang, Yuchao
2014-12-01
Guidance precision is a key index of guided weapon shooting. The factors affecting guidance precision include information processing precision, control system accuracy, laser irradiation accuracy, and so on. Laser irradiation precision is an important factor. Addressing the demand for precision testing of laser irradiators, this paper develops a laser precision test system. The system consists of a modified Cassegrain system, a wide-range CCD camera, a tracking turntable, and an industrial PC, and it images visible light and near-infrared targets at the same time with a near-IR camera. Analysis of the design results shows that, when imaging a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.
NASA Astrophysics Data System (ADS)
Göhler, Benjamin; Lutzmann, Peter
2016-10-01
In this paper, the potential capability of short-wavelength infrared laser gated-viewing for penetrating the smoke and light/heat produced by pyrotechnic effects has been investigated by evaluating data from field trials. The potential of thermal infrared cameras for this purpose has also been considered, and the results have been compared to conventional visible cameras as a benchmark. The application area is use in soccer stadiums, where pyrotechnics are illegally burned in dense crowds of people, obstructing the visibility of stadium safety staff and police forces into the involved section of the stadium. Quantitative analyses have been carried out to identify sensor performance. Further, qualitative image comparisons are presented to give impressions of image quality during the disruptive effects of burning pyrotechnics.
Omnidirectional structured light in a flexible configuration.
Paniagua, Carmen; Puig, Luis; Guerrero, José J
2013-10-14
Structured light is a perception method that allows us to obtain 3D information from images of the scene by projecting synthetic features with a light emitter. Traditionally, this method considers a rigid configuration, where the position and orientation of the light emitter with respect to the camera are known and calibrated beforehand. In this paper we propose a new omnidirectional structured light system in a flexible configuration, which overcomes the rigidness of traditional structured light systems. We propose the use of an omnidirectional camera combined with a conic pattern light emitter. Since the light emitter is visible in the omnidirectional image, the computation of its location is possible. With this information and the projected conic in the omnidirectional image, we are able to compute the conic reconstruction, i.e., the 3D information of the conic in the space. This reconstruction considers the recovery of the depth and orientation of the scene surface where the conic pattern is projected. One application of our proposed structured light system in flexible configuration consists of a wearable omnicamera with a low-cost laser in hand for personal assistance to the visually impaired.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
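The near-field limit quoted above follows from the standard Fraunhofer-distance formula d = 2D^2/λ; a quick check (assuming a mid-visible wavelength of 550 nm and a 3.67-m AEOS aperture, both our assumptions) reproduces the abstract's quoted figures to within rounding:

    # Fraunhofer (near-field) distance: d = 2 * D**2 / wavelength.
    # Targets closer than d are in the collecting optics' near field,
    # which is what light field ranging of LEO objects requires.
    wavelength = 550e-9            # meters; mid-visible (assumed)

    for name, aperture in [("1-m telescope", 1.0), ("AEOS (3.67 m)", 3.67)]:
        d = 2 * aperture**2 / wavelength
        print(f"{name}: near field extends to {d / 1e3:,.0f} km")

    # 1-m telescope: near field extends to 3,636 km   (~3,500 km in the text)
    # AEOS (3.67 m): near field extends to 48,980 km  (over 46,000 km in the text)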
Optimal design of an earth observation optical system with dual spectral and high resolution
NASA Astrophysics Data System (ADS)
Yan, Pei-pei; Jiang, Kai; Liu, Kai; Duan, Jing; Shan, Qiusha
2017-02-01
With the increasing demand for high-resolution remote sensing images from both military and civilian users, countries around the world are optimistic about the prospects of higher-resolution remote sensing. Moreover, designing a visible/infrared integrated optical system has important value for earth observation. Because a visible-light system cannot identify camouflage or perform reconnaissance at night, the visible camera should be combined with an infrared camera. An earth observation optical system with dual spectral bands and high resolution is designed. This paper mainly researches the integrated design of the visible and infrared optical system, which makes the system lighter and smaller and achieves one satellite with two uses. The working waveband of the system covers the visible and the middle infrared (3-5 um). Clear imaging in both wavebands is achieved with a dispersive RC system. The focal length of the visible system is 3056 mm with an F/# of 10.91, and the focal length of the middle infrared system is 1120 mm with an F/# of 4. In order to suppress middle infrared thermal radiation and stray light, a second imaging system is used and the narcissus phenomenon is analyzed. A key characteristic of the system is its simple structure, and the special requirements on the Modulation Transfer Function (MTF), spot size, energy concentration, distortion, etc., are all satisfied.
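As a quick consistency check (our own arithmetic, not stated in the abstract), the quoted focal lengths and F-numbers imply that the two channels share essentially the same entrance aperture, which is what makes a single integrated telescope plausible:

    # Aperture diameter D = focal length / F-number.
    visible_D = 3056 / 10.91   # ~280.1 mm
    mid_ir_D = 1120 / 4        # 280.0 mm
    print(f"visible aperture: {visible_D:.1f} mm, mid-IR aperture: {mid_ir_D:.1f} mm")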
Kidd, David G; Brethwaite, Andrew
2014-05-01
This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 2010-2013 model year passenger vehicles with a backup camera or a backup camera plus parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately. Copyright © 2014 Elsevier Ltd. All rights reserved.
The optical design of a visible adaptive optics system for the Magellan Telescope
NASA Astrophysics Data System (ADS)
Kopon, Derek
The Magellan Adaptive Optics system will achieve first light in November of 2012. This AO system contains several subsystems including the 585-actuator concave adaptive secondary mirror, the Calibration Return Optic (CRO) alignment and calibration system, the CLIO 1-5 μm IR science camera, the movable guider camera and active optics assembly, and the W-Unit, which contains both the Pyramid Wavefront Sensor (PWFS) and the VisAO visible science camera. In this dissertation, we present details of the design, fabrication, assembly, alignment, and laboratory performance of the VisAO camera and its optical components. Many of these components required a custom design, such as the Spectral Differential Imaging Wollaston prisms and filters and the coronagraphic spots. One component, the Atmospheric Dispersion Corrector (ADC), required a unique triplet design that had until now never been fabricated and tested on sky. We present the design, laboratory, and on-sky results for our triplet ADC. We also present details of the CRO test setup and alignment. Because Magellan is a Gregorian telescope, the ASM is a concave ellipsoidal mirror. By simulating a star with a white light point source at the far conjugate, we can create a double-pass test of the whole system without the need for a real on-sky star. This allows us to test the AO system closed loop in the Arcetri test tower at its nominal design focal length and optical conjugates. The CRO test will also allow us to calibrate and verify the system off-sky at the Magellan telescope during commissioning and periodically thereafter. We present a design for a possible future upgrade path for a new visible Integral Field Spectrograph. By integrating a fiber array bundle at the VisAO focal plane, we can send light to a pre-existing facility spectrograph, such as LDSS3, which will allow 20 mas spatial sampling and R~1,800 spectra over the band 0.6-1.05 μm. This would be the highest spatial resolution IFU to date, either from the ground or in space.
Variable field-of-view visible and near-infrared polarization compound-eye endoscope.
Kagawa, K; Shogenji, R; Tanaka, E; Yamada, K; Kawahito, S; Tanida, J
2012-01-01
A multi-functional compound-eye endoscope enabling variable field-of-view and polarization imaging, as well as extremely deep focus, is presented, based on a compact compound-eye camera called TOMBO (thin observation module by bound optics). Fixed and movable mirrors are introduced to control the field of view. Metal-wire-grid polarizer thin film applicable to both visible and near-infrared light is attached to the lenses in TOMBO and to the light sources. Control of the field-of-view, polarization, and wavelength of the illumination realizes several observation modes, such as three-dimensional shape measurement, wide field-of-view observation, and close-up observation of superficial tissues and structures beneath the skin.
A user-friendly technical set-up for infrared photography of forensic findings.
Rost, Thomas; Kalberer, Nicole; Scheurer, Eva
2017-09-01
Infrared photography is interesting for use in forensic science and forensic medicine since it reveals findings that are normally almost invisible to the human eye. Originally, infrared photography was made possible by screwing an infrared light transmission filter in front of the camera objective lens. However, this set-up is associated with many drawbacks, such as the loss of the autofocus function, the need for an external infrared source, and long exposure times which make the use of a tripod necessary. These limitations have until now prevented the routine application of infrared photography in forensics. In this study the use of a professional modification inside the digital camera body was evaluated regarding camera handling and image quality. This permanent modification consisted of the replacement of the built-in infrared blocking filter by an infrared transmission filter of 700 nm and 830 nm, respectively. The application of this camera set-up for the photo-documentation of forensically relevant post-mortem findings was investigated in examples of trace evidence such as gunshot residues on the skin, in external findings, e.g. hematomas, as well as in an exemplary internal finding, i.e., Wischnewski spots in a putrefied stomach. The application of scattered light created by indirect flashlight yielded a more uniform illumination of the object, and the use of the 700 nm filter resulted in better pictures than the 830 nm filter. Compared to pictures taken under visible light, infrared photographs generally yielded better contrast. This allowed for discerning more details and revealed findings which were not visible otherwise, such as imprints on a fabric and tattoos in mummified skin. The permanent modification of a digital camera by building in a 700 nm infrared transmission filter resulted in a user-friendly and efficient set-up which qualified for use in daily forensic routine. The main advantages were a clear picture in the viewfinder, an autofocus usable over the whole range of infrared light, and the possibility of using short shutter speeds, which allows taking infrared pictures free-hand. The proposed set-up with a modification of the camera allows a user-friendly application of infrared photography in post-mortem settings. Copyright © 2017 Elsevier B.V. All rights reserved.
The Explosive Counterparts of Gravitational Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Astronomy collaborations like the Dark Energy Survey, which Fermilab leads, can track down the visible sources of gravitational waves caused by binary neutron stars. This animation takes you through the collision of two neutron stars, and shows you the explosion of light and energy seen by the Dark Energy Camera on August 17, 2017.
Nondestructive defect detection in laser optical coatings
NASA Astrophysics Data System (ADS)
Marrs, C. D.; Porteus, J. O.; Palmer, J. R.
1985-03-01
Defects responsible for laser damage in visible-wavelength mirrors are observed at nondamaging intensities using a new video microscope system. Studies suggest that a defect scattering phenomenon combined with lag characteristics of video cameras makes this possible. Properties of the video-imaged light are described for multilayer dielectric coatings and diamond-turned metals.
1967-08-01
The Apollo Telescope Mount (ATM), designed and developed by the Marshall Space Flight Center, served as the primary scientific instrument unit aboard the Skylab. The ATM contained eight complex astronomical instruments designed to observe the Sun over a wide spectrum from visible light to x-rays. This photo depicts a mockup of the ATM contamination monitor camera and photometer.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-08-30
Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on the visible light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
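The paper's tracker is more sophisticated than anything shown here; the following is only a minimal detect-marker-per-frame sketch using normalized cross-correlation in OpenCV, with hypothetical file names and an assumed confidence threshold:

    import cv2

    # Minimal marker detection by template matching; 'marker.png' and
    # 'frame.png' are hypothetical file names standing in for the designed
    # landing marker and one camera frame.
    marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    scores = cv2.matchTemplate(frame, marker, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)

    if best_score > 0.7:   # confidence threshold (assumed)
        x, y = best_xy
        h, w = marker.shape
        center = (x + w // 2, y + h // 2)
        print("marker center:", center, "score:", round(best_score, 3))
    else:
        print("marker not found; hold position or re-acquire")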
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, autonomous landing, and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR&D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for narrow and wide field-of-view (FOV), are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to wide temperature swings and outgassing requirements in the space environment. The lenses should be designed with exceptional stray light performance and minimum lens flare, given intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow FOV (NFOV) lens and a wide FOV (WFOV) lens for an AR&D visible camera system. The lenses are designed using the ZEMAX program; the stray light performance and the lens baffles are simulated using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures for lenses to meet the space environmental requirements.
2015-10-15
NASA's Cassini spacecraft spied this tight trio of craters as it approached Saturn's icy moon Enceladus for a close flyby on Oct. 14, 2015. The craters, located at high northern latitudes, are sliced through by thin fractures -- part of a network of similar cracks that wrap around the snow-white moon. The image was taken with the Cassini spacecraft narrow-angle camera on Oct. 14, 2015, at a distance of approximately 6,000 miles (10,000 kilometers) from Enceladus, using a spectral filter which preferentially admits wavelengths of ultraviolet light centered at 338 nanometers. Image scale is 197 feet (60 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20011
NASA Technical Reports Server (NTRS)
2006-01-01
At least three different kinds of rocks await scientific analysis at the place where NASA's Mars Exploration Rover Spirit will likely spend several months of Martian winter. They are visible in this picture, which the panoramic camera on Spirit acquired during the rover's 809th sol, or Martian day, of exploring Mars (April 12, 2006). Paper-thin layers of light-toned, jagged-edged rocks protrude horizontally from beneath small sand drifts; a light gray rock with smooth, rounded edges sits atop the sand drifts; and several dark gray to black, angular rocks with vesicles (small holes) typical of hardened lava lie scattered across the sand. This view is an approximately true-color rendering that combines images taken through the panoramic camera's 753-nanometer, 535-nanometer, and 432-nanometer filters.
NASA Astrophysics Data System (ADS)
Aparanji, Santosh; Balaswamy, V.; Arun, S.; Supradeepa, V. R.
2018-02-01
In this work, we report and analyse the surprising observation of a rainbow of visible colors, spanning 390 nm to 620 nm, in silica-based, near-infrared, continuous-wave, cascaded Raman fiber lasers. The cascaded Raman laser is pumped at 1117 nm at around 200 W, and at full power we obtain 100 W at 1480 nm. With increasing pump power at 1117 nm, the fiber constituting the Raman laser glows in various hues along its length. From spectroscopic analysis of the emitted visible light, it was identified as harmonic and sum-frequency components of various locally propagating wavelength components. In addition to third-harmonic components, surprisingly, even second-harmonic components were observed. Despite this being a continuous-wave laser, we expect the phase-matching between the core-propagating NIR light and the cladding-propagating visible wavelengths, together with the intensity fluctuations characteristic of Raman lasers, to have played a major role in the generation of visible light. In addition, this surprising generation of visible light provides a powerful non-contact method to deduce the spectrum of light propagating in the fiber. Using static images of the fiber captured by a standard visible camera such as a DSLR, we demonstrate novel, image-processing based techniques to deduce the wavelength component propagating in the fiber at any given spatial location. This provides a powerful diagnostic tool for both length- and power-resolved spectral analysis in Raman fiber lasers, and helps accurate prediction of the optimal length of fiber required for complete and efficient conversion to a given Stokes wavelength.
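The paper does not publish its image-processing pipeline; a crude sketch of the underlying idea, mapping the hue of each glowing fiber segment to an approximate emission wavelength (the linear hue-to-wavelength map below is our assumption, and real calibration would use the camera's spectral response), might look like:

    import colorsys

    def hue_to_wavelength_nm(r, g, b):
        # Very rough map: hue 0 deg (red) ~ 620 nm down to 240 deg (blue)
        # ~ 450 nm, linearly in between (assumed); magenta hues have no
        # spectral equivalent and return None.
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hue_deg = h * 360
        if hue_deg > 270:
            return None
        return 620 - (min(hue_deg, 240) / 240) * (620 - 450)

    # Sample pixels along the fiber image (values are illustrative):
    for rgb in [(255, 40, 20), (60, 255, 60), (40, 60, 255)]:
        print(rgb, "->", round(hue_to_wavelength_nm(*rgb)), "nm")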
NASA Astrophysics Data System (ADS)
Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pål
2014-02-01
One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap in the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. One is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other synchronization challenge is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system, with a particular focus on the camera settings. One of the main challenges in building such a system is that there is no common area of the pitch visible to all the cameras that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided us with acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams. However, there still existed some disparity between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
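A minimal sketch of the pilot-camera idea described above (the camera interface here is hypothetical; a real system would use the vendor SDK, and pushing settings between two frame triggers is the abstract's key point):

    # Pilot-camera exposure synchronization: one camera meters in
    # auto-exposure mode, and between two frames its settings are
    # pushed to the rest of the array.
    class Camera:
        def __init__(self, name):
            self.name = name
            self.exposure_us = 10000   # current shutter time (microseconds)
            self.gain_db = 0.0

        def apply(self, exposure_us, gain_db):
            self.exposure_us = exposure_us
            self.gain_db = gain_db

    def sync_array(pilot, followers):
        # Called once per frame interval, after the pilot's auto-exposure
        # loop has settled on new values.
        for cam in followers:
            cam.apply(pilot.exposure_us, pilot.gain_db)

    pilot = Camera("cam0")
    followers = [Camera(f"cam{i}") for i in range(1, 5)]
    pilot.exposure_us, pilot.gain_db = 8000, 2.0   # pilot's auto-exposure result
    sync_array(pilot, followers)
    print([(c.name, c.exposure_us, c.gain_db) for c in followers])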
City lights of London, England taken during Expedition Six
2003-02-04
ISS006-E-22939 (4 February 2003) --- City lights of London, England were captured with a digital still camera by one of the Expedition Six crewmembers on the International Space Station (ISS). This nighttime view of the British capital shows the city's urban density and infrastructure as highlighted by electrical lighting. Beyond lie isolated bright areas marking the numerous smaller cities and towns of the region, as far southeast as Hastings on the coast. London's two major airports, Heathrow and Gatwick, are visible to the south of the city.
[A Method for Selecting Self-Adoptive Chromaticity of the Projected Markers].
Zhao, Shou-bo; Zhang, Fu-min; Qu, Xing-hua; Zheng, Shi-wei; Chen, Zhe
2015-04-01
The authors designed a self-adaptive projection system composed of a color camera, a projector, and a PC. In detail, a digital micro-mirror device (DMD) acting as a spatial light modulator for the projector was introduced in the optical path to modulate the illuminant spectrum based on red, green, and blue light emitting diodes (LED). However, the color visibility of the active markers is affected by the screen, which has an unknown reflective spectrum. Here, the active markers are a projected spot array, and the chromaticity feature of the markers is sometimes submerged when the screen has a similar spectrum. In order to enhance the color visibility of the active markers relative to the screen, a method for selecting self-adaptive chromaticity of the projected markers in 3D scanning metrology is described. A color camera with three channels limits the accuracy of device characterization. To achieve interconversion between device-independent and device-dependent color spaces, a high-dimensional linear model of the reflective spectrum was built. Prior training samples provide additional constraints to yield a high-dimensional linear model with more than three degrees of freedom. Meanwhile, the spectral power distribution of the ambient light was estimated. Subsequently, the markers' chromaticity in CIE color spaces was selected by maximizing the Euclidean distance. The setting values of RGB were easily estimated via the inverse transform. Finally, we implemented a typical experiment to show the performance of the proposed approach. A 24-patch Munsell Color Checker was used as the projection screen. The color difference in chromaticity coordinates between the active marker and the color patch was used to evaluate the color visibility of the active markers relative to the screen. A comparison between the self-adaptive projection system and a traditional diode-laser light projector is listed and discussed to highlight the advantage of the proposed method.
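The selection step reduces to an argmax over candidate marker colors; a minimal sketch follows (the candidate values and the estimated screen chromaticity are made-up illustrative numbers, and distances are taken in CIE L*a*b*, where Euclidean distance roughly tracks perceived color difference):

    import numpy as np

    # Estimated chromaticity of the screen region (CIE L*a*b*, assumed known
    # from the camera and reflective-spectrum model described in the paper).
    screen_lab = np.array([62.0, 8.0, 12.0])

    # Candidate projector colors the DMD/LED system can produce (illustrative).
    candidates = {
        "red":   np.array([54.0, 70.0, 50.0]),
        "green": np.array([80.0, -70.0, 60.0]),
        "blue":  np.array([35.0, 60.0, -90.0]),
        "cyan":  np.array([85.0, -40.0, -10.0]),
    }

    # Pick the marker color with the maximum Euclidean distance to the screen.
    def pick_marker(candidates, screen):
        return max(candidates, key=lambda k: np.linalg.norm(candidates[k] - screen))

    print("most visible marker color:", pick_marker(candidates, screen_lab))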
New Views of a Familiar Beauty
NASA Technical Reports Server (NTRS)
2005-01-01
[Figures 1-5 removed for brevity, see original site] This image composite compares the well-known visible-light picture of the glowing Trifid Nebula (left panel) with infrared views from NASA's Spitzer Space Telescope (remaining three panels). The Trifid Nebula is a giant star-forming cloud of gas and dust located 5,400 light-years away in the constellation Sagittarius. The false-color Spitzer images reveal a different side of the Trifid Nebula. Where dark lanes of dust are visible trisecting the nebula in the visible-light picture, bright regions of star-forming activity are seen in the Spitzer pictures. All together, Spitzer uncovered 30 massive embryonic stars and 120 smaller newborn stars throughout the Trifid Nebula, in both its dark lanes and luminous clouds. These stars are visible in all the Spitzer images, mainly as yellow or red spots. Embryonic stars are developing stars about to burst into existence. Ten of the 30 massive embryos discovered by Spitzer were found in four dark cores, or stellar 'incubators,' where stars are born. Astronomers using data from the Institute of Radioastronomy millimeter telescope in Spain had previously identified these cores but thought they were not quite ripe for stars. Spitzer's highly sensitive infrared eyes were able to penetrate all four cores to reveal rapidly growing embryos. Astronomers can actually count the individual embryos tucked inside the cores by looking closely at the Spitzer image taken by its infrared array camera (figure 4). This instrument has the highest spatial resolution of Spitzer's imaging cameras. The Spitzer image from the multiband imaging photometer (figure 5), on the other hand, specializes in detecting cooler materials. Its view highlights the relatively cool core material falling onto the Trifid's growing embryos. The middle panel is a combination of Spitzer data from both of these instruments. The embryos are thought to have been triggered by a massive 'type O' star, which can be seen as a white spot at the center of the nebula in all four images. Type O stars are the most massive stars, ending their brief lives in explosive supernovas. The small newborn stars probably arose at the same time as the O star, and from the same original cloud of gas and dust. The Spitzer infrared array camera image is a three-color composite of invisible light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 and 8.0 microns (red). The Spitzer multiband imaging photometer image (figure 3) shows 24-micron emissions. The Spitzer mosaic image combines data from these pictures, showing light of 4.5 microns (blue), 8.0 microns (green) and 24 microns (red). The visible-light image (figure 2) is from the National Optical Astronomy Observatory, Tucson, Ariz.
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
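A quick check of the azimuthal coverage implied by those numbers (our own arithmetic, not stated in the text):

    # Six 92-degree cameras around a full circle: total FOV exceeds 360 degrees,
    # so adjacent cameras overlap, which is what allows seamless stitching.
    n_cameras, fov_deg = 6, 92
    excess = n_cameras * fov_deg - 360          # 192 degrees of total overlap
    overlap_per_seam = excess / n_cameras       # 32 degrees per adjacent pair
    print(excess, overlap_per_seam)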
NASA Astrophysics Data System (ADS)
Kadosh, Itai; Sarusi, Gabby
2017-10-01
The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability, while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents, as a proof-of-concept, a SWIR objective optical design and optimization that is mechanically formed and fitted to the visible objective design but with different lenses, in order to maintain commonality. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lenses' sizes and the mechanical housing, so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore optics design.
Capturing latent fingerprints from metallic painted surfaces using UV-VIS spectroscope
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Scheidat, Tobias; Vielhauer, Claus
2015-03-01
In digital crime scene forensics, contactless non-destructive detection and acquisition of latent fingerprints by means of optical devices such as a high-resolution digital camera, confocal microscope, or chromatic white-light sensor is the initial step prior to destructive chemical development. The applicability of an optical sensor to digitize latent fingerprints primarily depends on the reflection properties of the substrate. Metallic painted surfaces, for instance, pose a problem for conventional sensors which make use of visible light. Since metallic paint is a semi-transparent layer on top of the surface, visible light penetrates it and is reflected off of the metallic flakes randomly disposed in the paint. Fingerprint residues do not impede the light beams, making the ridges invisible. Latent fingerprints can be revealed, however, using ultraviolet light, which does not penetrate the paint. We apply a UV-VIS spectroscope that is capable of capturing images within the range from 163 to 844 nm using 2048 discrete levels. We empirically show that latent fingerprints left behind on metallic painted surfaces become clearly visible within the range from 205 to 385 nm. Our proposed streakiness score, a feature determining the proportion of a ridge-valley pattern in an image, is applied for automatic assessment of a fingerprint's visibility and for distinguishing between fingerprint and empty regions. The experiments are carried out with 100 fingerprint and 100 non-fingerprint samples.
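The paper defines its own streakiness score; one plausible stand-in (our assumption, not the authors' exact formula) measures how much of a patch's spectral energy falls in the spatial-frequency band typical of ridge-valley periodicity:

    import numpy as np

    def streakiness(patch, f_lo=0.05, f_hi=0.25):
        # Fraction of (non-DC) spectral energy in the ridge-frequency band.
        # patch: 2D grayscale array; f_lo/f_hi in cycles/pixel (assumed band,
        # roughly matching ridge spacings of ~4-20 pixels).
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
        h, w = patch.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        radius = np.hypot(fy, fx)
        band = (radius >= f_lo) & (radius <= f_hi)
        nondc = radius > 0
        return spectrum[band].sum() / spectrum[nondc].sum()

    # A synthetic "ridge" patch scores high; flat noise scores lower.
    y, x = np.mgrid[0:64, 0:64]
    ridges = np.sin(2 * np.pi * x / 8.0)            # 8-pixel ridge period
    noise = np.random.default_rng(0).normal(size=(64, 64))
    print(round(streakiness(ridges), 2), round(streakiness(noise), 2))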
Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik
2017-02-10
This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high-speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LED as well as to extract the information embedded in these frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted by considering the incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
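The abstract outlines the fusion but not the exact formulas; a minimal per-pixel sketch under simple independence assumptions (a Gaussian motion prior around the optical-flow prediction and an intensity-proportional likelihood, both our choices) could look like:

    import numpy as np

    def led_posterior(frame, predicted_xy, sigma=6.0):
        # Per-pixel probability that a pixel belongs to the LED.
        # frame: 2D array of pixel intensities in [0, 1].
        # predicted_xy: LED position predicted by optical flow on previous
        # frames. Assumes P(LED | pixel) ~ intensity * motion prior.
        h, w = frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        d2 = (xs - predicted_xy[0]) ** 2 + (ys - predicted_xy[1]) ** 2
        motion_prior = np.exp(-d2 / (2 * sigma**2))   # Gaussian around prediction
        posterior = frame * motion_prior              # fuse intensity likelihood
        return posterior / posterior.sum()

    # Synthetic blurred frame: a dim smear near the predicted position.
    frame = np.zeros((48, 64))
    frame[20:24, 30:40] = 0.5                         # motion-blurred LED streak
    post = led_posterior(frame, predicted_xy=(34, 22))
    print(np.unravel_index(post.argmax(), post.shape))  # (row, col) of LED estimate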
Lee, Onseok; Park, Sunup; Kim, Jaeyoung; Oh, Chilhwan
2017-11-01
The visual scoring method has been used as a subjective evaluation of pigmentary skin disorders. The severity of pigmentary skin disease, especially melasma, is evaluated using a visual scoring method, the MASI (melasma area severity index). This study differentiates between epidermal and dermal pigmented disease, and was undertaken to determine methods to quantitatively measure the severity of pigmentary skin disorders under ultraviolet illumination. The optical imaging system consists of illumination (white LED, UV-A lamp) and image acquisition (DSLR camera, air-cooled CMOS/CCD camera). Each camera is equipped with a polarizing filter to remove glare. To analyze the images taken under visible and UV light, images are divided into the frontal, cheek, and chin regions of melasma patients. Each image then undergoes image processing; to reduce the curvature error in facial contours, a gradient mask is used. The new method of segmenting frontal and lateral facial images is more objective for face-area measurement than the MASI score, and image analysis of darkness and homogeneity is adequate to quantify the conventional MASI score. Under visible light, active lesion margins appear in both epidermal and dermal melanin, whereas melanin is found in the epidermis under UV light. This study objectively analyzes the severity of melasma and attempts to develop new methods of image analysis with ultraviolet optical imaging equipment. Based on the results of this study, our optical imaging system could be used as a valuable tool to assess the severity of pigmentary skin disease. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
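The abstract names darkness and homogeneity as the two image measures but not their formulas; one simple stand-in (our assumption) computes them from the lesion pixels of a grayscale image:

    import numpy as np

    def masi_like_metrics(gray, lesion_mask):
        # Simple darkness and homogeneity measures over a lesion mask.
        # gray: 2D array scaled to [0, 1]; lesion_mask: boolean array.
        # darkness: how much darker the lesion is than surrounding skin.
        # homogeneity: inverse spread of lesion intensity (1 = uniform).
        lesion = gray[lesion_mask]
        skin = gray[~lesion_mask]
        darkness = skin.mean() - lesion.mean()
        homogeneity = 1.0 / (1.0 + lesion.std())
        return darkness, homogeneity

    # Illustrative 2x2 "image": a dark, fairly uniform lesion on lighter skin.
    gray = np.array([[0.8, 0.8], [0.4, 0.5]])
    mask = np.array([[False, False], [True, True]])
    print(masi_like_metrics(gray, mask))   # (~0.35, ~0.95)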
Laser speckle visibility acoustic spectroscopy in soft turbid media
NASA Astrophysics Data System (ADS)
Wintzenrieth, Frédéric; Cohen-Addad, Sylvie; Le Merrer, Marie; Höhler, Reinhard
2014-03-01
We image the evolution in space and time of an acoustic wave propagating along the surface of turbid soft matter by shining coherent light on the sample. The wave locally modulates the speckle interference pattern of the backscattered light, and the speckle visibility [2] is recorded using a camera. We show both experimentally and theoretically how the temporal and spatial correlations in this pattern can be analyzed to obtain the acoustic wavelength and attenuation length. The technique is validated using shear waves propagating in aqueous foam [3]. It may be applied to other kinds of acoustic waves in different forms of turbid soft matter, such as biological tissues, pastes, or concentrated emulsions.
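Speckle visibility is commonly quantified as the local contrast of the speckle image, V = std/mean over a small window; a minimal sketch of that calculation (the window size and synthetic test data are our choices):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_visibility(image, window=7):
        # Local speckle contrast V = std/mean over a sliding window.
        # Motion (here, the passing acoustic wave) blurs the speckle and
        # lowers V, so a visibility map traces the wave across the surface.
        img = image.astype(float)
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img**2, window)
        std = np.sqrt(np.maximum(mean_sq - mean**2, 0))
        return std / np.maximum(mean, 1e-12)

    # Fully developed static speckle has V near 1; blurred speckle has lower V.
    rng = np.random.default_rng(1)
    static = rng.exponential(size=(128, 128))      # ideal speckle intensity
    blurred = uniform_filter(static, 5)            # motion-averaged speckle
    print(speckle_visibility(static).mean(), speckle_visibility(blurred).mean())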
The Explosive Counterparts of Gravitational Waves (Silent Animation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Astronomy collaborations like the Dark Energy Survey, which Fermilab leads, can track down the visible sources of gravitational waves caused by binary neutron stars. This animation, presented here without sound, takes you through the collision of two neutron stars, and shows you the explosion of light and energy seen by the Dark Energy Camera on August 17, 2017.
Examining spring phenology of forest understory using digital photography
Liang Liang; Mark D. Schwartz; Songlin Fei
2011-01-01
Phenology is an important indicator of forest health in relation to energy/nutrient cycles and species interactions. Accurate characterization of forest understory phenology is a crucial part of forest phenology observation. In this study, ground plots set up in a temperate mixed forest in Wisconsin were observed with a visible-light digital camera during spring 2007....
NASA Astrophysics Data System (ADS)
Rizvi, Sadiq; Ley, Peer-Phillip; Knöchelmann, Marvin; Lachmayer, Roland
2018-02-01
Research reveals that visual information forms the major portion of the data received while driving. At night, owing to the sometimes scarce, sometimes inhomogeneous lighting, human physiology and psychology undergo dramatic alterations. Although the likelihood of an accident is higher during the day due to heavier traffic, the most fatal accidents still occur at night. How can road safety be improved in limited lighting conditions using DMD-based high-resolution headlamps? DMD-based pixel light systems, utilizing HID and LED light sources, can address hundreds of thousands of pixels individually. Using camera information, this capability allows 'glare-free' light distributions that adapt precisely to the needs of all road users. What really makes these systems stand out, however, is their on-road image projection capability. This projection functionality may be used in cooperation with other driver assistance systems to project navigation data, warning signs, car status information, etc. Since contrast sensitivity constitutes a decisive measure of human visual function, a core question arises: what distributions of luminance in the projection space produce highly visible on-road image projections? This work seeks to address that question. Responses to sets of differently illuminated projections are collected from a group of participants and later interpreted using statistical data obtained with a luminance camera. Some aspects of the correlation between contrast ratio, symbol form, and attention capture are also discussed.
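For reference, the kind of luminance contrast such a study derives from luminance-camera data can be expressed as a Weber contrast; a minimal sketch with assumed luminance values:

```python
def weber_contrast(symbol_luminance: float, road_luminance: float) -> float:
    """Weber contrast of a projected symbol against the road surface,
    C = (L_symbol - L_background) / L_background."""
    return (symbol_luminance - road_luminance) / road_luminance

# Illustrative example: a projected warning symbol at 4.5 cd/m^2
# on asphalt at 1.5 cd/m^2.
C = weber_contrast(4.5, 1.5)
print(f"Weber contrast: {C:.1f}")  # 2.0; visibility thresholds depend on adaptation
```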
A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA DETAILS
NASA Technical Reports Server (NTRS)
2002-01-01
These are two views of a highly active region of star birth located northeast of the central cluster, R136, in 30 Doradus. The orientation and scale are identical for both views. The top panel is a composite of images in two colors taken with the Hubble Space Telescope's visible-light camera, the Wide Field and Planetary Camera 2 (WFPC2). The bottom panel is a composite of pictures taken through three infrared filters with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). In both cases the colors of the displays were chosen to correlate with the nebula's and stars' true colors. Seven very young objects are identified with numbered arrows in the infrared image. Number 1 is a newborn, compact cluster dominated by a triple system of 'hefty' stars. It has formed within the head of a massive dust pillar pointing toward R136. The energetic outflows from R136 have shaped the pillar and triggered the collapse of clouds within its summit to form the new stars. The radiation and outflows from these new stars have in turn blown off the top of the pillar, so they can be seen in the visible-light as well as the infrared image. Numbers 2 and 3 also pinpoint newborn stars or stellar systems inside an adjacent, bright-rimmed pillar, likewise oriented toward R136. These objects are still immersed within their natal dust and can be seen only as very faint, red points in the visible-light image. They are, however, among the brightest objects in the infrared image, since dust does not block infrared light as much as visible light. Thus, numbers 2 and 3 and number 1 correspond respectively to two successive stages in the birth of massive stars. Number 4 is a very red star that has just formed within one of several very compact dust clouds nearby. Number 5 is another very young triple-star system with a surrounding cluster of fainter stars. They also can be seen in the visible-light picture. Most remarkable are the glowing patches numbered 6 and 7, which astronomers have interpreted as 'impact points' produced by twin jets of material slamming into surrounding dust clouds. These 'impact points' are perfectly aligned on opposite sides of number 5 (the triple-star system), and each is separated from the star system by about 5 light-years. The jets probably originate from a circumstellar disk around one of the young stars in number 5. They may be rotating counterclockwise, thus producing moving, luminous patches on the surrounding dust, like a searchlight creating spots on clouds. These infrared patches produced by jets from a massive, young star are a new astronomical phenomenon. Credits for NICMOS image: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barbá (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)
Spirit Scans Winter Haven (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
At least three different kinds of rocks await scientific analysis at the place where NASA's Mars Exploration Rover Spirit will likely spend several months of Martian winter. They are visible in this picture, which the panoramic camera on Spirit acquired during the rover's 809th sol, or Martian day, of exploring Mars (April 12, 2006). Paper-thin layers of light-toned, jagged-edged rocks protrude horizontally from beneath small sand drifts; a light gray rock with smooth, rounded edges sits atop the sand drifts; and several dark gray to black, angular rocks with vesicles (small holes) typical of hardened lava lie scattered across the sand. This view is a false-color rendering that combines images taken through the panoramic camera's 753-nanometer, 535-nanometer, and 432-nanometer filters.
7. VIEW OF TIP TOP AND PHILLIPS MINES. PHOTO MADE FROM THE 'NOTTINGHAM' SADDLE VISIBLE IN PHOTOGRAPHS ID-31-3 AND ID-31-6. CAMERA POINTED NORTHEAST TIP TOP IS CLEARLY VISIBLE IN UPPER RIGHT; RUNNING A STRAIGHT EDGE THROUGH THE TRUNK LINE OF SMALL TREE IN LOWER RIGHT THROUGH TRUNK LINE OF LARGER TREE WILL DIRECT ONE TO LIGHT AREA WHERE TIP TOP IS LOCATED; BLACK SQUARE IS THE RIGHT WINDOW ON WEST SIDE (FRONT) OF STRUCTURE. PHILLIPS IS VISIBLE BY FOLLOWING TREE LINE DIAGONALLY THROUGH IMAGE TO FAR LEFT SIDE. SULLIVAN IS HIDDEN IN THE TREE TO THE RIGHT OF PHILLIPS. - Florida Mountain Mining Sites, Silver City, Owyhee County, ID
Ultraviolet laser beam monitor using radiation responsive crystals
McCann, Michael P.; Chen, Chung H.
1988-01-01
An apparatus and method for monitoring an ultraviolet laser beam includes disposing in the path of an ultraviolet laser beam a substantially transparent crystal that will produce a color pattern in response to ultraviolet radiation. The crystal is exposed to the ultraviolet laser beam, and a color pattern is produced within the crystal corresponding to the laser beam intensity distribution therein. The crystal is then exposed to visible light, and the color pattern is observed by means of the visible light to determine the characteristics of the laser beam that passed through the crystal. In this manner, a perpendicular cross-sectional intensity profile and a longitudinal intensity profile of the ultraviolet laser beam may be determined. The observation of the color pattern may be made with forward or back scattered light and may be made with the naked eye or with optical systems such as microscopes and television cameras.
Spitzer Makes 'Invisible' Visible
NASA Technical Reports Server (NTRS)
2004-01-01
Hidden behind a shroud of dust in the constellation Cygnus is a stellar nursery called DR21, which is giving birth to some of the most massive stars in our galaxy. Visible light images reveal no trace of this interstellar cauldron because of heavy dust obscuration. In fact, visible light is attenuated in DR21 by a factor of more than 10,000,000,000,000,000,000,000,000,000,000,000,000,000 (ten thousand trillion heptillion). New images from NASA's Spitzer Space Telescope allow us to peek behind the cosmic veil and pinpoint one of the most massive natal stars yet seen in our Milky Way galaxy. The never-before-seen star is 100,000 times as bright as the Sun. Also revealed for the first time is a powerful outflow of hot gas emanating from this star and bursting through a giant molecular cloud. The colorful image is a large-scale composite mosaic assembled from data collected at a variety of different wavelengths. Views at visible wavelengths appear blue, near-infrared light is depicted as green, and mid-infrared data from the InfraRed Array Camera (IRAC) aboard NASA's Spitzer Space Telescope is portrayed as red. The result is a contrast between structures seen in visible light (blue) and those observed in the infrared (yellow and red). A quick glance shows that most of the action in this image is revealed to the unique eyes of Spitzer. The image covers an area about two times that of a full moon.
HUBBLE PROVIDES 'ONE-TWO PUNCH' TO SEE BIRTH OF STARS IN GALACTIC WRECKAGE
NASA Technical Reports Server (NTRS)
2002-01-01
Two powerful cameras aboard NASA's Hubble Space Telescope teamed up to capture the final stages in the grand assembly of galaxies. The photograph, taken by the Advanced Camera for Surveys (ACS) and the revived Near Infrared Camera and Multi-Object Spectrometer (NICMOS), shows a tumultuous collision between four galaxies located 1 billion light-years from Earth. The galactic car wreck is creating a torrent of new stars. The tangled up galaxies, called IRAS 19297-0406, are crammed together in the center of the picture. IRAS 19297-0406 is part of a class of galaxies known as ultraluminous infrared galaxies (ULIRGs). ULIRGs are considered the progenitors of massive elliptical galaxies. ULIRGs glow fiercely in infrared light, appearing 100 times brighter than our Milky Way Galaxy. The large amount of dust in these galaxies produces the brilliant infrared glow. The dust is generated by a firestorm of star birth triggered by the collisions. IRAS 19297-0406 is producing about 200 new Sun-like stars every year -- about 100 times more stars than our Milky Way creates. The hotbed of this star formation is the central region [the yellow objects]. This area is swamped in the dust created by the flurry of star formation. The bright blue material surrounding the central region corresponds to the ultraviolet glow of new stars. The ultraviolet light is not obscured by dust. Astronomers believe that this area is creating fewer new stars and therefore not as much dust. The colliding system [yellow and blue regions] has a diameter of about 30,000 light-years, or about half the size of the Milky Way. The tail [faint blue material at left] extends out for another 20,000 light-years. Astronomers used both cameras to witness the flocks of new stars that are forming from the galactic wreckage. NICMOS penetrated the dusty veil that masks the intense star birth in the central region. ACS captured the visible starlight of the colliding system's blue outer region. IRAS 19297-0406 may be similar to the so-called Hickson compact groups -- clusters of at least four galaxies in a tight configuration that are isolated from other galaxies. The galaxies are so close together that they lose energy from the relentless pull of gravity. Eventually, they fall into each other and form one massive galaxy. This color-composite image was made by combining photographs taken in near-infrared light with NICMOS and ultraviolet and visible light with ACS. The pictures were taken with these filters: the H-band and J-band on NICMOS; the V-band on the ACS wide-field camera; and the U-band on the ACS high-resolution camera. The images were taken on May 13 and 14. Credits: NASA, the NICMOS Group (STScI, ESA), and the NICMOS Science Team (University of Arizona)
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures 1 and 2 removed for brevity; see original site.] This image composite compares infrared and visible views of the famous Orion nebula and its surrounding cloud, an industrious star-making region located near the hunter constellation's sword. The infrared picture is from NASA's Spitzer Space Telescope, and the visible image is from the National Optical Astronomy Observatory, headquartered in Tucson, Ariz. In addition to Orion, two other nebulas can be seen in both pictures. The Orion nebula, or M42, is the largest and takes up the lower half of the images; the small nebula to the upper left of Orion is called M43; and the medium-sized nebula at the top is NGC 1977. Each nebula is marked by a ring of dust that stands out in the infrared view. These rings make up the walls of cavities that are being excavated by radiation and winds from massive stars. The visible view of the nebulas shows gas heated by ultraviolet radiation from the massive stars. Above the Orion nebula, where the massive stars have not yet ejected much of the obscuring dust, the visible image appears dark with only a faint glow. In contrast, the infrared view penetrates the dark lanes of dust, revealing bright swirling clouds and numerous developing stars that have shot out jets of gas (green). This is because infrared light can travel through dust, whereas visible light is stopped short by it. The infrared image shows light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight.
About Jupiter's Reflectance Function in JunoCam Images
NASA Astrophysics Data System (ADS)
Eichstaedt, G.; Orton, G. S.; Momary, T.; Hansen, C. J.; Caplinger, M.
2017-09-01
NASA's Juno spacecraft has successfully completed several perijove passes. JunoCam is Juno's visible light and infrared camera. It was added to the instrument complement to investigate Jupiter's polar regions, and for education and public outreach purposes. Images of Jupiter taken by JunoCam have been revealing effects that can be interpreted as caused by a haze layer. This presumed haze layer appears to be structured, and it partially obscures Jupiter's cloud top. With empirical investigation of Jupiter's reflectance function we intend to separate light contributed by haze from light reflected off Jupiter's cloud tops, enabling both layers to be investigated separately.
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
NASA Astrophysics Data System (ADS)
Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2017-10-01
An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, three-dimensional imaging systems based on scanning prohibit real-time operation. This is easily solved, and economically feasible, using a GDD array, which enables information on distance and magnitude to be acquired from all the GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation and measuring the time of flight (TOF).
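A minimal sketch of the FMCW ranging principle mentioned at the end, in which the beat frequency between the transmitted and received chirps maps linearly to distance (the chirp parameters below are assumptions, not the authors' hardware values):

```python
# Minimal FMCW range calculation: distance from the measured beat frequency.
C_LIGHT = 3.0e8          # speed of light, m/s

def fmcw_range(f_beat_hz: float, bandwidth_hz: float, chirp_time_s: float) -> float:
    """R = c * f_beat * T / (2 * B) for a linear frequency chirp of
    bandwidth B swept over time T."""
    return C_LIGHT * f_beat_hz * chirp_time_s / (2.0 * bandwidth_hz)

# Illustrative numbers: 1 GHz chirp bandwidth over 1 ms, 40 kHz beat tone.
print(fmcw_range(40e3, 1e9, 1e-3), "m")   # -> 6.0 m
```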
C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors
NASA Astrophysics Data System (ADS)
Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David
2018-02-01
After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to SWIR fast cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with subelectron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imagery. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on board thanks to an FPGA. We show its performance and present its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. This camera is called C-RED 2; its characteristics and performance are described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the Future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.
Optimization of subcutaneous vein contrast enhancement
NASA Astrophysics Data System (ADS)
Zeman, Herbert D.; Lovhoiden, Gunnar; Deshmukh, Harshal
2000-05-01
A technique for enhancing the contrast of subcutaneous veins has been demonstrated. This technique uses a near-IR light source and one or more IR-sensitive CCD TV cameras to produce a contrast-enhanced image of the subcutaneous veins. This video image of the veins is projected back onto the patient's skin using an LCD video projector. An IR-transmitting filter in front of the video cameras prevents visible light from the video projector from feeding back and causing instabilities in the projected image. The demonstration contrast-enhancing illuminator has been tested on adults and children, both Caucasian and African-American, and it enhances veins quite well in all cases. The most difficult cases are those where significant deposits of subcutaneous fat are present, which make the veins invisible under normal room illumination. Recent attempts to see through fat using different IR wavelength bands and both linearly and circularly polarized light were unsuccessful. The key to seeing through fat turns out to be a very diffuse source of IR light. Results on adult and pediatric subjects are shown with this new IR light source.
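A minimal sketch of the processing chain such a system implies: stretch the vein/skin contrast of an IR-filtered image before projecting it back onto the skin (the percentile normalization and inversion here are assumptions, not the published pipeline):

```python
import numpy as np

def enhance_vein_contrast(ir_image: np.ndarray) -> np.ndarray:
    """Percentile-based contrast stretch of a near-IR frame so that
    dark (IR-absorbing) veins stand out before re-projection."""
    lo, hi = np.percentile(ir_image, (2, 98))
    stretched = np.clip((ir_image - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return 1.0 - stretched          # invert so veins are rendered bright

rng = np.random.default_rng(3)
skin = rng.normal(0.7, 0.02, (100, 100))   # diffusely lit skin in near-IR
skin[:, 48:52] -= 0.08                     # a vein absorbs near-IR light
projected = enhance_vein_contrast(skin)
print("vein vs. skin output levels:",
      projected[:, 50].mean(), projected[:, 10].mean())
```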
Multi-Wavelength Views of Messier 81
NASA Technical Reports Server (NTRS)
2003-01-01
[Figures removed for brevity; see original site.] The magnificent spiral arms of the nearby galaxy Messier 81 are highlighted in this image from NASA's Spitzer Space Telescope. Located in the northern constellation of Ursa Major (which also includes the Big Dipper), this galaxy is easily visible through binoculars or a small telescope. M81 is located at a distance of 12 million light-years. The main image is a composite mosaic obtained with the multiband imaging photometer for Spitzer and the infrared array camera. Thermal infrared emission at 24 microns detected by the photometer (red, bottom left inset) is combined with camera data at 8.0 microns (green, bottom center inset) and 3.6 microns (blue, bottom right inset). A visible-light image of Messier 81, obtained at Kitt Peak National Observatory, a ground-based telescope, is shown in the upper right inset. Both the visible-light picture and the 3.6-micron near-infrared image trace the distribution of stars, although the Spitzer image is virtually unaffected by obscuring dust. Both images reveal a very smooth stellar mass distribution, with the spiral arms relatively subdued. As one moves to longer wavelengths, the spiral arms become the dominant feature of the galaxy. The 8-micron emission is dominated by infrared light radiated by hot dust that has been heated by nearby luminous stars. Dust in the galaxy is bathed by ultraviolet and visible light from nearby stars. Upon absorbing an ultraviolet or visible-light photon, a dust grain is heated and re-emits the energy at longer infrared wavelengths. The dust particles are composed of silicates (chemically similar to beach sand), carbonaceous grains and polycyclic aromatic hydrocarbons, and trace the gas distribution in the galaxy. The well-mixed gas (which is best detected at radio wavelengths) and dust provide a reservoir of raw materials for future star formation. The 24-micron multiband imaging photometer image shows emission from warm dust heated by the most luminous young stars. The infrared-bright clumpy knots within the spiral arms show where massive stars are being born in giant H II (ionized hydrogen) regions. Studying the locations of these star forming regions with respect to the overall mass distribution and other constituents of the galaxy (e.g., gas) will help identify the conditions and processes needed for star formation.
Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung
2016-12-16
Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS) installed in many automobiles. Over time, arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marking creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists on the automated identification of damaged arrow-road markings painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, with a visible light camera sensor. Experimental results with six databases (the Road Marking dataset, the KITTI dataset, the Málaga 2009 dataset, the Málaga urban dataset, the Naver street view dataset, and the Road/Lane Detection Evaluation 2013 dataset) show that our method outperforms conventional methods.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-01-01
Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields, where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if drones operate in areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we also performed tests on an embedded system in various environments. PMID:28867775
A study on a portable fluorescence imaging system
NASA Astrophysics Data System (ADS)
Chang, Han-Chao; Wu, Wen-Hong; Chang, Chun-Li; Huang, Kuo-Cheng; Chang, Chung-Hsing; Chiu, Shang-Chen
2011-09-01
The fluorescent reaction occurs when an organism or dye, excited by UV light (200-405 nm), emits light at a specific frequency, usually visible or near-infrared (405-900 nm). During UV irradiation, the photosensitive agent is induced to start the photochemical reaction. The fluorescence image can be used for fluorescence diagnosis, after which photodynamic therapy can be given for dental diseases and skin cancer; this has become a useful tool for providing scientific evidence in much biomedical research. However, most methods of acquiring fluorescence biology traces remain primitive, relying on naked-eye observation and the researcher's subjective judgment. This article presents a portable camera that obtains the fluorescence image and compensates for deficits in observer competence and subjective judgment. Furthermore, the portable camera offers a 375 nm UV-LED exciting light source for the user to record fluorescence images, making the recorded image persuasive scientific evidence. In addition, to raise the signal-to-noise ratio, the signal processing module not only amplifies the fluorescence signal by up to 70%, but also significantly decreases the noise from environmental light in bill and nude mouse testing.
2015-12-09
This representation of Ceres' Occator Crater in false colors shows differences in the surface composition. Red corresponds to a wavelength range around 0.97 micrometers (near infrared), green to a wavelength range around 0.75 micrometers (red, visible light) and blue to a wavelength range of around 0.44 micrometers (blue, visible light). Occator measures about 60 miles (90 kilometers) wide. Scientists use false color to examine differences in surface materials. The color blue on Ceres is generally associated with bright material, found in more than 130 locations, and seems to be consistent with salts, such as sulfates. It is likely that silicate materials are also present. The images were obtained by the framing camera on NASA's Dawn spacecraft from a distance of about 2,700 miles (4,400 kilometers). http://photojournal.jpl.nasa.gov/catalog/PIA20180
Calibration method for video and radiation imagers
Cunningham, Mark F [Oak Ridge, TN; Fabris, Lorenzo [Knoxville, TN; Gee, Timothy F [Oak Ridge, TN; Goddard, Jr., James S.; Karnowski, Thomas P [Knoxville, TN; Ziock, Klaus-peter [Clinton, TN
2011-07-05
The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least-squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
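A minimal sketch of the least-squares step described here, assuming a linear map between real-world x-coordinates and imager pixels (the marker positions are illustrative):

```python
import numpy as np

# Measured calibration markers: real-world x (meters) vs. high-energy
# radiation imager pixel x. Fit the linear model x_pix = a*x_world + b
# by least squares, as in the described calibration.
x_world = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])     # meters
x_pixel = np.array([4.1, 19.8, 36.0, 52.2, 67.9])   # imager pixels

A = np.vstack([x_world, np.ones_like(x_world)]).T
coeffs, *_ = np.linalg.lstsq(A, x_pixel, rcond=None)
a, b = coeffs
print(f"pixel = {a:.2f} * x_world + {b:.2f}")

# The per-vehicle scale parameter would then be adjusted by the lane
# distance (y-coordinate) reported by the visible light camera.
```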
NASA's AVIRIS Instrument Sheds New Light on Southern California Wildfires
2017-12-08
NASA's Airborne Visible Infrared Imaging Spectrometer instrument (AVIRIS), flying aboard a NASA Armstrong Flight Research Center high-altitude ER-2 aircraft, flew over the wildfires burning in Southern California on Dec. 5, 2017 and acquired this false-color image. Active fires are visible in red, ground surfaces are in green and smoke is in blue. AVIRIS is an imaging spectrometer that observes light in visible and infrared wavelengths, measuring the full spectrum of radiated energy. Unlike regular cameras with three colors, AVIRIS has 224 spectral channels from the visible through the shortwave infrared. This permits mapping of fire temperatures, fractional coverage, and surface properties, including how much fuel is available for a fire. Spectroscopy is also valuable for characterizing forest drought conditions and health to assess fire risk. AVIRIS has been observing fire-prone areas in Southern California for many years, forming a growing time series of before/after data cubes. These data are helping improve scientific understanding of fire risk and how ecosystems respond to drought and fire. https://photojournal.jpl.nasa.gov/catalog/PIA11243
Europe's space camera unmasks a cosmic gamma-ray machine
NASA Astrophysics Data System (ADS)
1996-11-01
The new-found neutron star is the visible counterpart of a pulsating radio source, Pulsar 1055-52. It is a mere 20 kilometres wide. Although the neutron star is very hot, at about a million degrees C, very little of its radiant energy takes the form of visible light. It emits mainly gamma-rays, an extremely energetic form of radiation. By examining it at visible wavelengths, astronomers hope to figure out why Pulsar 1055-52 is the most efficient generator of gamma-rays known so far, anywhere in the Universe. The Faint Object Camera found Pulsar 1055-52 in near ultraviolet light at 3400 angstroms, a little shorter in wavelength than the violet light at the extremity of the human visual range. Roberto Mignani, Patrizia Caraveo and Giovanni Bignami of the Istituto di Fisica Cosmica in Milan, Italy, report its optical identification in a forthcoming issue of Astrophysical Journal Letters (1 January 1997). The formal name of the object is PSR 1055-52. Evading the glare of an adjacent star The Italian team had tried since 1988 to spot Pulsar 1055-52 with two of the most powerful ground-based optical telescopes in the Southern Hemisphere. These were the 3.6-metre Telescope and the 3.5-metre New Technology Telescope of the European Southern Observatory at La Silla, Chile. Unfortunately an ordinary star 100,000 times brighter lay in almost the same direction in the sky, separated from the neutron star by only a thousandth of a degree. The Earth's atmosphere defocused the star's light sufficiently to mask the glimmer from Pulsar 1055-52. The astronomers therefore needed an instrument in space. The Faint Object Camera offered the best precision and sensitivity to continue the hunt. Devised by European astronomers to complement the American wide field camera in the Hubble Space Telescope, the Faint Object Camera has a relatively narrow field of view. It intensifies the image of a faint object by repeatedly accelerating electrons from photo-electric films, so as to produce brighter flashes when the electrons hit a phosphor screen. Since Hubble's launch in 1990, the Faint Object Camera has examined many different kinds of cosmic objects, from the moons of Jupiter to remote galaxies and quasars. When the space telescope's optics were corrected at the end of 1993, the Faint Object Camera immediately celebrated the event with the discovery of primeval helium in intergalactic gas. In their search for Pulsar 1055-52, the astronomers chose a near-ultraviolet filter to sharpen the Faint Object Camera's vision and reduce the adjacent star's huge advantage in intensity. In May 1996, the Hubble Space Telescope operators aimed at the spot that radio astronomers had indicated as the source of the radio pulsations of Pulsar 1055-52. The neutron star appeared precisely in the centre of the field of view, and it was clearly separated from the glare of the adjacent star. At magnitude 24.9, Pulsar 1055-52 was comfortably within the power of the Faint Object Camera, which can see stars 20 times fainter still. "The Faint Object Camera is the instrument of choice for looking for neutron stars," says Giovanni Bignami, speaking on behalf of the Italian team. "Whenever it points to a judiciously selected neutron star it detects the corresponding visible or ultraviolet light. The Faint Object Camera has now identified three neutron stars in that way, including Pulsar 1055-52, and it has examined a few that were first detected by other instruments."
Mysteries of the neutron stars The importance of the new result can be gauged by the tally of only eight neutron stars seen so far at optical wavelengths, compared with about 760 known from their radio pulsations, and about 21 seen emitting X-rays. Since the first pulsar was detected by radio astronomers in Cambridge, England, nearly 30 years ago, theorists have come to recognize neutron stars as fantastic objects. They are veritable cosmic laboratories in which Nature reveals the behaviour of matter under extreme stress, just one step short of a black hole. A neutron star is created by the force of a supernova explosion in a large star, which crushes the star's core to an unimaginable density. A mass greater than the Sun's is squeezed into a ball no wider than a city. The gravity and magnetic fields are billions of times stronger than the Earth's. The neutron star revolves rapidly, which causes it to wink like a cosmic lighthouse as it swivels its magnetic poles towards and away from the Earth. Pulsar 1055-52 spins at five revolutions per second. At its formation in a supernova explosion, a neutron star is endowed with two main forms of energy. One is heat, at temperatures of millions of degrees, which the neutron star radiates mainly as X-rays, with only a small proportion emerging as visible light. The other power supply for the neutron star comes from its high rate of spin and a gradual slowing of the rotation. By a variety of processes involving the magnetic field and accelerated particles in the neutron star's vicinity, the spin energy of the neutron star is converted into radiation at many different wavelengths, from radio waves to gamma-rays. The exceptional gamma-ray intensity of Pulsar 1055-52 was first appreciated in observations by NASA's Compton Gamma Ray Observatory. The team in Milan recently used the Hubble Space Telescope to find the distance of the peculiar neutron star Geminga, which is not detectable by radio pulses but is a strong source of gamma-rays (see ESA Information Note 04-96, 28 March 1996). Pulsar 1055-52 is even more powerful in that respect. About 50 per cent of its radiant energy is gamma-rays, compared with 15 per cent from Geminga and 0.1 per cent from the famous Crab Pulsar, the first neutron star seen by visible light. Making the gamma-rays requires the acceleration of electrons through billions of volts. The magnetic environment of Pulsar 1055-52 fashions a natural gamma-ray machine of amazing power. The orientation of the neutron star's magnetic field with respect to the Earth may contribute to its brightness in gamma-rays. Geminga, Pulsar 1055-52 and another object, Pulsar 0656+14, make a trio that the Milanese astronomers call the Three Musketeers. All have been observed with the Faint Object Camera. They are isolated, elderly neutron stars, some hundreds of thousands of years old, contrasting with the 942 year-old Crab Pulsar which is still surrounded by dispersing debris of a supernova seen by Chinese astronomers in the 11th Century. The mysteries of the neutron stars will keep astronomers busy for years to come, and the Faint Object Camera in the Hubble Space Telescope will remain the best instrument for spotting their faint visible light. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA). The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy, Inc. 
(AURA) for NASA, under contract with the Goddard Space Flight Center, Greenbelt, Maryland. Note to editors: An image is available of (i) PSR 1055-52 seen by ESA's Faint Object Camera in the Hubble Space Telescope, and (ii) the same region of the sky seen by the European Southern Observatory's New Technology Telescope, with the position of PSR 1055-52 indicated. The image is available on the World Wide Web at http://ecf.hq.eso.org/stecf-pubrel.html http://www.estec.esa.nl/spdwww/h2000/html/snlmain.htm
Earth taken by Galileo after completing its first Earth Gravity Assist
NASA Technical Reports Server (NTRS)
1990-01-01
This near-infrared photograph of Earth was taken by the Galileo spacecraft at 6:07 a.m. Pacific Standard Time (PST) on 12-11-90, at a range of about 1.32 million miles. The camera used light with a wavelength of 1 micron, which easily penetrates atmospheric hazes and enhances the brightness of land surfaces. South America is prominent near the center; at the top, the East Coast of the United States, including Florida, is visible. The West Coast of Africa is visible on the horizon at right. Photo provided by the Jet Propulsion Laboratory (JPL) with alternate number P-37328, 12-19-90.
2005-01-17
This Cassini image shows predominantly the impact-scarred leading hemisphere of Saturn's icy moon Rhea (1,528 kilometers, or 949 miles across). The image was taken in visible light with the Cassini spacecraft narrow angle camera on Dec. 12, 2004, at a distance of 2 million kilometers (1.2 million miles) from Rhea and at a Sun-Rhea-spacecraft, or phase, angle of 30 degrees. The image scale is about 12 kilometers (7.5 miles) per pixel. The image has been magnified by a factor of two and contrast enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06564
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera, suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera is successfully demonstrated operating in the CDMA mode with up to 4096-bit Walsh-design CAOS pixel codes at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micro-mirror square pixel, 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, unspoiled, bright-light, spectrally diverse targets.
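A toy illustration of CDMA-mode encoding and time-domain correlation decoding with Walsh-type codes (two pixels and length-8 codes; the real camera uses a DMD, up to 4096-bit codes, and a point photodetector):

```python
import numpy as np

def walsh_codes(n: int) -> np.ndarray:
    """Sylvester-construction Hadamard matrix; its rows are orthogonal
    +/-1 Walsh-type codes of length n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_codes(8)
pix_a, pix_b = 0.9, 0.3                  # two CAOS pixel irradiances
# The point detector sees the sum of the code-modulated pixel signals
# over the code's time slots.
detector = pix_a * codes[1] + pix_b * codes[2]
# Time-domain correlation recovers each pixel despite the shared detector.
print("decoded:", detector @ codes[1] / 8, detector @ codes[2] / 8)
```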
South Melea Planum, By The Dawn's Early Light
NASA Technical Reports Server (NTRS)
1999-01-01
MOC 'sees' by the dawn's early light! This picture was taken over the high southern polar latitudes during the first week of May 1999. The area shown is currently in southern winter darkness. Because sunlight is scattered over the horizon by aerosols--dust and ice particles--suspended in the atmosphere, sufficient light reaches regions within a few degrees of the terminator (the line dividing night and day) to be visible to the Mars Global Surveyor Mars Orbiter Camera (MOC) when the maximum exposure settings are used. This image shows a bright, wispy cloud hanging over southern Malea Planum. This cloud would not normally be visible, since it is currently in darkness. At the time this picture was taken, the sun was more than 5.7° below the northern horizon. The scene covers an area 3 kilometers (1.9 miles) wide. Again, the illumination is from the top. In this frame, the surface appears a relatively uniform gray. At the time the picture was acquired, the surface was covered with south polar wintertime frost. The highly reflective frost, in fact, may have contributed to the increased visibility of this surface. This 'twilight imaging' technique for viewing Mars can only work near the terminator; thus in early May only regions between about 67°S and 74°S were visible in twilight images in the southern hemisphere, and a similar narrow latitude range could be imaged in the northern hemisphere. MOC cannot 'see' in the total darkness of full-borne night. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make video obtained by different image sensors complement each other, yielding video that is rich in information and suited to the human visual system. Infrared cameras penetrate harsh environments such as smoke, fog, and low light, but their ability to capture image detail is poor and does not suit the human visual system. Visible light imaging alone can produce detailed, high-resolution images suited to the human visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves complex algorithms with high computational load, large memory occupancy, and high clock-rate requirements; most implementations are in software (C, C++, etc.), with few based on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and a gray-level weighted average method implements the fusion on the hardware platform. The resulting fused image effectively increases the amount of information acquired.
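A minimal sketch of the gray-level weighted-average fusion rule named in the abstract, applied to already-registered frames (the 50/50 weighting is illustrative):

```python
import numpy as np

def weighted_fusion(ir: np.ndarray, visible: np.ndarray,
                    w_ir: float = 0.5) -> np.ndarray:
    """Pixel-wise gray-level weighted average of registered IR and
    visible frames; a simple rule well suited to FPGA implementation."""
    fused = w_ir * ir + (1.0 - w_ir) * visible
    return np.clip(fused, 0.0, 255.0).astype(np.uint8)

ir = np.full((4, 4), 200.0)     # bright thermal target
vis = np.full((4, 4), 60.0)     # detailed visible scene
print(weighted_fusion(ir, vis)[0, 0])   # -> 130
```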
Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles; ...
2016-05-16
Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m₀gδx. Here, the symbols δE, δx, m₀, and g are the energy resolution, the spatial resolution, the neutron rest mass, and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.
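The quoted energy resolution follows directly from the stated relation δE = m₀gδx; a quick numerical check with the neutron rest mass:

```python
M_NEUTRON = 1.675e-27    # neutron rest mass, kg
G = 9.81                 # gravitational acceleration, m/s^2
EV = 1.602e-19           # joules per electron-volt

delta_x = 15e-6          # 15 micrometer spatial resolution
delta_E = M_NEUTRON * G * delta_x
print(delta_E / EV * 1e12, "peV")   # ~1.5 peV, below the quoted 2 peV
```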
Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D.
Lasnier, C J; Allen, S L; Ellis, R E; Fenstermacher, M E; McLean, A G; Meyer, W H; Morris, K; Seppala, L G; Crabtree, K; Van Zeeland, M A
2014-11-01
An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.
The Use of Gamma-Ray Imaging to Improve Portal Monitor Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziock, Klaus-Peter; Collins, Jeff; Fabris, Lorenzo
2008-01-01
We have constructed a prototype, rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. Our Roadside Tracker uses automated target acquisition and tracking (TAT) software to identify and track vehicles in visible light images. The field of view of the visible camera overlaps with and is calibrated to that of a one-dimensional gamma-ray imager. The TAT code passes information on when vehicles enter and exit the system field of view and when they cross gamma-ray pixel boundaries. Based on this information, the gamma-ray imager "harvests" the gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. In this fashion we are able to generate vehicle-specific radiation signatures and avoid the source confusion problems that plague nonimaging approaches to the same problem.
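A schematic sketch of the "harvesting" step: gamma-ray counts are accumulated for each tracked vehicle only while the visible-light tracker reports it inside a given gamma-ray pixel (the event format and numbers are illustrative):

```python
from collections import defaultdict

# Gamma events: (time_s, gamma_pixel). Tracker output per vehicle:
# vehicle -> list of (t_enter, t_exit, gamma_pixel) intervals from the
# target acquisition and tracking (TAT) software.
events = [(0.10, 3), (0.20, 3), (0.35, 4), (0.50, 4), (0.45, 5)]
tracks = {"car_A": [(0.0, 0.3, 3), (0.3, 0.6, 4)],
          "car_B": [(0.4, 0.6, 5), (0.6, 0.8, 6)]}

signatures = defaultdict(int)
for t, pix in events:
    for vehicle, intervals in tracks.items():
        if any(t0 <= t < t1 and pix == p for t0, t1, p in intervals):
            signatures[vehicle] += 1      # harvest this count for the vehicle

print(dict(signatures))   # -> {'car_A': 4, 'car_B': 1}
```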
Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D
Lasnier, Charles J.; Allen, Steve L.; Ellis, Ronald E.; ...
2014-08-26
An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.
2004-09-07
Lonely Mimas swings around Saturn, seeming to gaze down at the planet's splendid rings. The outermost, narrow F ring is visible here and exhibits some clumpy structure near the bottom of the frame. The shadow of Saturn's southern hemisphere stretches almost entirely across the rings. Mimas is 398 kilometers (247 miles) wide. The image was taken with the Cassini spacecraft narrow angle camera on August 15, 2004, at a distance of 8.8 million kilometers (5.5 million miles) from Saturn, through a filter sensitive to visible red light. The image scale is 53 kilometers (33 miles) per pixel. Contrast was slightly enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06471
Instruments for Reading Direct-Marked Data-Matrix Symbols
NASA Technical Reports Server (NTRS)
Schramm, Harry F.; Corder, Eric L.
2006-01-01
Improved optoelectronic instruments (specially configured digital cameras) for reading direct-marked data-matrix symbols on the surfaces of optically reflective objects (including specularly reflective ones) are undergoing development. Data-matrix symbols are two-dimensional binary patterns that are used, like common bar codes, for automated identification of objects. The first data-matrix symbols were checkerboard-like patterns of black-and-white rectangles, typically existing in the forms of paint, ink, or detachable labels. The major advantage of direct marking (the marks are more durable than are painted or printed symbols or detachable labels) is offset by a major disadvantage (the marks generated by some marking methods do not provide sufficient contrast to be readable by optoelectronic instruments designed to read black-and-white data-matrix symbols). Heretofore, elaborate lighting, lensing, and software schemes have been tried in efforts to solve the contrast problem in direct-mark matrix-symbol readers. In comparison with prior readers based on those schemes, the readers now undergoing development are expected to be more effective while costing less. All of the prior direct-mark matrix-symbol readers are designed to be aimed perpendicularly to marked target surfaces, and they tolerate very little angular offset. However, the reader now undergoing development not only tolerates angular offset but depends on angular offset as a means of obtaining the needed contrast, as described below. The prototype reader (see Figure 1) includes an electronic camera in the form of a charge-coupled-device (CCD) image detector equipped with a telecentric lens. It also includes a source of collimated visible light and a source of collimated infrared light for illuminating a target. The visible and infrared illumination complement each other: the visible illumination is more useful for aiming the reader toward a target, while the infrared illumination is more useful for reading symbols on highly reflective surfaces. By use of beam splitters, the visible and infrared collimated lights are introduced along the optical path of the telecentric lens, so that the target is illuminated and viewed from the same direction.
Hubble Provides Infrared View of Jupiter's Moon, Ring, and Clouds
NASA Technical Reports Server (NTRS)
1997-01-01
Probing Jupiter's atmosphere for the first time, the Hubble Space Telescope's new Near Infrared Camera and Multi-Object Spectrometer (NICMOS) provides a sharp glimpse of the planet's ring, moon, and high-altitude clouds.
The presence of methane in Jupiter's hydrogen- and helium-rich atmosphere has allowed NICMOS to plumb Jupiter's atmosphere, revealing bands of high-altitude clouds. Visible light observations cannot provide a clear view of these high clouds because the underlying clouds reflect so much visible light that the higher-level clouds are indistinguishable from the lower layer. The methane gas between the main cloud deck and the high clouds absorbs the reflected infrared light, allowing those clouds that are above most of the atmosphere to appear bright. Scientists will use NICMOS to study the high-altitude portion of Jupiter's atmosphere; they will then analyze those images along with visible light information to compile a clearer picture of the planet's weather. Clouds at different levels tell unique stories. On Earth, for example, ice crystal (cirrus) clouds are found at high altitudes while water (cumulus) clouds are at lower levels. Besides showing details of the planet's high-altitude clouds, NICMOS also provides a clear view of the ring and the moon, Metis. Jupiter's ring plane, seen nearly edge-on, is visible as a faint line on the upper right portion of the NICMOS image. Metis can be seen in the ring plane (the bright circle on the ring's outer edge). The moon is 25 miles wide and about 80,000 miles from Jupiter. Because of the near-infrared camera's narrow field of view, this image is a mosaic constructed from three individual images taken Sept. 17, 1997. The color intensity was adjusted to accentuate the high-altitude clouds. The dark circle on the disk of Jupiter (center of image) is an artifact of the imaging system. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
The Two-faced Whirlpool Galaxy
2011-01-13
NASA image release January 13, 2011 These images by NASA's Hubble Space Telescope show off two dramatically different face-on views of the spiral galaxy M51, dubbed the Whirlpool Galaxy. The image above, taken in visible light, highlights the attributes of a typical spiral galaxy, including graceful, curving arms, pink star-forming regions, and brilliant blue strands of star clusters. In the image here, most of the starlight has been removed, revealing the Whirlpool's skeletal dust structure, as seen in near-infrared light. This new image is the sharpest view of the dense dust in M51. The narrow lanes of dust revealed by Hubble reflect the galaxy's moniker, the Whirlpool Galaxy, as if they were swirling toward the galaxy's core. To map the galaxy's dust structure, researchers collected the galaxy's starlight by combining images taken in visible and near-infrared light. The visible-light image captured only some of the light; the rest was obscured by dust. The near-infrared view, however, revealed more starlight because near-infrared light penetrates dust. The researchers then subtracted the total amount of starlight from both images to see the galaxy's dust structure. The red color in the near-infrared image traces the dust, which is punctuated by hundreds of tiny clumps of stars, each about 65 light-years wide. These stars have never been seen before. The star clusters cannot be seen in visible light because dense dust enshrouds them. The image reveals details as small as 35 light-years across. Astronomers expected to see large dust clouds, ranging from about 100 light-years to more than 300 light-years wide. Instead, most of the dust is tied up in smooth and diffuse dust lanes. An encounter with another galaxy may have prevented giant clouds from forming. Probing a galaxy's dust structure serves as an important diagnostic tool for astronomers, providing invaluable information on how the gas and dust collapse to form stars. Although Hubble is providing incisive views of the internal structure of galaxies such as M51, the planned James Webb Space Telescope (JWST) is expected to produce even crisper images. Researchers constructed the image by combining visible-light exposures from Jan. 18 to 22, 2005, with the Advanced Camera for Surveys (ACS), and near-infrared light pictures taken in December 2005 with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Credit: NASA, ESA, S. Beckwith (STScI), and the Hubble Heritage Team (STScI/AURA) The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Join us on Facebook
Spitzer Makes Invisible Visible
2004-04-13
Hidden behind a shroud of dust in the constellation Cygnus is a stellar nursery called DR21, which is giving birth to some of the most massive stars in our galaxy. Visible light images reveal no trace of this interstellar cauldron because of heavy dust obscuration. In fact, visible light is attenuated in DR21 by a factor of more than 10^40. New images from NASA's Spitzer Space Telescope allow us to peek behind the cosmic veil and pinpoint one of the most massive natal stars yet seen in our Milky Way galaxy. The never-before-seen star is 100,000 times as bright as the Sun. Also revealed for the first time is a powerful outflow of hot gas emanating from this star and bursting through a giant molecular cloud. The colorful image is a large-scale composite mosaic assembled from data collected at a variety of different wavelengths. Views at visible wavelengths appear blue, near-infrared light is depicted as green, and mid-infrared data from the InfraRed Array Camera (IRAC) aboard NASA's Spitzer Space Telescope is portrayed as red. The result is a contrast between structures seen in visible light (blue) and those observed in the infrared (yellow and red). A quick glance shows that most of the action in this image is revealed to the unique eyes of Spitzer. The image covers an area about two times that of a full moon. http://photojournal.jpl.nasa.gov/catalog/PIA05734
NASA Astrophysics Data System (ADS)
Kawashima, Natsumi; Hosono, Satsuki; Ishimaru, Ichiro
2016-05-01
We proposed a snapshot-type Fourier spectroscopic imager for smartphones, introduced in the first report at this conference. For spectroscopic component analysis, such as non-invasive blood glucose sensing, the diffusely reflected light from inner layers of human skin is too weak for conventional hyperspectral cameras, such as the AOTF (Acousto-Optic Tunable Filter) type. Furthermore, it is well known that the spectral absorption of mid-infrared light, or Raman spectroscopy especially in the long-wavelength region, is effective for quantitatively distinguishing specific biomedical components, such as glucose concentration. The main issue, however, is that the photon energies of mid-infrared light and the intensities of Raman scattering are extremely weak. To improve the sensitivity of our spectroscopic imager, we proposed a wide-field-stop and beam-expansion method. Our line spectroscopic imager introduces a single slit as a field stop on the conjugate objective plane. To increase the detected light intensity, a wider slit can be used, at the cost of spatial resolution. Because our method is based on wavefront-division interferometry, the problem is that a wider single slit makes the diffraction angle narrower; the resulting narrower collimated objective beams degrade the visibility of the interferograms. By installing a relative-inclination phase shifter on the optical Fourier-transform plane of an infinity-corrected optical system, the two collimated half-fluxes of the objective beam originating from a single bright point on the object surface pass through a wedge prism and a cuboid glass block, respectively. These two beams interfere with each other and form an interferogram as a spatial fringe pattern. We therefore installed a concave cylindrical lens between the widened slit and the objective lens as a beam expander. We successfully obtained the spectroscopic signature of hemoglobin from light reflected from human fingers.
Trade-off between TMA and RC configurations for JANUS camera
NASA Astrophysics Data System (ADS)
Greggio, D.; Magrin, D.; Munari, M.; Paolinetti, R.; Turella, A.; Zusi, M.; Cremonese, G.; Debei, S.; Della Corte, V.; Friso, E.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Mugnuolo, R.; Olivieri, A.; Palumbo, P.; Ragazzoni, R.; Schmitz, N.
2016-07-01
JANUS (Jovis Amorum Ac Natorum Undique Scrutator) is a high-resolution visible camera designed for the ESA space mission JUICE (Jupiter Icy moons Explorer). The main scientific goal of JANUS is to observe the surface of the Jupiter satellites Ganymede and Europa in order to characterize their physical and geological properties. During the design phases, we proposed two possible optical configurations, a Three Mirror Anastigmat (TMA) and a Ritchey-Chrétien (RC), both matching the performance requirements. Here we describe the two optical solutions and compare them in terms of achieved optical quality, sensitivity to misalignments, and stray-light performance.
Thermal photogrammetric imaging: A new technique for monitoring dome eruptions
NASA Astrophysics Data System (ADS)
Thiele, Samuel T.; Varley, Nick; James, Mike R.
2017-05-01
Structure-from-motion (SfM) algorithms greatly facilitate the generation of 3-D topographic models from photographs and can form a valuable component of hazard monitoring at active volcanic domes. However, model generation from visible imagery can be prevented due to poor lighting conditions or surface obscuration by degassing. Here, we show that thermal images can be used in a SfM workflow to mitigate these issues and provide more continuous time-series data than visible-light equivalents. We demonstrate our methodology by producing georeferenced photogrammetric models from 30 near-monthly overflights of the lava dome that formed at Volcán de Colima (Mexico) between 2013 and 2015. Comparison of thermal models with equivalents generated from visible-light photographs from a consumer digital single lens reflex (DSLR) camera suggests that, despite being less detailed than their DSLR counterparts, the thermal models are more than adequate reconstructions of dome geometry, giving volume estimates within 10% of those derived using the DSLR. Significantly, we were able to construct thermal models in situations where degassing and poor lighting prevented the construction of models from DSLR imagery, providing substantially better data continuity than would have otherwise been possible. We conclude that thermal photogrammetry provides a useful new tool for monitoring effusive volcanic activity and assessing associated volcanic risks.
Light field imaging and application analysis in THz
NASA Astrophysics Data System (ADS)
Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin
2018-01-01
The light field encodes both direction and location information, and light field imaging can capture the whole light field in a single exposure. We adopt the four-dimensional light field function model, represented by a two-plane parameterization, proposed by Levoy. The light field can be acquired using a microlens array, a camera array, or a coded mask. From the captured light field data we synthesize light field images. Processing techniques for light field data include refocused rendering, synthetic aperture imaging, and microscopic imaging. Introducing light field imaging into the THz regime makes 3D imaging more efficient than conventional THz 3D imaging techniques. Compared with visible light field imaging, its advantages include a large depth of field, a wide dynamic range, and true three-dimensional imaging. It has broad application prospects.
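As a rough illustration of the two-plane model and the refocused-rendering step mentioned above, the sketch below applies standard shift-and-add refocusing to a 4-D light field L(u, v, s, t); the array shape, the integer-pixel shifts, and the refocus parameter alpha are illustrative assumptions, not details from the paper.

import numpy as np

def refocus(lightfield, alpha):
    # Shift-and-add refocusing of a 4-D light field sampled on the
    # two-plane parameterization: angular coordinates (u, v) by
    # spatial coordinates (s, t).
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is shifted in proportion to its
            # offset from the central view, then accumulated; alpha
            # selects the depth of the synthetic focal plane.
            du = (u - U // 2) * (1.0 - 1.0 / alpha)
            dv = (v - V // 2) * (1.0 - 1.0 / alpha)
            view = np.roll(lightfield[u, v], int(round(du)), axis=0)
            out += np.roll(view, int(round(dv)), axis=1)
    return out / (U * V)

# Example: refocus a synthetic 7x7-view light field at alpha = 0.8.
lf = np.random.rand(7, 7, 64, 64)
image = refocus(lf, alpha=0.8)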
Penning plasma based simultaneous light emission source of visible and VUV lights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vyas, G. L., E-mail: glvyas27@gmail.com; Prakash, R.; Pal, U. N.
In this paper, a laboratory-based Penning plasma discharge source is reported, which has been developed in two anode configurations and is able to produce visible and VUV light simultaneously. The source has a facility for simultaneous diagnostics using a Langmuir probe and optical emission spectroscopy. The two anode configurations, namely double-ring and rectangular, have been studied and compared to determine the optimum geometry for efficient light emission and recording. The plasma is produced using helium gas and an admixture of three noble gases (helium, neon, and argon). The source is capable of producing eight spectral lines for pure helium in the VUV range from 20 to 60 nm, and a total of 24 spectral lines covering the wavelength range 20-106 nm for the admixture of gases. The large range of VUV lines is generated from the gaseous admixture rather than from sputtered materials. The recorded spectra show that the plasma light radiation in both the visible and VUV ranges is larger in the double-ring configuration than in the rectangular configuration at the same discharge operating conditions. To understand the difference more clearly, imaging of the discharge using an ICCD camera and particle-in-cell simulation using VORPAL have also been carried out. The effects of ion diffusion, metastable collisions with the anode wall, and nonlinear effects are correlated to explain the results.
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P.; Dehn, J.
2014-12-01
Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts, that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June, 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
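A minimal sketch of the automated brightness check the abstract describes; the webcam URL, alert threshold, polling interval, and baseline-tracking rule are placeholders for illustration, not values from the study.

import io
import time
import urllib.request

import numpy as np
from PIL import Image

WEBCAM_URL = "http://example.org/volcano-cam/latest.jpg"  # placeholder
THRESHOLD = 15.0   # brightness increase (gray levels) that raises a flag
INTERVAL = 60      # seconds between polls

def mean_brightness(url):
    # Fetch the current frame and reduce it to one luminance number.
    data = urllib.request.urlopen(url).read()
    frame = Image.open(io.BytesIO(data)).convert("L")
    return float(np.asarray(frame).mean())

baseline = mean_brightness(WEBCAM_URL)
while True:
    time.sleep(INTERVAL)
    current = mean_brightness(WEBCAM_URL)
    if current - baseline > THRESHOLD:
        print("Possible activity: brightness %.1f vs baseline %.1f"
              % (current, baseline))
    # Let the baseline drift slowly so diurnal lighting is absorbed.
    baseline = 0.95 * baseline + 0.05 * current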
Baby Picture of our Solar System
NASA Technical Reports Server (NTRS)
2007-01-01
A rare, infrared view of a developing star and its flaring jets taken by NASA's Spitzer Space Telescope shows us what our own solar system might have looked like billions of years ago. In visible light, this star and its surrounding regions are completely hidden in darkness. Stars form out of spinning clouds, or envelopes, of gas and dust. As the envelopes flatten and collapse, jets of gas stream outward and a swirling disk of planet-forming material takes shape around the forming star. Eventually, the envelope and jets disappear, leaving a newborn star with a suite of planets. This process takes millions of years. The Spitzer image shows a developing sun-like star, called L1157, that is only thousands of years old (for comparison, our solar system is around 4.5 billion years old). Why is the young system only visible in infrared light? The answer has to do with the fact that stars are born in the darkest and dustiest corners of space, where little visible light can escape. But the heat, or infrared light, of an object can be detected through the dust. In Spitzer's infrared view of L1157, the star itself is hidden but its envelope is visible in silhouette as a thick black bar. While Spitzer can peer through this region's dust, it cannot penetrate the envelope itself. Hence, the envelope appears black. The thickest part of the envelope can be seen as the black line crossing the giant jets. This L1157 portrait provides the first clear look at a stellar envelope that has begun to flatten. The color white shows the hottest parts of the jets, with temperatures around 100 degrees Celsius (212 degrees Fahrenheit). Most of the material in the jets, seen in orange, is roughly zero degrees on the Celsius and Fahrenheit scales. The reddish haze all around the picture is dust. The white dots are other stars, mostly in the background. L1157 is located 800 light-years away in the constellation Cepheus. This image was taken by Spitzer's infrared array camera. Infrared light of 8 microns is colored red; 4.5-micron infrared light is green; and 3.6-micron infrared light is blue. The visible-light picture is from the Palomar Observatory-Space Telescope Science Institute Digitized Sky Survey. Blue visible light is blue; red visible light is green, and near-infrared light is red. The artist's animation begins by showing a dark and dusty corner of space where little visible light can escape. The animation then transitions to the infrared view taken by NASA's Spitzer Space Telescope, revealing the embryonic star and its dramatic jets.
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Verma, Anurag
2016-05-01
The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a repeat cycle of 5 days. The AWiFS camera consists of four spectral bands: three in the visible and near-IR and one in the short-wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning using a linear-array silicon charge-coupled device (CCD) based Focal Plane Array (FPA). An On-Board Calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA during the entire mission life. Four LEDs are operated in constant-current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λp = 650 nm) for the development of the On-Board Calibration unit of the AWiFS camera of Resourcesat-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results for the CCD output profile at different LED combinations in constant-current mode.
Time-of-flight range imaging for underwater applications
NASA Astrophysics Data System (ADS)
Merbold, Hannes; Catregn, Gion-Pol; Leutenegger, Tobias
2018-02-01
Precise and low-cost range imaging in underwater settings with object distances on the meter level is demonstrated. This is addressed through silicon-based time-of-flight (TOF) cameras operated with light emitting diodes (LEDs) at visible, rather than near-IR wavelengths. We find that the attainable performance depends on a variety of parameters, such as the wavelength dependent absorption of water, the emitted optical power and response times of the LEDs, or the spectral sensitivity of the TOF chip. An in-depth analysis of the interplay between the different parameters is given and the performance of underwater TOF imaging using different visible illumination wavelengths is analyzed.
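For context, a continuous-wave TOF camera of the kind evaluated here infers distance from the phase shift between the emitted and received modulated light, d = c·Δφ/(4π·f_mod); the sketch below applies that standard relation, with the modulation frequency and the use of the water-slowed light speed as illustrative assumptions rather than the paper's parameters.

import math

C_VACUUM = 299_792_458.0   # speed of light in vacuum (m/s)
N_WATER = 1.33             # refractive index of water (assumed constant)

def cw_tof_distance(phase_rad, f_mod_hz, n=N_WATER):
    # Distance from the measured phase shift of a CW-TOF camera;
    # dividing by n accounts for the reduced light speed under water.
    c = C_VACUUM / n
    return c * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz, n=N_WATER):
    # Phase wraps at 2*pi, so measured ranges repeat every c/(2*f_mod).
    return (C_VACUUM / n) / (2.0 * f_mod_hz)

# Example: 20 MHz modulation and a quarter-cycle phase shift.
print(cw_tof_distance(math.pi / 2, 20e6))   # about 1.4 m
print(unambiguous_range(20e6))              # about 5.6 m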
Medium resolution spectra of the shuttle glow in the visible region of the spectrum
NASA Technical Reports Server (NTRS)
Viereck, R. A.; Murad, E.; Pike, C. P.; Mende, S. B.; Swenson, G. R.; Culbertson, F. L.; Springer, B. C.
1992-01-01
Recent spectral measurements of the visible shuttle glow (λ = 400-800 nm) at medium resolution (1 nm) reveal the same featureless continuum with a maximum near 680 nm that was reported previously. This is also in good agreement with recent laboratory experiments that attribute the glow to the emissions of NO2 formed by the recombination of O + NO. The data presented here were taken from the aft flight deck with a hand-held spectrograph and from the shuttle bay with a low-light-level television camera. Shuttle glow images and spectra are presented and compared with laboratory data and theory.
Utilization of Android-base Smartphone to Support Handmade Spectrophotometer : A Preliminary Study
NASA Astrophysics Data System (ADS)
Ujiningtyas, R.; Apriliani, E.; Yohana, I.; Afrillianti, L.; Hikmah, N.; Kurniawan, C.
2018-04-01
A visible spectrophotometer is a powerful instrument in chemistry. We can identify chemical species based on their specific colors, and we can determine the amount of a species using the spectrophotometer. However, the availability of visible spectrophotometers is still limited, particularly in education, which limits students' opportunities to gain hands-on experience with the instrumentation. On the other hand, communication technology creates an opportunity for students to explore their smartphones' features, mainly the camera. The objective of this research is to build an application that utilizes the camera as the detector for a handmade visible spectrophotometer. The software was developed for Android, and we have named it Spectrophone®. The spectrophotometer consists of an acrylic body, a sample compartment, and a light source (a USB LED lamp powered by a 6600 mAh battery). Before reaching the sample, the light is filtered through colored mica plastic. The Spectrophone® app uses the camera to detect color via its RGB composition. Differently colored solutions show different RGB compositions depending on concentration and the specific absorbance wavelength. One color channel (R, G, or B) is then converted to an absorbance as -log(Cs/Co), where Cs and Co are the color values of the sample and the blank, respectively. A calibration curve of methylene blue was measured. For the red (R) channel, the regression was less linear (R² = 0.78) than the result from a UV-Vis spectrophotometer, model Spectroquant Pharo 300 (R² = 0.8053). This result shows that the Spectrophone® still needs to be evaluated and corrected. One problem we identified is that the sampling area for the RGB reading is too wide, which affects the color composition reading. We will fix this problem and then apply the Spectrophone® on a wider scale.
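A minimal sketch of the channel-to-absorbance conversion described above, assuming the -log(Cs/Co) relation quoted in the abstract; the file names and region-of-interest coordinates are placeholders.

import numpy as np
from PIL import Image

def channel_mean(path, channel=0, roi=(100, 100, 140, 140)):
    # Mean of one RGB channel (0=R, 1=G, 2=B) inside a small region of
    # interest. A tight ROI matters: the abstract notes that too wide a
    # sampling area distorts the color reading.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    x0, y0, x1, y1 = roi
    return img[y0:y1, x0:x1, channel].mean()

def absorbance(sample_path, blank_path, channel=0):
    cs = channel_mean(sample_path, channel)   # sample color value
    co = channel_mean(blank_path, channel)    # blank (reference) value
    return -np.log10(cs / co)

# Example (placeholder file names):
# a = absorbance("methylene_blue_sample.jpg", "blank.jpg", channel=0)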
In vivo performance of photovoltaic subretinal prosthesis
NASA Astrophysics Data System (ADS)
Mandel, Yossi; Goetz, George; Lavinsky, Daniel; Huie, Phil; Mathieson, Keith; Wang, Lele; Kamins, Theodore; Manivanh, Richard; Harris, James; Palanker, Daniel
2013-02-01
We have developed a photovoltaic retinal prosthesis, in which camera-captured images are projected onto the retina using pulsed near-IR light. Each pixel in the subretinal implant directly converts pulsed light into local electric current to stimulate the nearby inner retinal neurons. 30 μm-thick implants with pixel sizes of 280, 140 and 70 μm were successfully implanted in the subretinal space of wild type (WT, Long-Evans) and degenerate (Royal College of Surgeons, RCS) rats. Optical Coherence Tomography and fluorescein angiography demonstrated normal retinal thickness and healthy vasculature above the implants at 6 months of follow-up. Stimulation with NIR pulses over the implant elicited robust visual evoked potentials (VEP) at safe irradiance levels. Thresholds increased with decreasing pulse duration and pixel size: with 10 ms pulses, from 0.5 mW/mm² on 280 μm pixels to 1.1 mW/mm² on 140 μm pixels and 2.1 mW/mm² on 70 μm pixels. Latency of the implant-evoked VEP was at least 30 ms shorter than that of responses evoked by visible light, due to the lack of phototransduction. As with visible-light stimulation in normally sighted animals, the amplitude of the implant-induced VEP increased logarithmically with peak irradiance and pulse duration. It decreased with increasing frequency, similar to the visible-light response in the range of 2-10 Hz, but decreased more slowly than the visible-light response at 20-40 Hz. The modular design of the photovoltaic arrays allows scalability to a large number of pixels and, combined with the ease of implantation, offers a promising approach to restoration of sight in patients blinded by retinal degenerative diseases.
Hubble Space Telescope, Faint Object Camera
NASA Technical Reports Server (NTRS)
1981-01-01
This drawing illustrates the Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data that is free of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than that visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.
Invisible ink mark detection in the visible spectrum using absorption difference.
Lee, Joong; Kong, Seong G; Kang, Tae-Yi; Kim, Byounghyun; Jeon, Oc-Yeub
2014-03-01
One popular technique in gambling fraud involves the use of invisible ink marks printed on the back surface of playing cards. Such covert patterns are transparent in the visible spectrum and therefore invisible to unaided human eyes. Invisible patterns can be made visible with ultraviolet (UV) illumination or a CCD camera fitted with an infrared (IR) filter, depending on the type of ink material used. Cheating gamblers often wear contact lenses or eyeglasses made of IR or UV filters to recognize the secret marks on the playing cards. This paper presents an image processing technique to reveal invisible ink patterns in the visible spectrum without the aid of special equipment such as UV lighting or IR filters. A printed invisible ink pattern leaves a thin coating on the surface with a different refractive index for different wavelengths of light, which results in color dispersion or absorption difference. The proposed method finds the differences between color components caused by this absorption difference to detect invisible ink patterns on the surface. Experimental results show that the proposed scheme is effective for both UV-active and IR-active invisible ink materials.
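A hedged sketch of the channel-difference idea the paper describes: marks are sought where absorption differs between color channels. The smoothing kernel, high-pass normalization, and threshold below are illustrative assumptions, not the paper's tuned procedure.

import cv2
import numpy as np

def reveal_marks(path, blur=15, thresh=12.0):
    # The ink coating absorbs some wavelengths slightly more than
    # others, so differences between detail-enhanced channels can
    # expose patterns that are invisible to the eye.
    img = cv2.imread(path).astype(np.float32)
    b, g, r = cv2.split(img)

    def detail(c):
        # High-pass each channel to suppress ordinary shading.
        return c - cv2.GaussianBlur(c, (blur, blur), 0)

    diff = cv2.absdiff(detail(r), detail(b)) + cv2.absdiff(detail(g), detail(b))
    return (diff > thresh).astype(np.uint8) * 255  # binary mark mask

# mask = reveal_marks("card_back.png")
# cv2.imwrite("marks.png", mask)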
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. We describe a software package capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial System (UAS) data links to digital single-lens reflex (DSLR) cameras. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night-vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
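The paper's own fusion mechanism is not spelled out in the abstract; as a stand-in, the sketch below merges a bracketed exposure set with OpenCV's Mertens exposure fusion, which weights each pixel by contrast, saturation, and well-exposedness so that detail survives in both shadowed and washed-out regions.

import cv2

def fuse_exposures(paths):
    # Merge bracketed exposures of a mixed-contrast scene into one
    # displayable frame; input images must share the same size.
    frames = [cv2.imread(p) for p in paths]
    fused = cv2.createMergeMertens().process(frames)  # float, ~[0, 1]
    return (fused * 255.0).clip(0, 255).astype("uint8")

# result = fuse_exposures(["under.jpg", "mid.jpg", "over.jpg"])
# cv2.imwrite("fused.jpg", result)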
NASA Astrophysics Data System (ADS)
Heldmann, Jennifer L.; Lamb, Justin; Asturias, Daniel; Colaprete, Anthony; Goldstein, David B.; Trafton, Laurence M.; Varghese, Philip L.
2015-07-01
The LCROSS (Lunar Crater Observation and Sensing Satellite) impacted the Cabeus crater near the lunar South Pole on 9 October 2009 and created an impact plume that was observed by the LCROSS Shepherding Spacecraft. Here we analyze data from the ultraviolet-visible spectrometer and visible context camera aboard the spacecraft. We use these data to constrain a numerical model to understand the physical evolution of the resultant plume. The UV-visible light curve peaks in brightness 18 s after impact and then decreases in radiance but never returns to the pre-impact radiance value for the ∼4 min of observation by the Shepherding Spacecraft. The blue:red spectral ratio increases in the first 10 s, decreases over the following 50 s, remains constant for approximately 150 s, and then begins to increase again ∼180 s after impact. Constraining the modeling results with spacecraft observations, we conclude that lofted dust grains remained suspended above the lunar surface for the entire 250 s of observation after impact. The impact plume was composed of both a high angle spike and low angle plume component. Numerical modeling is used to evaluate the relative effects of various plume parameters to further constrain the plume properties when compared with the observational data. Dust particle sizes lofted above the lunar surface were micron to sub-micron in size. Water ice particles were also contained within the ejecta cloud and simultaneously photo-dissociated and sublimated after reaching sunlight.
Infrared and visible cooperative vehicle identification markings
NASA Astrophysics Data System (ADS)
O'Keefe, Eoin S.; Raven, Peter N.
2006-05-01
Airborne surveillance helicopters and aeroplanes used by security and defence forces around the world increasingly rely on their visible band and thermal infrared cameras to prosecute operations such as the co-ordination of police vehicles during the apprehension of a stolen car, or direction of all emergency services at a serious rail crash. To perform their function effectively, it is necessary for the airborne officers to unambiguously identify police and the other emergency service vehicles. In the visible band, identification is achieved by placing high contrast symbols and characters on the vehicle roof. However, at the wavelengths at which thermal imagers operate, the dark and light coloured materials have similar low reflectivity and the visible markings cannot be discerned. Hence there is a requirement for a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared, over a large range of viewing angles. In this paper we discuss the design, detailed angle-dependent spectroscopic characterisation and operation of novel visible and infrared vehicle marking materials, and present airborne IR and visible imagery of materials in use.
Simulation of laser beam reflection at the sea surface: modeling and validation
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Repasi, Endre
2013-06-01
A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable for the pre-calculation of images for cameras operating in different spectral wavebands (visible, short-wave infrared), for a bistatic configuration of laser source and receiver, and for different atmospheric conditions. In the visible waveband, the calculated total detected power of reflected laser light from a 660 nm laser source is compared with data collected in a field trial. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of the laser beam reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth, wind-driven gravity waves. To predict the view of a camera, the sea surface radiance must be calculated for the specific waveband. Additionally, the radiance of laser light specularly reflected at the wind-roughened sea surface is modeled using an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). Validation of the simulation results is a prerequisite before applying the computer simulation to maritime laser applications. For validation purposes, data (images and meteorological data) were selected from field measurements, using a 660 nm cw laser diode to produce laser beam reflections at the water surface and recording images with a TV camera. The validation is done by numerical comparison of the measured total laser power extracted from recorded images with the corresponding simulation results. The results of the comparison are presented for different incident (zenith/azimuth) angles of the laser beam.
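Statistical sea-surface BRDFs of the kind mentioned above are commonly built on a wind-dependent distribution of surface slopes; the sketch below evaluates the isotropic Cox-Munk form as one plausible ingredient. The abstract does not state which slope statistics the authors use, so this choice is an assumption.

import numpy as np

def cox_munk_slope_pdf(zx, zy, wind_speed):
    # Isotropic Cox-Munk probability density of sea-surface slopes
    # (zx, zy); the total mean-square slope grows linearly with the
    # wind speed (m/s) measured at 12.5 m height.
    mss = 0.003 + 0.00512 * wind_speed
    return np.exp(-(zx**2 + zy**2) / mss) / (np.pi * mss)

# Example: density of a level facet versus a tilted one at 10 m/s wind.
print(cox_munk_slope_pdf(0.0, 0.0, 10.0))
print(cox_munk_slope_pdf(0.2, 0.0, 10.0))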
NASA Astrophysics Data System (ADS)
Rechmann, P.; Liou, Shasan W.; Rechmann, Beate M.; Featherstone, John D.
2014-02-01
Gingivitis due to microbial plaque and calculus can, if left untreated, lead over time to advanced periodontal disease with non-physiological pocket formation. Removal of microbial plaque in the gingivitis stage typically restores gingival health. The SOPROCARE camera system emits blue light at 450 nm wavelength using three blue diodes. The 450 nm wavelength lies in the non-ionizing, visible region of the spectrum and is thus not hazardous. The assumption is that, using the SOPROCARE camera in perio-mode, inflamed gingiva can easily be observed and inflammation can be scored owing to fluorescence from porphyrins in blood, and that illumination of microbial plaque with blue light induces fluorescence due to the bacteria and porphyrin content of the plaque, helping to make microbial plaque and calculus visible. The aim of this study, with 55 subjects, was to evaluate the ability of the SOPROCARE fluorescence camera system to detect, visualize, and score microbial plaque in comparison to the Turesky modification of the Quigley and Hein plaque index. A second goal was to detect and score gingival inflammation and correlate the findings with the Silness and Löe gingival inflammation index. The study showed that scoring of microbial plaque and of gingival inflammation, at levels similar to the established Turesky-modified Quigley-Hein index and the Silness and Löe gingival inflammation index, can easily be done using the SOPROCARE fluorescence system in perio-mode. Linear regression fits between the different clinical indices and SOPROCARE scores in fluorescence perio-mode revealed the system's capacity for effective discrimination between scores.
Digital holographic interferometry applied to the investigation of ignition process.
Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B
2017-06-12
We use the digital holographic interferometry (DHI) technique to display the early ignition process for a butane-air mixture flame. Because such an event occurs in a short time (few milliseconds), a fast CCD camera is used to study the event. As more detail is required for monitoring the temporal evolution of the process, less light coming from the combustion is captured by the CCD camera, resulting in a deficient and underexposed image. Therefore, the CCD's direct observation of the combustion process is limited (down to 1000 frames per second). To overcome this drawback, we propose the use of DHI along with a high power laser in order to supply enough light to increase the speed capture, thus improving the visualization of the phenomenon in the initial moments. An experimental optical setup based on DHI is used to obtain a large sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion which slightly emits visible light, and a second stage induced by variations in temperature when the flame is emerging. While the last stage can be directly monitored by the CCD camera, the first stage is hardly detected by direct observation, and DHI clearly evidences this process. Furthermore, our method can be easily adapted for visualizing other types of fast processes.
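As a rough illustration of how DHI turns two object states into fringes: the wrapped phase difference between two reconstructed complex fields U1 and U2 is arg(U1·conj(U2)). The numerical reconstruction of the fields from the recorded holograms is assumed already done; the synthetic fields below are only for demonstration.

import numpy as np

def phase_map(u1, u2):
    # Wrapped phase difference, in (-pi, pi], between two reconstructed
    # fields (e.g., before and during ignition). The fringes trace
    # refractive-index changes along the optical path, here driven by
    # temperature variations around the emerging flame.
    return np.angle(u1 * np.conj(u2))

# Synthetic demonstration: a thermal-lens-like quadratic phase.
x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
u1 = np.ones_like(X, dtype=complex)
u2 = np.exp(1j * 8.0 * (X**2 + Y**2))
fringes = phase_map(u1, u2)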
NASA Technical Reports Server (NTRS)
2007-01-01
[Figure panels removed: Visible Light (Figure 2), Infrared (IRAC) (Figure 3), Combined (Figure 4); see original site.] Two rambunctious young stars are destroying their natal dust cloud with powerful jets of radiation, in an infrared image from NASA's Spitzer Space Telescope. The stars are located approximately 600 light-years away in a cosmic cloud called BHR 71. In visible light (left panel), BHR 71 is just a large black structure. The burst of yellow light toward the bottom of the cloud is the only indication that stars might be forming inside. In infrared light (center panel), the baby stars appear as the bright yellow smudges toward the center. Both of these yellow spots have wisps of green shooting out of them. The green wisps reveal the beginning of a jet. Like a rainbow, the jet begins as green, then transitions to orange and red toward the end. The combined visible-light and infrared composite (right panel) shows that a young star's powerful jet is responsible for the rupture at the bottom of the dense cloud in the visible-light image. Astronomers know this because the burst of light in the visible-light image overlaps exactly with a jet spouting out of the left star in the infrared image. The jets' changing colors reveal a cooling effect, and may suggest that the young stars are spouting out radiation in regular bursts. The green tints at the beginning of the jet reveal very hot hydrogen gas, the orange shows warm gas, and the reddish wisps at the end represent the coolest gas. The fact that gas toward the beginning of the jet is hotter than gas near the middle suggests that the stars must give off regular bursts of energy, and the material closest to the star is being heated by shockwaves from a recent stellar outburst. Meanwhile, the tints of orange reveal gas that is currently being heated by shockwaves from a previous stellar outburst. By the time these shockwaves reach the end of the jet, they have slowed down so significantly that the gas is only heated a little, and looks red. The combination of views also brings out some striking details that evaded visible-light detection. For example, the yellow dots scattered throughout the image are actually young stars forming inside BHR 71. Spitzer also uncovered another young star with jets, located to the right of the powerful jet seen in the visible-light image. Spitzer can see details that visible-light telescopes cannot, because its infrared instruments are sensitive to heat. The infrared image is made up of data from Spitzer's infrared array camera. Blue shows infrared light at 3.6 microns, green is light at 4.5 microns, and red is light at 8.0 microns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Connor, J., Cradick, J.
In fiscal year 2012, it was desired to combine a visible spectrometer with a streak camera to form a diagnostic system for recording time-resolved spectra generated in light gas gun experiments. Acquiring a new spectrometer was an option, but it was possible to borrow an existing unit for a period of months, which would be sufficient to evaluate it both off-line and in gas gun shots. If it proved adequate for this application, it could be duplicated (with possible modifications); if not, such testing would help determine the needed specifications for another model. This report describes the evaluation of the spectrometer (separately and combined with the NSTec LO streak camera) for this purpose. Spectral and temporal resolutions were of primary interest. The first was measured with a monochromatic laser input. The second was ascertained from the combination of the spectrometer's spatial resolution in the time-dispersive direction and the streak camera's intrinsic temporal resolution. System responsivity was also important, and this was investigated by measuring the response of the spectrometer/camera system to black-body input (the gas gun experiments are expected to be similar to a 3000 K black body), as well as by measuring the throughput of the spectrometer separately over a range of visible light provided by a monochromator. The flat field (in wavelength) was also measured, and the final part of the evaluation was actual fielding on two gas gun shots. No firm specifications for spectral or temporal resolution were defined, but these were desired to be in the 1-2 nm and 1-2 ns ranges, respectively, if possible. As seen below, these values were met or nearly met, depending on wavelength. Other performance parameters were also not given as threshold requirements, but the evaluations performed with the laser and black body, together with the successful gas gun shots, indicate that the spectrometer is adequate for this purpose. Even so, some relatively minor opportunities for improvement were noticed, and these were documented for incorporation into any near-duplicate spectrometer that might be fabricated in the future.
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the suggestion, realization, and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, especially of its combined imaging, detection, sampling, and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing, and quantization noises. The method uses the still camera's automatic working regime and a static, two-dimensional, spatially continuous, light-reflecting random target with white-noise properties. The theoretical justification of the random-target method is also given, exploiting a proposed simulation model of the linear optical-intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the measurable output and input power spectral densities. The random-target data and resultant image data were obtained and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples and the other results of the performed measurements demonstrate sufficient repeatability and the acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
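A rough numpy sketch of the PSD-ratio idea: with a white-noise random target, the camera MTF can be estimated as the square root of the ratio of output to input power spectral densities, normalized at the lowest nonzero frequency. Windowing, smoothing, and the treatment of photodetector noise are simplified away here, so this is an assumption-laden toy version of the method.

import numpy as np

def mtf_from_random_target(captured, target):
    # captured: image of the random target recorded by the camera.
    # target:   the known target pattern sampled on the same grid.
    def psd(rows):
        rows = rows - rows.mean(axis=1, keepdims=True)  # drop per-row mean
        spec = np.abs(np.fft.rfft(rows, axis=1)) ** 2
        return spec.mean(axis=0)                        # average over rows

    ratio = psd(captured)[1:] / psd(target)[1:]   # skip the empty DC bin
    mtf = np.sqrt(ratio)
    return mtf / mtf[0]   # normalize at the lowest nonzero frequency

# Synthetic check: blur a white-noise target with a 5-pixel boxcar.
rng = np.random.default_rng(0)
target = rng.standard_normal((256, 256))
kernel = np.ones(5) / 5.0
captured = np.apply_along_axis(
    lambda r: np.convolve(r, kernel, mode="same"), 1, target)
print(mtf_from_random_target(captured, target)[:8])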
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P. W.; Dehn, J.
2015-12-01
The ability to detect and monitor precursory events, thermal signatures, and ongoing volcanic activity in near-realtime is an invaluable tool. Volcanic hazards often range from low-level lava effusion to large explosive eruptions, easily capable of ejecting ash to aircraft cruise altitudes. Using ground-based remote sensing to detect and monitor this activity is essential, but the required equipment is often expensive and difficult to maintain, which increases the risk to public safety and the likelihood of financial impact. Our investigation explores the use of 'off the shelf' cameras, ranging from computer webcams to low-light security cameras, to monitor volcanic incandescent activity in near-realtime. These cameras are ideal as they operate in the visible and near-infrared (NIR) portions of the electromagnetic spectrum, are relatively cheap to purchase, consume little power, are easily replaced, and can provide telemetered, near-realtime data. We focus on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate each image according to pixel brightness, in order to automatically detect and identify increases in potentially hazardous activity. The cameras used here range in price from $0 to $1,000, and the scripts are written in Python, an open-source programming language, to reduce the overall cost to potential users and increase the accessibility of these tools, particularly in developing nations. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures to be correlated with pixel brightness. Data collected from several volcanoes, (1) Stromboli, Italy; (2) Shiveluch, Russia; (3) Fuego, Guatemala; and (4) Popocatépetl, México, along with campaign data from Stromboli (June 2013) and laboratory tests, are presented here.
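A hedged sketch of the brightness-to-temperature step mentioned above: pixel brightness from the low-light camera is regressed against temperatures from a collocated thermal infrared camera. The calibration samples and the log-linear model form below are invented for illustration, not the authors' data.

import numpy as np

# Assumed collocated calibration samples: low-light pixel brightness
# (0-255) against thermal-camera temperature (degrees C).
brightness = np.array([40.0, 70.0, 110.0, 160.0, 210.0, 245.0])
temperature = np.array([450.0, 520.0, 600.0, 700.0, 820.0, 900.0])

# Fit temperature against log-brightness, a simple empirical model.
coeffs = np.polyfit(np.log(brightness), temperature, 1)

def estimate_temperature(pixel_value):
    # Approximate incandescence temperature for one pixel brightness.
    return np.polyval(coeffs, np.log(pixel_value))

print(round(estimate_temperature(128.0)))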
Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System
NASA Astrophysics Data System (ADS)
Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.
2017-12-01
The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6-2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design featuring a Wolter Type 1 grazing-incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating, and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and for aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the Sun onto a plate covered with a phosphor coating that absorbs EUV photons and fluoresces in visible light. This 10-week REU project aimed to optimize an off-axis-mounted camera with 600-line-resolution NTSC video for extremely low-light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which set the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high-magnification, low-distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extremely low light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps toward implementation of the imaging system will include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.
A multi-channel setup to study fractures in scintillators
NASA Astrophysics Data System (ADS)
Tantot, A.; Bouard, C.; Briche, R.; Lefèvre, G.; Manier, B.; Zaïm, N.; Deschanel, S.; Vanel, L.; Di Stefano, P. C. F.
2016-12-01
To investigate fractoluminescence in scintillating crystals used for particle detection, we have developed a multi-channel setup built around samples of double-cleavage drilled compression (DCDC) geometry in a controllable atmosphere. The setup allows the continuous digitization over hours of various parameters, including the applied load and the compressive strain of the sample, as well as the acoustic emission. Emitted visible light is recorded with nanosecond resolution, and crack propagation is monitored using infrared lighting and a camera. An example of application to Bi4Ge3O12 (BGO) is provided.
Phosphor thermography technique in hypersonic wind tunnel - Feasibility study
NASA Astrophysics Data System (ADS)
Edy, J. L.; Bouvier, F.; Baumann, P.; Le Sant, Y.
Probative research has been undertaken at ONERA on a new technique of thermography in hypersonic wind tunnels. This method is based on the heat sensitivity of a luminescent coating applied to the model. The luminescent compound, excited by UV light, emits visible light, the properties of which depend on the phosphor temperature, among other factors. Preliminary blowdown wind tunnel tests have been performed, firstly for spot measurements and then for cartographic measurements using a 3-CCD video camera, a BETACAM video recorder and a digital image processing system. The results provide a good indication of the method feasibility.
NASA Technical Reports Server (NTRS)
Onate, Bryan
2016-01-01
The International Space Station (ISS) will soon have a platform for conducting fundamental research on large plants. The Plant Habitat (PH) is designed to be a fully controllable environment for high-quality plant physiological research. PH will control light quality, level, and timing; temperature; CO2; relative humidity; and irrigation, while scrubbing ethylene. Additional capabilities include leaf temperature and root-zone moisture and oxygen sensing. The light cap will have red (630 nm), blue (450 nm), green (525 nm), far-red (730 nm), and broad-spectrum white LEDs. There will be several internal cameras (visible and IR) to monitor and record plant growth and operations.
NASA Astrophysics Data System (ADS)
Liu, Z. X.; Xu, X. Q.; Gao, X.; Xia, T. Y.; Joseph, I.; Meyer, W. H.; Liu, S. C.; Xu, G. S.; Shao, L. M.; Ding, S. Y.; Li, G. Q.; Li, J. G.
2014-09-01
Experimental measurements of edge localized modes (ELMs) observed on the EAST experiment are compared to linear and nonlinear theoretical simulations of peeling-ballooning modes using the BOUT++ code. Simulations predict that the dominant toroidal mode number of the ELM instability becomes larger for lower current, which is consistent with the mode structure captured with visible light using an optical CCD camera. The poloidal mode number of the simulated pressure perturbation shows good agreement with the filamentary structure observed by the camera. The nonlinear simulation is also consistent with the experimentally measured energy loss during an ELM crash and with the radial speed of ELM effluxes measured using a gas puffing imaging diagnostic.
NASA Astrophysics Data System (ADS)
Kobayashi, Y.; Watanabe, K.; Imai, M.; Watanabe, K.; Naruse, N.; Takahashi, Y.
2016-12-01
Hyper-dense monitoring of the poor visibility caused by snowstorms is needed to build an alert system, because snowstorms are difficult to predict from observations at only a representative point. Previous approaches to poor-visibility monitoring using video analysis or visibility meters have two problems: they require a wired monitoring network (a large amount of data, at least 10 MB/sec), and the system cost is high ($10,000 at each point). Thus, the risk of poor visibility has mainly been measured at specific points, such as airports and mountain passes, and estimated two-dimensionally by simulation. To predict it two-dimensionally and accurately, we have developed a low-cost meteorological system for hyper-dense observation of snowstorms. Our low-cost visibility meter works by measuring the reduction in intensity of semiconductor laser light when snow particles block it. The system can also be extended to hyper-dense, real-time observation over a Zigbee wireless network, with A/D conversion and wireless transmission of data from temperature and illuminance sensors. We use a semiconductor laser chip ($5) as the light source and a three-mirror reflection mechanism to send the light directly onto a low-sensitivity illuminance sensor. Our visibility detection system ($500) is therefore much cheaper than previous ones. We checked the correlation between the intensity reduction measured by our system and the visibility recorded by a conventional video camera; the correlation coefficient was -0.67, indicating a strong correlation and showing that our developed system is practical. In conclusion, we have developed a low-cost meteorological system for observing the poor visibility caused by snowstorms, with the potential for hyper-dense monitoring over a wireless network, and have confirmed its practicability.
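For context, a laser transmissometer of this kind conventionally converts the measured intensity ratio into an extinction coefficient via the Beer-Lambert law and then into visibility via the Koschmieder relation V = 3.912/σ; the sketch below uses those textbook relations, since the abstract does not give the authors' exact formula.

import math

def visibility_from_intensity(i_received, i_clear, path_length_m):
    # Beer-Lambert: I = I0 * exp(-sigma * L), so sigma = -ln(I/I0) / L.
    # Koschmieder:  V = 3.912 / sigma (5% contrast threshold).
    sigma = -math.log(i_received / i_clear) / path_length_m
    return 3.912 / sigma

# Example: 40% transmission over a 10 m folded (three-mirror) path.
print(round(visibility_from_intensity(0.4, 1.0, 10.0)))  # roughly 43 m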
An Accreting Protoplanet: Confirmation and Characterization of LkCa15b
NASA Astrophysics Data System (ADS)
Follette, Katherine; Close, Laird; Males, Jared; Macintosh, Bruce; Sallum, Stephanie; Eisner, Josh; Kratter, Kaitlin M.; Morzinski, Katie; Hinz, Phil; Weinberger, Alycia; Rodigas, Timothy J.; Skemer, Andrew; Bailey, Vanessa; Vaz, Amali; Defrere, Denis; spalding, eckhart; Tuthill, Peter
2015-12-01
We present a visible light adaptive optics direct imaging detection of a faint point source separated by just 93 milliarcseconds (~15 AU) from the young star LkCa 15. Using Magellan AO's visible light camera in Simultaneous Differential Imaging (SDI) mode, we imaged the star at Hydrogen alpha and in the neighboring continuum as part of the Giant Accreting Protoplanet Survey (GAPplanetS) in November 2015. The continuum images provide a sensitive and simultaneous probe of PSF residuals and instrumental artifacts, allowing us to isolate H-alpha accretion luminosity from the LkCa 15b protoplanet, which lies well inside of the LkCa 15 transition disk gap. This detection, combined with a nearly simultaneous near-infrared detection with the Large Binocular Telescope, provides an unprecedented glimpse at a planetary system during the epoch of planet formation. [Nature result in press; please embargo until released.]
Global Ultraviolet Imaging Processing for the GGS Polar Visible Imaging System (VIS)
NASA Technical Reports Server (NTRS)
Frank, L. A.
1997-01-01
The Visible Imaging System (VIS) on the NASA Goddard Space Flight Center's Polar spacecraft was launched into orbit around Earth on February 24, 1996. Since shortly after launch, the Earth Camera subsystem of the VIS has been operated nearly continuously to acquire far-ultraviolet global images of Earth and its northern and southern auroral ovals. The only exceptions to this continuous imaging occurred for approximately 10 days at the times of the Polar spacecraft re-orientation maneuvers in October 1996 and April 1997. Since launch, approximately 525,000 images have been acquired with the VIS Earth Camera. The operational health of the VIS instrument continues to be excellent: all systems have operated nominally, with all voltages, currents, and temperatures remaining at nominal values. In addition, the sensitivity of the Earth Camera to ultraviolet light has remained constant throughout the operation period. Revised flight software was uploaded to the VIS to compensate for the spacecraft wobble. This is accomplished by electronic shuttering of the sensor in synchronization with the 6-second period of the wobble, thus recovering the original spatial resolution obtainable with the VIS Earth Camera. In addition, software patches were uploaded to make the VIS immune to signal dropouts that occur in the slip rings of the despun platform mechanism. These changes have worked very well. The VIS, and in particular the VIS Earth Camera, is fully operational and will continue to acquire global auroral images as the Sun progresses toward solar maximum conditions after the turn of the century.
NASA Astrophysics Data System (ADS)
Ou, Yangwei; Zhang, Hongbo; Li, Bin
2018-04-01
The purpose of this paper is to show that absolute orbit determination can be achieved based on spacecraft formation. The relative position vectors expressed in the inertial frame are used as measurements. In this scheme, an optical camera measures the relative line-of-sight (LOS) angles, i.e., the azimuth and elevation, while LIDAR (Light Detection And Ranging) or radar measures the range; high-accuracy inertial attitude knowledge is assumed to be available. When more deputies are included in the formation, the formation configuration is optimized from the perspective of Fisher information theory. Considering the limited field of view (FOV) of the cameras, the visibility of the spacecraft and the installation of the cameras are investigated. In simulations, an extended Kalman filter (EKF) is used to estimate position and velocity. The results show that navigation accuracy can be enhanced by using more deputies, and that the installation of the cameras significantly affects navigation performance.
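A minimal sketch of the measurement model such a scheme implies, mapping an inertial relative position vector to the camera's LOS angles plus the LIDAR/radar range. The angle conventions (azimuth from x toward y, elevation from the x-y plane) are assumptions, as the abstract does not specify them; this is the nonlinear function an EKF would linearize in its update step.

    import numpy as np

    def los_measurement(r_rel):
        """Map a relative position vector (inertial frame) to the
        measurement vector [azimuth, elevation, range]."""
        x, y, z = r_rel
        rng = np.linalg.norm(r_rel)            # range from LIDAR/radar
        azimuth = np.arctan2(y, x)             # camera LOS angle (assumed convention)
        elevation = np.arcsin(z / rng)         # camera LOS angle (assumed convention)
        return np.array([azimuth, elevation, rng])

    # Hypothetical chief-to-deputy relative position in km.
    z_meas = los_measurement(np.array([7000.0, 1200.0, -300.0]))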
Verri, G
2009-06-01
The photo-induced luminescence properties of Egyptian blue, Han blue and Han purple were investigated by means of near-infrared digital imaging. These pigments emit infrared radiation when excited in the visible range. The emission can be recorded by means of a modified commercial digital camera equipped with suitable glass filters. A variety of visible light sources were investigated to test their ability to excite luminescence in the pigments. Light-emitting diodes, which do not emit stray infrared radiation, proved an excellent source for the excitation of luminescence in all three compounds. In general, the use of visible radiation emitters with low emission in the infrared range allowed the presence of the pigments to be determined and their distribution to be spatially resolved. This qualitative imaging technique can be easily applied in situ for a rapid characterisation of materials. The results were compared to those for Egyptian green and for historical and modern blue pigments. Examples of the application of the technique on polychrome works of art are presented.
InfraCAM (trade mark): A Hand-Held Commercial Infrared Camera Modified for Spaceborne Applications
NASA Technical Reports Server (NTRS)
Manitakos, Daniel; Jones, Jeffrey; Melikian, Simon
1996-01-01
In 1994, Inframetrics introduced the InfraCAM(TM), a high resolution hand-held thermal imager. As the world's smallest, lightest and lowest power PtSi based infrared camera, the InfraCAM is ideal for a wide range of industrial, non destructive testing, surveillance and scientific applications. In addition to numerous commercial applications, the light weight and low power consumption of the InfraCAM make it extremely valuable for adaptation to spaceborne applications. Consequently, the InfraCAM has been selected by NASA Lewis Research Center (LeRC) in Cleveland, Ohio, for use as part of the DARTFire (Diffusive and Radiative Transport in Fires) spaceborne experiment. In this experiment, a solid fuel is ignited in a low gravity environment. The combustion period is recorded by both visible and infrared cameras. The infrared camera measures the emission from polymethyl methacrylate (PMMA) and combustion products in six distinct narrow spectral bands. Four cameras successfully completed all qualification tests at Inframetrics and at NASA Lewis. They are presently being used for ground based testing in preparation for space flight in the fall of 1995.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
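The abstract does not give the authors' flow algorithm; as a stand-in, dense optical flow fields of the kind used for such alignment can be computed with OpenCV's Farneback method. The synthetic frames below simulate a simple 3-pixel horizontal motion.

    import cv2
    import numpy as np

    # Synthetic grayscale frame pair: a random texture shifted 3 px to the right.
    rng = np.random.default_rng(0)
    prev_a = (rng.random((240, 320)) * 255).astype(np.uint8)
    next_a = np.roll(prev_a, 3, axis=1)

    # Dense per-pixel flow; flow_a[..., 0] and flow_a[..., 1] hold (dx, dy).
    flow_a = cv2.calcOpticalFlowFarneback(prev_a, next_a, None,
                                          pyr_scale=0.5, levels=3, winsize=15,
                                          iterations=3, poly_n=5,
                                          poly_sigma=1.2, flags=0)
    # Repeating this for the second (e.g., IR) sequence yields another flow
    # field whose structure can be matched against flow_a for correspondences.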
The development of large-aperture test system of infrared camera and visible CCD camera
NASA Astrophysics Data System (ADS)
Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying
2015-10-01
Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, so it should find a good market.
NASA Astrophysics Data System (ADS)
Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.; Young, L. A.; Stern, S. A.
2017-05-01
Light curves produced from color observations taken during New Horizons' approach to the Pluto system by its Multi-spectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty-seven observations were analyzed; they were obtained between 9 April and 3 July 2015, at a phase angle of 14.5° to 15.1°, a sub-observer latitude of 51.2°N to 51.5°N, and a sub-solar latitude of 41.2°N. MVIC has four color channels; all are discussed for completeness, but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy, respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which are at shorter wavelengths than the MVIC Blue and Red channels, respectively.
Automatic visibility retrieval from thermal camera images
NASA Astrophysics Data System (ADS)
Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan
2017-10-01
This study presents an automatic visibility retrieval of a FLIR A320 Stationary Thermal Imager installed on a measurement tower on the mountain Lagern located in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected from thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal differences to the surroundings, are used to derive the maximum visibility distance that is detectable in the image. To allow a stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24000 thermal images of the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (Vaisala FD12 working in the NIR spectra), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from European Center for Medium Weather Forecast (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrieval from the systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and defined target size.
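A minimal sketch of the edge-based idea: the farthest predefined target region that still shows detectable edges bounds the visibility distance. The target list, Canny thresholds, and edge-density criterion are assumptions, not the paper's calibrated procedure.

    import cv2
    import numpy as np

    def farthest_visible_target(img, targets):
        """img: grayscale thermal frame (uint8).
        targets: list of (distance_m, (x, y, w, h)) regions, e.g. mountain
        silhouettes or buildings, ordered here by sorting on distance.
        Returns the largest distance whose region still contains edges."""
        edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 50, 150)
        visible = 0.0
        for dist, (x, y, w, h) in sorted(targets):
            if edges[y:y+h, x:x+w].mean() > 1.0:   # edge-density threshold (assumed)
                visible = dist
        return visible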
Contrast enhancement for in vivo visible reflectance imaging of tissue oxygenation.
Crane, Nicole J; Schultz, Zachary D; Levin, Ira W
2007-08-01
Results are presented illustrating a straightforward algorithm to be used for real-time monitoring of oxygenation levels in blood cells and tissue based on the visible spectrum of hemoglobin. Absorbance images obtained from the visible reflection of white light through separate red and blue bandpass filters recorded by monochrome charge-coupled devices (CCDs) are combined to create enhanced images that suggest a quantitative correlation between the degree of oxygenated and deoxygenated hemoglobin in red blood cells. The filter bandpass regions are chosen specifically to mimic the color response of commercial 3-CCD cameras, representative of detectors with which the operating room laparoscopic tower systems are equipped. Adaptation of this filter approach is demonstrated for laparoscopic donor nephrectomies in which images are analyzed in terms of real-time in vivo monitoring of tissue oxygenation.
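A rough sketch of the two-filter combination, converting the red- and blue-band reflectance images to absorbance and taking their difference; the exact combination the authors use may differ, and the white-reference normalization is an assumption.

    import numpy as np

    def oxygenation_index(red, blue, white_red, white_blue, eps=1e-6):
        """Combine red- and blue-filtered reflectance images (float arrays)
        into a single contrast-enhanced oxygenation map."""
        # Beer-Lambert style absorbance relative to a white reference.
        A_red  = -np.log10((red  + eps) / (white_red  + eps))
        A_blue = -np.log10((blue + eps) / (white_blue + eps))
        # Oxy- and deoxyhemoglobin absorb differently in the two bands, so
        # the band difference tracks relative oxygenation (assumed form).
        return A_red - A_blue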
2018-01-15
In this view, individual layers of haze can be distinguished in the upper atmosphere of Titan, Saturn's largest moon. Titan's atmosphere features a rich and complex chemistry originating from methane and nitrogen and evolving into complex molecules, eventually forming the smog that surrounds the moon. This natural color image was taken in visible light with the Cassini spacecraft wide-angle camera on March 31, 2005, at a distance of approximately 20,556 miles (33,083 kilometers) from Titan. The view looks toward the north polar region on the moon's night side. Part of Titan's sunlit crescent is visible at right. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21902
NASA Astrophysics Data System (ADS)
Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.
2017-12-01
Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM), and lightning mapping arrays. These cameras provide significant spatial resolution advantages (~10 times or better) over ISS-LIS and GLM, but with lower temporal resolution; therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM and that may reduce the light reaching the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. The rate of change in the geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS, and GLM data to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see whether leader speeds can be accurately calculated under certain circumstances.
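A minimal sketch of the frame-overlay idea, not the project's open-source toolkit: threshold a video stack into lit/unlit pixels and difference the lit-pixel counts to estimate the rate of change of light escaping cloud top. The threshold and frame rate are hypothetical.

    import numpy as np

    def lit_area_rate(frames, thresh, fps):
        """frames: (N, H, W) array of video frames covering a lightning flash.
        Returns the frame-to-frame change in illuminated area, in pixels/s."""
        lit = (frames > thresh).reshape(len(frames), -1).sum(axis=1)
        return np.diff(lit) * fps

    # Hypothetical 30 fps clip; the peak of the returned series can then be
    # compared in time against NLDN, ISS-LIS, and GLM flash records.
    clip = np.random.default_rng(3).integers(0, 255, (60, 120, 160))
    rates = lit_area_rate(clip, thresh=200, fps=30.0)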
Mars Odyssey Observes Martian Moons
2018-02-22
Phobos and Deimos, the moons of Mars, are seen by the Mars Odyssey orbiter's Thermal Emission Imaging System, or THEMIS, camera. The images were taken in visible-wavelength light. THEMIS also recorded thermal-infrared imagery in the same scan. The apparent motion is due to progression of the camera's pointing during the 17-second span of the February 15, 2018, observation, not from motion of the two moons. This was the second observation of Phobos by Mars Odyssey; the first was on September 29, 2017. Researchers have been using THEMIS to examine Mars since early 2002, but the maneuver turning the orbiter around to point the camera at Phobos was developed only recently. The distance to Phobos from Odyssey during the observation was about 3,489 miles (5,615 kilometers). The distance to Deimos from Odyssey during the observation was about 12,222 miles (19,670 kilometers). An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA22248
Portable widefield imaging device for ICG-detection of the sentinel lymph node
NASA Astrophysics Data System (ADS)
Govone, Angelo Biasi; Gómez-García, Pablo Aurelio; Carvalho, André Lopes; Capuzzo, Renato de Castro; Magalhães, Daniel Varela; Kurachi, Cristina
2015-06-01
Metastasis is one of the major cancer complications, in which malignant cells detach from the primary tumor and reach other organs or tissues. The sentinel lymph node (SLN) is the first lymphatic structure to be affected by the malignant cells, but locating it is still a great challenge for the medical team, because lymph nodes lie between the muscle fibers, making their visualization difficult. To aid the surgeon in the detection of the SLN, the present study aims to develop a widefield fluorescence imaging device using indocyanine green (ICG) as the fluorescence marker. The system is composed of a 780 nm illumination unit, optical components for 810 nm fluorescence detection, two CCD cameras, a laptop, and dedicated software. The illumination unit has 16 diode lasers. A dichroic mirror and bandpass filters select and deliver the excitation light to the interrogated tissue, and select and deliver the fluorescence light to the camera. One camera is responsible for the acquisition of visible light and the other for the acquisition of the ICG fluorescence. The software, developed on the LabVIEW® platform, generates a real-time merged image in which the fluorescence spots, related to the lymph nodes, can be observed superimposed on the white light image. The system was tested in a mouse model, and a first patient with tongue cancer was imaged. Both results showed the potential of the presented fluorescence imaging system for sentinel lymph node detection.
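A minimal sketch of the merge step, using a simple threshold-and-tint overlay in OpenCV rather than the authors' LabVIEW implementation; the threshold, blend weight, and green display convention are assumptions.

    import cv2
    import numpy as np

    def merge_fluorescence(visible_bgr, nir_fluo, alpha=0.6, thresh=30):
        """Overlay an 8-bit NIR fluorescence image on the visible image,
        tinting fluorescent spots green and alpha-blending the result."""
        mask = nir_fluo > thresh
        overlay = visible_bgr.copy()
        overlay[mask] = (0, 255, 0)
        return cv2.addWeighted(overlay, alpha, visible_bgr, 1 - alpha, 0)

    # Synthetic example: a dark field with one bright fluorescent spot.
    vis = np.zeros((240, 320, 3), np.uint8)
    fluo = np.zeros((240, 320), np.uint8)
    fluo[100:120, 150:170] = 200
    merged = merge_fluorescence(vis, fluo)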
High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
Characterization of flotation color by machine vision
NASA Astrophysics Data System (ADS)
Siren, Ari
1999-09-01
Flotation is the most common industrial method by which valuable minerals are separated from waste rock after crushing and grinding the ore. For process control, flotation plants and devices are equipped with conventional and specialized sensors. However, certain variables are left to the visual observation of the operator, such as the color of the froth and the size of the bubbles in the froth. The ChaCo Project (EU Project 24931) was launched in November 1997. In this project a measuring station was built at the Pyhasalmi flotation plant. The system includes an RGB camera and a spectral color measuring instrument for color inspection of the flotation. The visible spectral range is also measured so that the operators' comments on the color of the froth can be related to the sphalerite concentration and the process balance. Different dried mineral (sphalerite) ratios were studied with iron pyrite to determine the minerals' typical spectral features. The correlation between sphalerite spectral reflectance and sphalerite concentration over various wavelengths is used to select a proper camera system with filters, and to compare the results with the color information from the RGB camera. Various candidate machine vision techniques are discussed for this application, and the preprocessed information on the dried mineral colors is used and adapted to the online measuring station. Moving froth bubbles produce total reflections, disturbing the color information; polarization filters are used and the results are reported. Reflectance outside the visible range is also studied and reported.
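A rough sketch of using per-wavelength correlation to pick camera filter bands, with hypothetical reflectance spectra and concentration values standing in for the dried-mineral measurements.

    import numpy as np

    def best_wavelengths(reflectance, conc, k=3):
        """reflectance: (n_samples, n_wavelengths) spectra of mineral mixes.
        conc: (n_samples,) sphalerite fraction of each sample.
        Returns the k wavelength indices most correlated (in magnitude)
        with concentration, plus the full correlation profile."""
        r = np.array([np.corrcoef(reflectance[:, j], conc)[0, 1]
                      for j in range(reflectance.shape[1])])
        return np.argsort(-np.abs(r))[:k], r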
VISTA Captures Celestial Cat's Hidden Secrets
NASA Astrophysics Data System (ADS)
2010-04-01
The Cat's Paw Nebula, NGC 6334, is a huge stellar nursery, the birthplace of hundreds of massive stars. In a magnificent new ESO image taken with the Visible and Infrared Survey Telescope for Astronomy (VISTA) at the Paranal Observatory in Chile, the glowing gas and dust clouds obscuring the view are penetrated by infrared light and some of the Cat's hidden young stars are revealed. Towards the heart of the Milky Way, 5500 light-years from Earth in the constellation of Scorpius (the Scorpion), the Cat's Paw Nebula stretches across 50 light-years. In visible light, gas and dust are illuminated by hot young stars, creating strange reddish shapes that give the object its nickname. A recent image by ESO's Wide Field Imager (WFI) at the La Silla Observatory (eso1003) captured this visible light view in great detail. NGC 6334 is one of the most active nurseries of massive stars in our galaxy. VISTA, the latest addition to ESO's Paranal Observatory in the Chilean Atacama Desert, is the world's largest survey telescope (eso0949). It works at infrared wavelengths, seeing right through much of the dust that is such a beautiful but distracting aspect of the nebula, and revealing objects hidden from the sight of visible light telescopes. Visible light tends to be scattered and absorbed by interstellar dust, but the dust is nearly transparent to infrared light. VISTA has a main mirror that is 4.1 metres across and it is equipped with the largest infrared camera on any telescope. It shares the spectacular viewing conditions with ESO's Very Large Telescope (VLT), which is located on the nearby summit. With this powerful instrument at their command, astronomers were keen to see the birth pains of the big young stars in the Cat's Paw Nebula, some nearly ten times the mass of the Sun. The view in the infrared is strikingly different from that in visible light. With the dust obscuring the view far less, they can learn much more about how these stars form and develop in their first few million years of life. VISTA's very wide field of view allows the whole star-forming region to be imaged in one shot with much greater clarity than ever before. The VISTA image is filled with countless stars of our Milky Way galaxy overlaid with spectacular tendrils of dark dust that are seen here fully for the first time. The dust is sufficiently thick in places to block even the near-infrared radiation to which VISTA's camera is sensitive. In many of the dusty areas, such as those close to the centre of the picture, features that appear orange are apparent - evidence of otherwise hidden active young stars and their accompanying jets. Further out though, slightly older stars are laid bare to VISTA's vision, revealing the processes taking them from their first nuclear fusion along the unsteady path of the first few million years of their lives. The VISTA telescope is now embarking on several big surveys of the southern sky that will take years to complete. The telescope's large mirror, high quality images, sensitive camera and huge field of view make it by far the most powerful infrared survey telescope on Earth. As this striking image shows, VISTA will keep astronomers busy analysing data they could not have otherwise acquired. This cat is out of the bag. More information ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. 
It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory and VISTA, the world's largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
2017-07-28
Cassini gazed toward high southern latitudes near Saturn's south pole to observe ghostly curtains of dancing light -- Saturn's southern auroras, or southern lights. These natural light displays at the planet's poles are created by charged particles raining down into the upper atmosphere, making gases there glow. The dark area at the top of this scene is Saturn's night side. The auroras rotate from left to right, curving around the planet as Saturn rotates over about 70 minutes, compressed here into a movie sequence of about five seconds. Background stars are seen sliding behind the planet. Cassini was moving around Saturn during the observation, keeping its gaze fixed on a particular spot on the planet, which causes a shift in the distant background over the course of the observation. Some of the stars seem to make a slight turn to the right just before disappearing. This effect is due to refraction -- the starlight gets bent as it passes through the atmosphere, which acts as a lens. Random bright specks and streaks appearing from frame to frame are due to charged particles and cosmic rays hitting the camera detector. The aim of this observation was to observe seasonal changes in the brightness of Saturn's auroras, and to compare with the simultaneous observations made by Cassini's infrared and ultraviolet imaging spectrometers. The original images in this movie sequence have a size of 256x256 pixels; both the original size and a version enlarged to 500x500 pixels are available here. The small image size is the result of a setting on the camera that allows for shorter exposure times than full-size (1024x1024 pixel) images. This enabled Cassini to take more frames in a short time and still capture enough photons from the auroras for them to be visible. The images were taken in visible light using the Cassini spacecraft narrow-angle camera on July 20, 2017, at a distance of about 620,000 miles (1 million kilometers) from Saturn. The views look toward 74 degrees south latitude on Saturn. Image scale is about 0.9 mile (1.4 kilometers) per pixel on Saturn. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21623
Research on a solid state-streak camera based on an electro-optic crystal
NASA Astrophysics Data System (ADS)
Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang
2006-06-01
With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely utilized in measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and making measurements of laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. Current streak cameras are based on a sweep electric pulse and an image converting tube with a wavelength-sensitive photocathode ranging from the x-ray to the near-infrared region; such cameras are comparatively costly and complex. This paper describes the design and performance of a new-style streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera achieves time resolution by deflecting the photon beam directly via the electro-optic effect, and can replace current streak cameras from the visible to the near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10⁻¹² m/V), and an optimized optical system may lead to a time resolution better than 1 ns.
The Two Moons of Mars As Seen from 'Husband Hill'
NASA Technical Reports Server (NTRS)
2005-01-01
Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. Spirit took this succession of images at 150-second intervals from a perch atop 'Husband Hill' in Gusev Crater on martian day, or sol, 594 (Sept. 4, 2005), as the faster-moving martian moon Phobos was passing Deimos in the night sky. Phobos is the brighter object on the left and Deimos is the dimmer object on the right. The bright star Aldebaran and some other stars in the constellation Taurus are visible as star trails. Most of the other streaks in the image are the result of cosmic rays lighting up random groups of pixels in the camera. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with its panoramic camera using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
Violent flickering in Black Holes
NASA Astrophysics Data System (ADS)
2008-10-01
Unique observations of the flickering light from the surroundings of two black holes provide new insights into the colossal energy that flows at their hearts. By mapping out how well the variations in visible light match those in X-rays on very short timescales, astronomers have shown that magnetic fields must play a crucial role in the way black holes swallow matter. Like the flame from a candle, light coming from the surroundings of a black hole is not constant -- it flares, sputters and sparkles. "The rapid flickering of light from a black hole is most commonly observed at X-ray wavelengths," says Poshak Gandhi, who led the international team that reports these results. "This new study is one of only a handful to date that also explore the fast variations in visible light, and, most importantly how these fluctuations relate to those in X-rays." The observations tracked the shimmering of the black holes simultaneously using two different instruments, one on the ground and one in space. The X-ray data were taken using NASA's Rossi X-ray Timing Explorer satellite. The visible light was collected with the high speed camera ULTRACAM, a visiting instrument at ESO's Very Large Telescope (VLT), recording up to 20 images a second. ULTRACAM was developed by team members Vik Dhillon and Tom Marsh. "These are among the fastest observations of a black hole ever obtained with a large optical telescope," says Dhillon. To their surprise, astronomers discovered that the brightness fluctuations in the visible light were even more rapid than those seen in X-rays. In addition, the visible-light and X-ray variations were found not to be simultaneous, but to follow a repeated and remarkable pattern: just before an X-ray flare the visible light dims, and then surges to a bright flash for a tiny fraction of a second before rapidly decreasing again. None of this radiation emerges directly from the black hole, but from the intense energy flows of electrically charged matter in its vicinity. The environment of a black hole is constantly being reshaped by a riotous mêlée of strong and competing forces such as gravity, magnetism and explosive pressure. As a result, light emitted by the hot flows of matter varies in brightness in a muddled and haphazard way. "But the pattern found in this new study possesses a stable structure that stands out amidst an otherwise chaotic variability, and so, it can yield vital clues about the dominant underlying physical processes in action," says team member Andy Fabian. The visible-light emission from the neighbourhoods of black holes was widely thought to be a secondary effect, with a primary X-ray outburst illuminating the surrounding gas that subsequently shone in the visible range. But if this were so, any visible-light variations would lag behind the X-ray variability, and would be much slower to peak and fade away. "The rapid visible-light flickering now discovered immediately rules out this scenario for both systems studied," asserts Gandhi. "Instead the variations in the X-ray and visible light output must have some common origin, and one very close to the black hole itself." Strong magnetic fields represent the best candidate for the dominant physical process.
Acting as a reservoir, they can soak up the energy released close to the black hole, storing it until it can be discharged either as hot (multi-million degree) X-ray emitting plasma, or as streams of charged particles travelling at close to the speed of light. The division of energy into these two components can result in the characteristic pattern of X-ray and visible-light variability.
South Malea Planum, By The Dawn's Early Light
NASA Technical Reports Server (NTRS)
1999-01-01
MOC 'sees' by the dawn's early light! This picture was taken over the high southern polar latitudes during the first week of May 1999. The area shown is currently in southern winter darkness. Because sunlight is scattered over the horizon by aerosols--dust and ice particles--suspended in the atmosphere, sufficient light reaches regions within a few degrees of the terminator (the line dividing night and day) to be visible to the Mars Global Surveyor Mars Orbiter Camera (MOC) when the maximum exposure settings are used. This picture shows a polygonally-patterned surface on southern Malea Planum. At the time the picture was taken, the sun was more than 4.5° below the northern horizon. The scene covers an area 3 kilometers (1.9 miles) wide, with the illumination from the top of the picture. In this frame, the surface appears a relatively uniform gray. At the time the picture was acquired, the surface was covered with south polar wintertime frost. The highly reflective frost, in fact, may have contributed to the increased visibility of this surface. This 'twilight imaging' technique for viewing Mars can only work near the terminator; thus in early May only regions between about 67°S and 74°S were visible in twilight images in the southern hemisphere, and a similar narrow latitude range could be imaged in the northern hemisphere. MOC cannot 'see' in the total darkness of full-borne night. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
The appearance and propagation of filaments in the private flux region in Mega Amp Spherical Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, J. R.; Fishpool, G. M.; Thornton, A. J.
2015-09-15
The transport of particles via intermittent filamentary structures in the private flux region (PFR) of plasmas in the MAST tokamak has been investigated using a fast framing camera recording visible light emission from the volume of the lower divertor, as well as Langmuir probes and IR thermography monitoring particle and power fluxes to plasma-facing surfaces in the divertor. The visible camera data suggest that, in the divertor volume, fluctuations in light emission above the X-point are strongest in the scrape-off layer (SOL). Conversely, in the region below the X-point, it is found that these fluctuations are strongest in the PFR of the inner divertor leg. Detailed analysis of the appearance of these filaments in the camera data suggests that they are approximately circular, around 1–2 cm in diameter, but appear more elongated near the divertor target. The most probable toroidal quasi-mode number is between 2 and 3. These filaments eject plasma deeper into the private flux region, sometimes by the production of secondary filaments, moving at a speed of 0.5–1.0 km/s. Probe measurements at the inner divertor target suggest that the fluctuations in the particle flux to the inner target are strongest in the private flux region, and that the amplitude and distribution of these fluctuations are insensitive to the electron density of the core plasma, auxiliary heating and whether the plasma is single-null or double-null. It is found that the e-folding width of the time-average particle flux in the PFR decreases with increasing plasma current, but the fluctuations appear to be unaffected. At the outer divertor target, the fluctuations in particle and power fluxes are strongest in the SOL.
New Orleans after Hurricane Katrina
2005-09-08
JSC2005e37990 (8 September 2005) --- Flooding of large sections of I-610 and the I-610/I-10 interchange (center) are visible to the east of the 17th Street Canal in this image acquired on September 8, 2005 from the International Space Station. Flooded regions are dark greenish brown, while dry areas are light brown to tan. North is to top of image, which was cropped from the digital still camera's original frame, ISS011-E-12527.
OP09O-OP404-9 Wide Field Camera 3 CCD Quantum Efficiency Hysteresis
NASA Technical Reports Server (NTRS)
Collins, Nick
2009-01-01
The HST/Wide Field Camera 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. The first observed manifestation of QEH was the presence in a small percentage of flat-field images of a bowtie-shaped contrast that spanned the width of each chip. At the nominal operating temperature of -83C, the contrast observed for this feature was typically 0.1-0.2% or less, though at warmer temperatures contrasts up to 5% (at -50C) have been observed. The bowtie morphology was replicated using flight spare detectors in tests at the GSFC Detector Characterization Laboratory by power cycling the detector while cold. Continued investigation revealed that a clearly-related global QE suppression at the approximately 5% level can be produced by cooling the detector in the dark; subsequent flat-field exposures at a constant illumination show asymptotically increasing response. This QE "pinning" can be achieved with a single high signal flat-field or a series of lower signal flats; a visible light (500-580nm) flat-field with a signal level of several hundred thousand electrons per pixel is sufficient for QE pinning at both optical (600nm) and near-UV (230nm) wavelengths. We are characterizing the timescale for the detectors to become unpinned and developing a protocol for flashing the WFC3 CCDs with the instrument's internal calibration system in flight. A preliminary estimate of the decay timescale for one detector is that a drop of 0.1-0.2% occurs over a ten day period, indicating that relatively infrequent cal lamp exposures can mitigate the behavior to extremely low levels.
Backscatter absorption gas imaging systems and light sources therefor
Kulp, Thomas Jan [Livermore, CA; Kliner, Dahv A. V. [San Ramon, CA; Sommers, Ricky [Oakley, CA; Goers, Uta-Barbara [Campbell, NY; Armstrong, Karla M [Livermore, CA
2006-12-19
The location of gases that are not visible to the unaided human eye can be determined using tuned light sources that spectroscopically probe the gases and cameras that can provide images corresponding to the absorption of the gases. The present invention is a light source for a backscatter absorption gas imaging (BAGI) system, and an imaging system incorporating the light source, that can be used to remotely detect and produce images of "invisible" gases. The inventive light source has a light producing element, an optical amplifier, and an optical parametric oscillator to generate wavelength tunable light in the IR. By using a multi-mode light source and an amplifier that operates using 915 nm pump sources, the power consumption of the light source is reduced to a level that can be operated by batteries for long periods of time. In addition, the light source is tunable over the absorption bands of many hydrocarbons, making it useful for detecting hazardous gases.
IR Spectrometer Using 90-Degree Off-Axis Parabolic Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert M. Malone, Ian J. McKenna
2008-03-01
A gated spectrometer has been designed for real-time, pulsed infrared (IR) studies at the National Synchrotron Light Source at the Brookhaven National Laboratory. A pair of 90-degree, off-axis parabolic mirrors are used to relay the light from an entrance slit to an output recording camera. With an initial wavelength range of 1500–4500 nm required, gratings could not be used in the spectrometer because grating orders would overlap. A magnesium oxide prism, placed between these parabolic mirrors, serves as the dispersion element. The spectrometer is doubly telecentric. With proper choice of the air spacing between the prism and the second parabolic mirror, any spectral region of interest within the InSb camera array's sensitivity region can be recorded. The wavelengths leaving the second parabolic mirror are collimated, thereby relaxing the camera positioning tolerance. To set up the instrument, two different wavelength (visible) lasers are introduced at the entrance slit and made collinear with the optical axis via flip mirrors. After dispersion by the prism, these two laser beams are directed to tick marks located on the outside housing of the gated IR camera. This provides first-order wavelength calibration for the instrument. Light that is reflected off the front prism face is coupled into a high-speed detector to verify steady radiance during the gated spectral imaging. Alignment features include tick marks on the prism and parabolic mirrors. This instrument was designed to complement single-point pyrometry, which provides continuous time histories of a small collection of spots from shock-heated targets.
IR Spectrometer Using 90-degree Off-axis Parabolic Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert M. Malone, Richard, G. Hacking, Ian J. McKenna, and Daniel H. Dolan
2008-09-02
A gated spectrometer has been designed for real-time, pulsed infrared (IR) studies at the National Synchrotron Light Source at the Brookhaven National Laboratory. A pair of 90-degree, off-axis parabolic mirrors are used to relay the light from an entrance slit to an output IR recording camera. With an initial wavelength range of 1500–4500 nm required, gratings could not be used in the spectrometer because grating orders would overlap. A magnesium oxide prism, placed between these parabolic mirrors, serves as the dispersion element. The spectrometer is doubly telecentric. With proper choice of the air spacing between the prism and the second parabolic mirror, any spectral region of interest within the InSb camera array's sensitivity region can be recorded. The wavelengths leaving the second parabolic mirror are collimated, thereby relaxing the camera positioning tolerance. To set up the instrument, two different wavelength (visible) lasers are introduced at the entrance slit and made collinear with the optical axis via flip mirrors. After dispersion by the prism, these two laser beams are directed to tick marks located on the outside housing of the gated IR camera. This provides first-order wavelength calibration for the instrument. Light that is reflected off the front prism face is coupled into a high-speed detector to verify steady radiance during the gated spectral imaging. Alignment features include tick marks on the prism and parabolic mirrors. This instrument was designed to complement single-point pyrometry, which provides continuous time histories of a small collection of spots from shock-heated targets.
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: they matter both for the fundamental understanding of the processes underlying the CNT photoresponse and for the development of a novel infrared-sensitive material with unique optical and electrical features. In this research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolates the sensor from background noise, and the top parylene packing blocks humid environmental factors. The fabrication process was optimized by dielectrophoresis with real-time electrical monitoring and by multiple annealing steps, improving fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make the nano-sensor IR camera possible. To explore more of the infrared light field, we apply compressive sensing to light field sampling, 3-D imaging, and video sensing. The redundancy of the whole light field, including the angular images of the light field, the binocular images of the 3-D camera, and the temporal information of video streams, is extracted and expressed in a compressive framework; computational algorithms then reconstruct images beyond 2-D static information. Super-resolution signal processing is then used to enhance and improve the spatial resolution of the images. The whole camera system delivers deeply detailed content for infrared spectrum sensing.
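The dissertation's reconstruction algorithms are not specified in the abstract; as a generic stand-in for the compressive recovery step, the sketch below solves a sparse reconstruction problem with iterative soft thresholding (ISTA).

    import numpy as np

    def ista(A, y, lam=0.1, step=None, n_iter=200):
        """Minimize ||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding,
        recovering a sparse signal x from compressive measurements y = A x."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - step * A.T @ (A @ x - y)                       # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0) # soft threshold
        return x

    # Toy demo: recover a 3-sparse signal from 40 random measurements of 100 unknowns.
    A = np.random.default_rng(0).normal(size=(40, 100))
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
    x_hat = ista(A, A @ x_true, lam=0.05)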
Efficient coding and detection of ultra-long IDs for visible light positioning systems.
Zhang, Hualong; Yang, Chuanchuan
2018-05-14
Visible light positioning (VLP) is a promising technique to complement Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), with the advantages of low cost and high accuracy. The situation becomes even more crucial for indoor environments, where satellite signals are weak or even unavailable. For large-scale application of VLP, there would be a considerable number of light-emitting diode (LED) IDs, which creates a demand for detecting long LED IDs. In particular, to provision indoor localization globally, a convenient way is to program a unique ID into each LED during manufacture. This poses a big challenge for image sensors, such as the CMOS cameras in everybody's hands, since a long ID spans multiple frames. In this paper, we investigate the detection of ultra-long IDs using rolling shutter cameras. By analyzing the pattern of data loss in each frame, we propose a novel coding technique to improve the efficiency of LED ID detection. We studied the performance of the Reed-Solomon (RS) code in this system and designed a new coding method that considers the trade-off between performance and decoding complexity. The coding technique decreases the number of frames needed in data processing, significantly reduces the detection time, and improves the accuracy of detection. Numerical and experimental results show that the detected LED ID can be much longer with the coding technique. Moreover, our proposed coding method is shown to achieve performance close to that of the RS code while the decoding complexity is much lower.
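A minimal sketch of erasure-protected ID recovery, using the third-party reedsolo package as a stand-in for the paper's custom RS-based design; the ID length, parity count, and loss positions are all hypothetical. Bytes lost to inter-frame gaps have known positions, so they can be treated as erasures, which RS codes correct twice as efficiently as errors.

    from reedsolo import RSCodec

    # Hypothetical 16-byte LED ID protected with 8 RS parity bytes,
    # so up to 8 known-position erasures are recoverable.
    rsc = RSCodec(8)
    led_id = bytes(range(16))
    codeword = bytearray(rsc.encode(led_id))

    lost = [2, 5, 11, 20]           # byte positions wiped by inter-frame gaps
    for p in lost:
        codeword[p] = 0

    # Recent reedsolo versions return (message, message+ecc, errata positions).
    decoded = rsc.decode(bytes(codeword), erase_pos=lost)[0]
    assert bytes(decoded) == led_id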
Design of smartphone-based spectrometer to assess fresh meat color
NASA Astrophysics Data System (ADS)
Jung, Youngkee; Kim, Hyun-Wook; Kim, Yuan H. Brad; Bae, Euiwon
2017-02-01
Based on a smartphone's integrated camera, a new optical attachment, and the phone's inherent computing power, we propose an instrument design and validation that can provide an objective and accurate method to determine surface meat color change and myoglobin redox forms using a smartphone-based spectrometer. The system is designed as a reflection spectrometer that mimics the conventional spectrometry commonly used for meat color assessment. We use a 3D printing technique to make an optical cradle that holds all of the optical components for light collection, collimation, and dispersion, together with a suitable sample chamber. Light reflected from a sample enters a pinhole and is subsequently collimated by a convex lens. A diffraction grating spreads the wavelengths over the camera's pixels to record a high-resolution spectrum. Pixel positions in the smartphone image are calibrated to wavelength using three laser pointers with different wavelengths: 405, 532, and 650 nm. Using an in-house app, the camera images are converted into a spectrum over the visible wavelength range under an external light source. A controlled experiment simulating the refrigeration and shelving of meat was conducted, and the results showed that the device can measure color change quantitatively and spectroscopically. We expect that this technology can be adapted to any smartphone and used for field-deployable color spectrum assays as a practical tool for various food sectors.
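A minimal sketch of the three-point pixel-to-wavelength calibration, assuming a linear dispersion model; the pixel columns where the laser lines fall are hypothetical, while the three wavelengths are those given in the abstract.

    import numpy as np

    # Pixel columns of the three calibration laser lines (hypothetical)
    # and their known wavelengths in nm.
    pixels = np.array([212.0, 498.0, 761.0])
    wavelengths = np.array([405.0, 532.0, 650.0])

    # Fit lambda = a * pixel + b; any column can then be mapped to nm.
    a, b = np.polyfit(pixels, wavelengths, 1)

    def col_to_nm(col):
        return a * col + b

    print(f"column 400 -> {col_to_nm(400):.1f} nm")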
2007-07-26
A surge in brightness appears on the rings directly opposite the Sun from the Cassini spacecraft. This "opposition surge" travels across the rings as the spacecraft watches. This view looks toward the sunlit side of the rings from about 9 degrees below the ringplane. The image was taken in visible light with the Cassini spacecraft wide-angle camera on June 12, 2007 using a spectral filter sensitive to wavelengths of infrared light centered at 853 nanometers. The view was acquired at a distance of approximately 524,374 kilometers (325,830 miles) from Saturn. Image scale is 31 kilometers (19 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA08992
Laser-speckle-visibility acoustic spectroscopy in soft turbid media.
Wintzenrieth, Frédéric; Cohen-Addad, Sylvie; Le Merrer, Marie; Höhler, Reinhard
2014-01-01
We image the evolution in space and time of an acoustic wave propagating along the surface of turbid soft matter by shining coherent light on the sample. The wave locally modulates the speckle interference pattern of the backscattered light, which is recorded using a camera. We show both experimentally and theoretically how the temporal and spatial correlations in this pattern can be analyzed to obtain the acoustic wavelength and attenuation length. The technique is validated using shear waves propagating in aqueous foam. It may be applied to other kinds of acoustic waves in different forms of turbid soft matter such as biological tissues, pastes, or concentrated emulsions.
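A rough sketch of the temporal side of that correlation analysis: a normalized correlation between speckle frames separated by a given lag, computed from a hypothetical image stack. This is a simplified stand-in for the paper's estimators, which also exploit spatial correlations.

    import numpy as np

    def temporal_correlation(stack, lag):
        """stack: (N, H, W) array of speckle frames from the camera.
        Returns the normalized intensity correlation at the given frame lag;
        its decay versus lag tracks the acoustic modulation of the speckle."""
        a = stack[:-lag].ravel()
        b = stack[lag:].ravel()
        return np.corrcoef(a, b)[0, 1]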
HUBBLE SPIES BROWN DWARFS IN NEARBY STELLAR NURSERY
NASA Technical Reports Server (NTRS)
2002-01-01
Probing deep within a neighborhood stellar nursery, NASA's Hubble Space Telescope uncovered a swarm of newborn brown dwarfs. The orbiting observatory's near-infrared camera revealed about 50 of these objects throughout the Orion Nebula's Trapezium cluster [image at right], about 1,500 light-years from Earth. Appearing like glistening precious stones surrounding a setting of sparkling diamonds, more than 300 fledgling stars and brown dwarfs surround the brightest, most massive stars [center of picture] in Hubble's view of the Trapezium cluster's central region. All of the celestial objects in the Trapezium were born together in this hotbed of star formation. The cluster is named for the trapezoidal alignment of those central massive stars. Brown dwarfs are gaseous objects with masses so low that their cores never become hot enough to fuse hydrogen, the thermonuclear fuel stars like the Sun need to shine steadily. Instead, these gaseous objects fade and cool as they grow older. Brown dwarfs around the age of the Sun (5 billion years old) are very cool and dim, and therefore are difficult for telescopes to find. The brown dwarfs discovered in the Trapezium, however, are youngsters (1 million years old). So they're still hot and bright, and easier to see. This finding, along with observations from ground-based telescopes, is further evidence that brown dwarfs, once considered exotic objects, are nearly as abundant as stars. The image and results appear in the Sept. 20 issue of the Astrophysical Journal. The brown dwarfs are too dim to be seen in a visible-light image taken by the Hubble telescope's Wide Field and Planetary Camera 2 [picture at left]. This view also doesn't show the assemblage of infant stars seen in the near-infrared image. That's because the young stars are embedded in dense clouds of dust and gas. The Hubble telescope's near-infrared camera, the Near Infrared Camera and Multi-Object Spectrometer, penetrated those clouds to capture a view of those objects. The brown dwarfs are the faintest objects in the image. Surveying the cluster's central region, the Hubble telescope spied brown dwarfs with masses equaling 10 to 80 Jupiters. Researchers think there may be less massive brown dwarfs that are beyond the limits of Hubble's vision. The near-infrared image was taken Jan. 17, 1998. Two near-infrared filters were used to obtain information on the colors of the stars at two wavelengths (1.1 and 1.6 microns). The Trapezium picture is 1 light-year across. This composite image was made from a 'mosaic' of nine separate, but adjoining images. In this false-color image, blue corresponds to warmer, more massive stars, and red to cooler, less massive stars and brown dwarfs, and stars that are heavily obscured by dust. The visible-light data were taken in 1994 and 1995. Credits for near-infrared image: NASA; K.L. Luhman (Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass.); and G. Schneider, E. Young, G. Rieke, A. Cotera, H. Chen, M. Rieke, R. Thompson (Steward Observatory, University of Arizona, Tucson, Ariz.) Credits for visible-light picture: NASA, C.R. O'Dell and S.K. Wong (Rice University)
Yang, Xiaofeng; Wu, Wei; Wang, Guoan
2015-04-01
This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging, combining in vivo fluorescence excitation technology, multi-channel spectral camera technology, and image fusion software technology. A visible and near-infrared ring-LED excitation source, multi-channel band-pass filters, spectral-camera two-CCD optical sensor technology, and computer systems were integrated, and, as a result, a new surgical optical navigation system was successfully developed. When a near-infrared fluorescent agent is injected, the system can display anatomical images of the tissue surface and near-infrared fluorescence functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes, and tumor margins that the doctor cannot find with the naked eye intra-operatively, effectively guiding the surgeon in removing tumor tissue and significantly improving the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.
Upgrades and Modifications of the NASA Ames HFFAF Ballistic Range
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.; Wilder, Michael C.; Cornelison, Charles J.; Perez, Alfredo J.
2017-01-01
The NASA Ames Hypervelocity Free Flight Aerodynamics Facility (HFFAF) ballistic range is described. The various configurations of the shadowgraph stations are presented, including the original film-based stations and configurations with two different types of digital cameras. Resolution tests for the three shadowgraph station configurations are described. The advantages of the digital cameras are discussed, including the immediate availability of the shadowgraphs. The final shadowgraph station configuration is a mix of 26 Nikon cameras and 6 PI-MAX2 cameras. Two types of trigger light sheet stations are described: visible and IR. The two gunpowders used for the NASA Ames 6.25/1.50 light gas guns are presented: the Hercules HC-33-FS powder (no longer available) and the St. Marks Powder WC 886 powder. The results from eight proof shots for the two powders are presented. Both muzzle velocities and piston velocities are 5-9% lower for the new St. Marks WC 886 powder than for the old Hercules HC-33-FS powder. The experimental and CFD (computational) piston and muzzle velocities are in good agreement. Shadowgraph-reading software that employs template-matching pattern recognition to locate the ballistic-range model is described. Templates are generated from a 3D solid model of the ballistic-range model. The accuracy of the approach is assessed using a set of computer-generated test images.
Low-cost panoramic infrared surveillance system
NASA Astrophysics Data System (ADS)
Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George
2017-05-01
A nighttime surveillance concept consisting of a single-surface omnidirectional mirror assembly and an uncooled Vanadium Oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offer significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system, as sketched below. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. A description of the method by which these images are analyzed is given, along with a presentation of these results side-by-side with the output of the 50 mm LWIR imager and a panoramic visible light imager. Finally, a discussion of the concept and its future development is given.
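A minimal sketch of the rectangular-to-polar unwrapping step referenced above, assuming OpenCV; the function and parameter names are illustrative, and a real system would calibrate the mirror center and annulus radii:

    import numpy as np
    import cv2

    def unwrap_panorama(img, cx, cy, r_in, r_out, out_w=1440, out_h=360):
        # (cx, cy): mirror centre in the raw image; r_in/r_out: radii of
        # the useful annulus, in pixels. Output rows span elevation and
        # columns span 360 degrees of azimuth.
        az = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
        r = np.linspace(r_out, r_in, out_h)
        rr, aa = np.meshgrid(r, az, indexing="ij")
        map_x = (cx + rr * np.cos(aa)).astype(np.float32)
        map_y = (cy + rr * np.sin(aa)).astype(np.float32)
        return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

Background subtraction can then run directly on the unwrapped frames, for example by differencing each frame against a running average.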
Automatic fog detection for public safety by using camera images
NASA Astrophysics Data System (ADS)
Pagani, Giuliano Andrea; Roth, Martin; Wauben, Wiel
2017-04-01
Fog and reduced visibility have considerable impact on the performance of road, maritime, and aeronautical transportation networks. The impact ranges from minor delays to more serious congestion or unavailability of the infrastructure, and can even lead to damage or loss of lives. Visibility is traditionally measured manually by meteorological observers using landmarks at known distances in the vicinity of the observation site. Nowadays, distributed cameras facilitate inspection of more locations from one remote monitoring center, but the main idea is still that an operator derives the visibility or presence of fog by judging the scenery and the presence of landmarks. Visibility sensors are also used, but they are rather costly and require regular maintenance. Moreover, observers, and in particular sensors, give only visibility information that is representative for a limited area. Hence the current density of visibility observations is insufficient to give detailed information on the presence of fog. Cameras are increasingly deployed for surveillance and security in cities and for monitoring traffic along main transportation ways. In addition to this primary use, we consider cameras as potential sensors to automatically identify low-visibility conditions. Our approach uses machine learning techniques to determine the presence of fog and/or to estimate the visibility. For that purpose, a set of features is extracted from the camera images, such as the number of edges, brightness, transmission of the image dark channel, and fractal dimension. In addition to these image features, we also feed the machine learning model meteorological variables such as wind speed, temperature, relative humidity, and dew point. Using decision tree methods to classify dense fog conditions (i.e., visibility below 250 meters) on a training and evaluation set of 10-minute sampled images from two KNMI locations over a period of 1.5 years, we obtain promising results in terms of accuracy and type I and II errors. We are currently extending the approach to images obtained with traffic-monitoring cameras along highways. This is a first step toward an operational artificial intelligence application for automatic fog alarm signaling for public safety.
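A minimal sketch of such a pipeline, assuming OpenCV and scikit-learn; the feature set is an illustrative subset of those named above, and the thresholds and tree depth are placeholders rather than the values used in the study:

    import numpy as np
    import cv2
    from sklearn.tree import DecisionTreeClassifier

    def image_features(img_bgr):
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        edge_density = cv2.Canny(gray, 50, 150).mean() / 255.0  # fog blurs edges
        brightness = gray.mean() / 255.0
        dark_channel = img_bgr.min(axis=2)          # per-pixel min over B, G, R
        transmission = 1.0 - dark_channel.mean() / 255.0
        return [edge_density, brightness, transmission]

    # X: rows of [image features..., wind speed, temperature, RH, dew point];
    # y: 1 for dense fog (visibility below 250 m, from a reference sensor).
    def train_fog_classifier(X, y):
        clf = DecisionTreeClassifier(max_depth=5, class_weight="balanced")
        return clf.fit(X, y)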
An Illumination Modeling System for Human Factors Analyses
NASA Technical Reports Server (NTRS)
Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)
2002-01-01
Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on Earth, it is easily taken for granted. However, on orbit, because the sun rises or sets every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare are also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into the lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. The optimization of the quantity and quality of light is needed because of the effects on crew safety, on electrical power, and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and the Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human-factors-oriented geometric modeling system (PLAID), and an extensive database of humans and environments. Material reflectivity properties of major and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color, and intensity. Video camera performance is measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting and visibility conditions in space.
Development of a PET/Cerenkov-light hybrid imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Hamamura, Fuka; Kato, Katsuhiko
2014-09-15
Purpose: Cerenkov-light imaging is a new molecular imaging technology that detects visible photons from high-speed electrons using a high-sensitivity optical camera. However, the merit of Cerenkov-light imaging remains unclear. If a PET/Cerenkov-light hybrid imaging system were developed, the merit of Cerenkov-light imaging could be clarified by directly comparing these two imaging modalities. Methods: The authors developed and tested a PET/Cerenkov-light hybrid imaging system that consists of a dual-head PET system, a reflection mirror located above the subject, and a high-sensitivity charge coupled device (CCD) camera. The authors installed these systems inside a black box for imaging the Cerenkov light. The dual-head PET system employed 1.2 × 1.2 × 10 mm³ GSO crystals arranged in a 33 × 33 matrix that was optically coupled to a position-sensitive photomultiplier tube to form a GSO block detector. The authors arranged two GSO block detectors 10 cm apart and positioned the subject between them. The Cerenkov light above the subject is reflected by the mirror, changes its direction to the side of the PET system, and is imaged by the high-sensitivity CCD camera. Results: The dual-head PET system had a spatial resolution of ∼1.2 mm FWHM and a sensitivity of ∼0.31% at the center of the FOV. The Cerenkov-light imaging system's spatial resolution was ∼275 μm for a ²²Na point source. Using the combined PET/Cerenkov-light hybrid imaging system, the authors successfully obtained fused images from simultaneously acquired images. The image distributions are sometimes different due to the light transmission and absorption in the body of the subject in the Cerenkov-light images. In simultaneous imaging of a rat, the authors found that ¹⁸F-FDG accumulation was observed mainly in the Harderian gland on the PET image, while the distribution of Cerenkov light was observed in the eyes. Conclusions: The authors conclude that their PET/Cerenkov-light hybrid imaging system is useful to evaluate the merits and limitations of Cerenkov-light imaging in molecular imaging research.
Spitzer Spies Spectacular Sombrero
2005-05-04
NASA's Spitzer Space Telescope set its infrared eyes on one of the most famous objects in the sky, Messier 104, also called the Sombrero galaxy. In this striking infrared picture, Spitzer sees an exciting new view of a galaxy that in visible light has been likened to a "sombrero," but here looks more like a "bulls-eye." Recent observations using Spitzer's infrared array camera uncovered the bright, smooth ring of dust circling the galaxy, seen in red. In visible light, because this galaxy is seen nearly edge-on, only the near rim of dust can be clearly seen in silhouette. Spitzer's full view shows the disk is warped, which is often the result of a gravitational encounter with another galaxy, and clumpy areas spotted in the far edges of the ring indicate young star-forming regions. Spitzer's infrared view of the starlight from this galaxy, seen in blue, can pierce through obscuring murky dust that dominates in visible light. As a result, the full extent of the bulge of stars and an otherwise hidden disk of stars within the dust ring are easily seen. The Sombrero galaxy is located some 28 million light years away. Viewed from Earth, it is just six degrees south of its equatorial plane. Spitzer detected infrared emission not only from the ring, but from the center of the galaxy too, where there is a huge black hole, believed to be a billion times more massive than our Sun. This picture is composed of four images taken at 3.6 (blue), 4.5 (green), 5.8 (orange), and 8.0 (red) microns. The contribution from starlight (measured at 3.6 microns) has been subtracted from the 5.8 and 8-micron images to enhance the visibility of the dust features. http://photojournal.jpl.nasa.gov/catalog/PIA07899
Stellar 'Incubators' Seen Cooking up Stars
NASA Technical Reports Server (NTRS)
2005-01-01
[Figures 1-5 removed for brevity; see original site] This image composite compares visible-light and infrared views from NASA's Spitzer Space Telescope of the glowing Trifid Nebula, a giant star-forming cloud of gas and dust located 5,400 light-years away in the constellation Sagittarius. Visible-light images of the Trifid taken with NASA's Hubble Space Telescope, Baltimore, Md. (inside left, figure 1) and the National Optical Astronomy Observatory, Tucson, Ariz., (outside left, figure 1) show a murky cloud lined with dark trails of dust. Data of this same region from the Institute for Radioastronomy millimeter telescope in Spain revealed four dense knots, or cores, of dust (outlined by yellow circles), which are 'incubators' for embryonic stars. Astronomers thought these cores were not yet ripe for stars, until Spitzer spotted the warmth of rapidly growing massive embryos tucked inside. These embryos are indicated with arrows in the false-color Spitzer picture (right, figure 1), taken by the telescope's infrared array camera. The same embryos cannot be seen in the visible-light pictures (left, figure 1). Spitzer found clusters of embryos in two of the cores and only single embryos in the other two. This is one of the first times that multiple embryos have been observed in individual cores at this early stage of stellar development.
2009-04-16
ISS019-E-007253 (16 April 2009) --- Astronaut Michael Barratt, Expedition 19/20 flight engineer, performs Agricultural Camera (AgCam) setup and activation in the Destiny laboratory of the International Space Station. AgCam takes frequent images, in visible and infrared light, of vegetated areas on Earth, such as farmland, rangeland, grasslands, forests and wetlands in the northern Great Plains and Rocky Mountain regions of the United States. Images will be delivered directly to requesting farmers, ranchers, foresters, natural resource managers and tribal officials to help improve environmental stewardship.
Coaxial fundus camera for ophthalmology
NASA Astrophysics Data System (ADS)
de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.
2015-09-01
A fundus camera for ophthalmology is a high-definition device that must provide low-light illumination of the human retina, high resolution at the retina, and reflection-free images. Those constraints make its optical design very sophisticated, but the most difficult requirements to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by an LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.
Mosad and Stream Vision For A Telerobotic, Flying Camera System
NASA Technical Reports Server (NTRS)
Mandl, William
2002-01-01
Two full-custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible light sensing were built and demonstrated: a photo-gate sensor and a photo-diode sensor. Each system includes the camera assembly, a driver interface assembly, a frame grabber board with integrated decimator, and Windows 2000 compatible software for real-time image display. An array size of 320×240 with 16-micron pixel pitch was developed for compatibility with 0.3-inch CCTV optics. With 1.2-micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with the different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of the MOSAD for NASA missions, providing superior performance over CMOS, and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.
Adjustable long duration high-intensity point light source
NASA Astrophysics Data System (ADS)
Krehl, P.; Hagelweide, J. B.
1981-06-01
A new long duration high-intensity point light source with adjustable light duration and a small light spot locally stable in time has been developed. The principle involved is a stationary high-temperature plasma flow inside a partly constrained capillary of a coaxial spark gap, which is viewed end-on through a terminating Plexiglas window. The point-light spark gap is operated via a resistor by an artificial transmission line. Using two exchangeable inductance sets in the line, two ranges of photoduration, 10-130 μs and 100-600 μs, can be covered. For a light spot size of 1.5 mm diameter, the corresponding peak light output amounts to 5×10⁶ and 1.6×10⁶ candelas, respectively. Within these ranges the duration is controlled by an ignitron crowbar to extinguish the plasma. The adjustable photoduration is very useful for the application of continuous-writing rotating-mirror cameras, thus preventing multiple exposures. The essentially uniform exposure within the visible spectral range makes the new light source suitable for color cinematography.
2015-09-14
The night sides of Saturn and Tethys are dark places indeed. We know that shadows are darker areas than sunlit areas, and in space, with no air to scatter the light, shadows can appear almost totally black. Tethys (660 miles or 1,062 kilometers across) is just barely seen in the lower left quadrant of this image below the ring plane and has been brightened by a factor of three to increase its visibility. The wavy outline of Saturn's polar hexagon is visible at top center. This view looks toward the sunlit side of the rings from about 10 degrees above the ring plane. The image was taken with the Cassini spacecraft wide-angle camera on Jan. 15, 2015 using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 752 nanometers. The view was obtained at a distance of approximately 1.5 million miles (2.4 million kilometers) from Saturn. Image scale is 88 miles (141 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18333
2015-01-05
What's that bright point of light in the outer A ring? It's a star, bright enough to be visible through the ring! Quick, make a wish! This star -- seen in the lower right quadrant of the image -- was not captured by coincidence, it was part of a stellar occultation. By monitoring the brightness of stars as they pass behind the rings, scientists using this powerful observation technique can inspect detailed structures within the rings and how they vary with location. This view looks toward the sunlit side of the rings from about 44 degrees above the ringplane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Oct. 8, 2013. The view was acquired at a distance of approximately 1.1 million miles (1.8 million kilometers) from the rings and at a Sun-Rings-Spacecraft, or phase, angle of 96 degrees. Image scale is 6.8 miles (11 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18297
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
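The paper's variational alignment is involved; the sketch below is a heavily simplified stand-in (all names hypothetical) that captures the core idea of matching motion rather than intensity: compute dense optical flow in each modality, pair pixels whose flow vectors agree, and fit a robust affine map from the RGB frame to the IR frame:

    import numpy as np
    import cv2

    def dense_flow(a, b):
        return cv2.calcOpticalFlowFarneback(a, b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

    def align_rgb_ir(rgb0, rgb1, ir0, ir1, step=8, win=16):
        f_rgb = dense_flow(rgb0, rgb1)     # (H, W, 2) flow, RGB camera
        f_ir = dense_flow(ir0, ir1)        # (H, W, 2) flow, IR camera
        h, w = f_ir.shape[:2]
        src, dst = [], []
        for y in range(win, f_rgb.shape[0] - win, step):
            for x in range(win, f_rgb.shape[1] - win, step):
                v = f_rgb[y, x]
                if np.linalg.norm(v) < 1.0:      # skip static pixels
                    continue
                # Best-matching flow vector in a local IR search window.
                y0, y1 = max(0, y - win), min(h, y + win)
                x0, x1 = max(0, x - win), min(w, x + win)
                err = np.linalg.norm(f_ir[y0:y1, x0:x1] - v, axis=2)
                iy, ix = np.unravel_index(np.argmin(err), err.shape)
                src.append([x, y])
                dst.append([x0 + ix, y0 + iy])
        A, _ = cv2.estimateAffine2D(np.float32(src), np.float32(dst),
                                    method=cv2.RANSAC)
        return A                           # 2x3 affine map, RGB -> IR pixels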
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image features, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
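A minimal sketch of a wHOG-style descriptor, assuming scikit-image; the quality weight here is a crude contrast proxy standing in for the paper's quality-assessment measure, and all names are illustrative:

    import numpy as np
    from skimage.feature import hog

    def weighted_hog(visible, thermal):
        # visible, thermal: grayscale body-region images of equal size.
        feats = []
        for img in (visible, thermal):
            f = hog(img, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2), feature_vector=True)
            quality = img.std() / (img.mean() + 1e-6)  # stand-in quality score
            feats.append(f * quality)
        # Concatenate the quality-weighted features of both modalities.
        return np.concatenate(feats)

The combined vector can then be fed to any standard classifier (e.g., an SVM) for the gender-recognition step.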
Continuous All-Sky Cloud Measurements: Cloud Fraction Analysis Based on a Newly Developed Instrument
NASA Astrophysics Data System (ADS)
Aebi, C.; Groebner, J.; Kaempfer, N.; Vuilleumier, L.
2017-12-01
Clouds play an important role in the climate system and are a crucial parameter for the Earth's surface energy budget. Ground-based measurements of clouds provide data at high temporal resolution in order to quantify their influence on radiation. The newly developed all-sky cloud camera at PMOD/WRC in Davos (Switzerland), the infrared cloud camera (IRCCAM), is a microbolometer sensitive in the 8-14 μm wavelength range. To obtain all-sky information, the camera is located on top of a frame looking downward onto a spherical gold-plated mirror. The IRCCAM has been measuring continuously (day and night) with a time resolution of one minute in Davos since September 2015. To assess the performance of the IRCCAM, two different visible all-sky cameras (Mobotix Q24M and Schreder VIS-J1006), which can only operate during daytime, are installed in Davos. All three camera systems have different software for calculating fractional cloud coverage from images. Our study mainly analyzes the fractional cloud coverage of the IRCCAM and compares it with the fractional cloud coverage calculated from the two visible cameras. Preliminary results of the measurement accuracy of the IRCCAM compared to the visible cameras indicate that 78% of the data are within ±1 octa and 93% within ±2 octas. An uncertainty of 1-2 octas corresponds to the measurement uncertainty of human observers. Therefore, the IRCCAM shows performance in cloud coverage detection similar to the visible cameras and human observers, with the advantage that continuous measurements with high temporal resolution are possible.
Vacuum-Compatible Wideband White Light and Laser Combiner Source System
NASA Technical Reports Server (NTRS)
Azizi, Alineza; Ryan, Daniel J.; Tang, Hong; Demers, Richard T.; Kadogawa, Hiroshi; An, Xin; Sun, George Y.
2010-01-01
For the Space Interferometry Mission (SIM) Spectrum Calibration Development Unit (SCDU) testbed, wideband white light is used to simulate starlight. The white light source mount requires extremely stable pointing accuracy (<3.2 microradians). To meet this and other needs, laser light from a single-mode fiber was combined, through a beam-splitter window with a special coating for broadband wavelengths, with light from a multimode fiber. Both were coupled into a photonic crystal fiber (PCF). In many optical systems, simulating a point star with a broadband spectrum and microradian stability for white light interferometry is a challenge. In this case, the cameras use the white light interference to balance two optical paths and to maintain close tracking. In order to coarsely align the optical paths, a laser light is sent into the system to allow tracking of fringes, because a narrow-band laser, with its longer coherence length, produces fringes over a much greater range. The design requirements forced the innovators to use a new type of optical fiber and to take great care in aligning the input sources. The testbed required better than 1% throughput, or enough output power at the weakest part of the spectrum to be detectable by the CCD camera (6 nW at the camera). The system needed to be vacuum-compatible and capable of combining a visible laser light at any time for calibration purposes. The red laser is a commercially produced 635-nm, 5-mW laser diode, and the white light source is a commercially produced tungsten halogen lamp that gives a broad spectrum of about 525 to 800 nm full width at half maximum (FWHM), with about 1.4 mW of power at 630 nm. A custom-made beam-splitter window with a special coating for broadband wavelengths is used with the white light input via a 50-μm multimode fiber. The large-mode-area PCF is an LMA-8 made by Crystal Fibre (core diameter of 8.5 μm, mode field diameter of 6 μm, and numerical aperture at 625 nm of 0.083). Any science interferometer that needs a tracking laser fringe to assist in alignment can use this system.
Optical design and stray light analysis for the JANUS camera of the JUICE space mission
NASA Astrophysics Data System (ADS)
Greggio, D.; Magrin, D.; Munari, M.; Zusi, M.; Ragazzoni, R.; Cremonese, G.; Debei, S.; Friso, E.; Della Corte, V.; Palumbo, P.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Schmitz, N.; Schipani, P.; Lara, L. M.
2015-09-01
The JUICE (JUpiter ICy moons Explorer) satellite of the European Space Agency (ESA) is dedicated to the detailed study of Jupiter and its moons. Among the whole instrument suite, JANUS (Jovis, Amorum ac Natorum Undique Scrutator) is the camera system of JUICE designed for imaging at visible wavelengths. It will conduct an in-depth study of Ganymede, Callisto and Europa, and explore most of the Jovian system and Jupiter itself, performing, in the case of Ganymede, a global mapping of the satellite with a resolution of 400 m/px. The optical design chosen to meet the scientific goals of JANUS is a three mirror anastigmatic system in an off-axis configuration. To ensure that the achieved contrast is high enough to observe the features on the surface of the satellites, we also performed a preliminary stray light analysis of the telescope. We provide here a short description of the optical design and we present the procedure adopted to evaluate the stray-light expected during the mapping phase of the surface of Ganymede. We also use the results obtained from the first run of simulations to optimize the baffle design.
Calibration of imaging parameters for space-borne airglow photography using city light positions
NASA Astrophysics Data System (ADS)
Hozumi, Yuta; Saito, Akinori; Ejiri, Mitsumu K.
2016-09-01
A new method for calibrating imaging parameters of photographs taken from the International Space Station (ISS) is presented in this report. Airglow in the mesosphere and the F-region ionosphere was captured on the limb of the Earth with a digital single-lens reflex camera from the ISS by astronauts. To utilize the photographs as scientific data, imaging parameters, such as the angle of view, exact position, and orientation of the camera, should be determined because they are not measured at the time of imaging. A new calibration method using city light positions shown in the photographs was developed to determine these imaging parameters with an accuracy suitable for airglow study. Applying the pinhole camera model, the apparent city light positions on the photograph are matched with the actual city light locations on Earth, which are derived from the global nighttime stable light map data obtained by the Defense Meteorological Satellite Program satellite. The correct imaging parameters are determined in an iterative process by matching the apparent positions on the image with the actual city light locations. We applied this calibration method to photographs taken on August 26, 2014, and confirmed that the result is correct. The precision of the calibration was evaluated by comparing the results from six different photographs with the same imaging parameters. The precisions in determining the camera position and orientation are estimated to be ±2.2 km and ±0.08°, respectively. The 0.08° difference in the orientation yields a 2.9-km difference at a tangential point of 90 km in altitude. The airglow structures in the photographs were mapped to geographical points using the calibrated imaging parameters and compared with a simultaneous observation by the Visible and near-Infrared Spectral Imager of the Ionosphere, Mesosphere, Upper Atmosphere, and Plasmasphere mapping mission installed on the ISS. The comparison shows good agreement and supports the validity of the calibration. This calibration technique makes it possible to utilize photographs taken on low-Earth-orbit satellites in the nighttime as a reference for airglow and aurora structures.
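A condensed sketch of that iterative fit, assuming SciPy; names are hypothetical, the image origin is taken at the principal point, and the paper's full treatment of Earth geometry and lens distortion is omitted:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(params, pts_ecef):
        # params: [x, y, z, rx, ry, rz, f] -- camera position (m, ECEF),
        # world-to-camera rotation vector, focal length (pixels).
        cam_pos, rotvec, f = params[:3], params[3:6], params[6]
        R = Rotation.from_rotvec(rotvec).as_matrix()
        p_cam = (pts_ecef - cam_pos) @ R.T        # rotate into camera frame
        return f * p_cam[:, :2] / p_cam[:, 2:3]   # pinhole projection

    def calibrate(city_pixels, city_ecef, x0):
        # Minimize the reprojection residual between observed city-light
        # pixel positions and the projected known city locations.
        def residual(p):
            return (project(p, city_ecef) - city_pixels).ravel()
        return least_squares(residual, x0).x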
Impact Site: Cassini's Final Image
2017-09-15
This monochrome view is the last image taken by the imaging cameras on NASA's Cassini spacecraft. It looks toward the planet's night side, lit by reflected light from the rings, and shows the location at which the spacecraft would enter the planet's atmosphere hours later. A natural color view, created using images taken with red, green and blue spectral filters, is also provided (Figure 1). The imaging cameras obtained this view at approximately the same time that Cassini's visual and infrared mapping spectrometer made its own observations of the impact area in the thermal infrared. This location -- the site of Cassini's atmospheric entry -- was at this time on the night side of the planet, but would rotate into daylight by the time Cassini made its final dive into Saturn's upper atmosphere, ending its remarkable 13-year exploration of Saturn. The view was acquired on Sept. 14, 2017 at 19:59 UTC (spacecraft event time). The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 394,000 miles (634,000 kilometers) from Saturn. Image scale is about 11 miles (17 kilometers). The original image has a size of 512x512 pixels. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21895
NASA Astrophysics Data System (ADS)
Hosono, Satsuki; Kawashima, Natsumi; Wollherr, Dirk; Ishimaru, Ichiro
2016-05-01
Distributed networks that collect information on chemical components using high-mobility platforms, such as drones or smartphones, would work effectively for investigating, clarifying, and predicting unexpected local terrorist attacks and disasters such as localized torrential downpours. We previously proposed and reported a spectroscopic line-imager for smartphones at this conference. In this paper, we describe wide-area spectroscopic-image construction by estimating 6 DOF (Degrees Of Freedom: translations x, y, z and rotations θx, θy, θz) from line data, in order to observe and analyze the surrounding chemical environment. Recently, smartphone movies photographed by people who happened to be at the scene have worked effectively for analyzing what kind of phenomenon occurred there. But when a gas tank suddenly blew up, visible-light RGB cameras could not reveal what kinds of chemical gas components were polluting the surrounding atmosphere. Fourier spectroscopy has conventionally been well known for chemical component analysis in laboratory use, but volatile gases should be analyzed promptly at accident sites. Because humidity absorption in the near- and middle-infrared is highly sensitive, we should be able to detect humidity in the sky from wide-field spectroscopic images. Recently, 6-DOF sensors have also become readily usable for position and attitude estimation on UAVs (Unmanned Air Vehicles) and smartphones. But for observing long-distance views, the accuracy of angle measurements is not sufficient to merge line data, because small angular errors are amplified by leverage. Thus, by searching for corresponding pixels between line spectroscopic images, we are attempting to estimate the 6 DOF with high accuracy.
The First Light of the Subaru Laser Guide Star Adaptive Optics System
NASA Astrophysics Data System (ADS)
Takami, H.; Hayano, Y.; Oya, S.; Hattori, M.; Watanabe, M.; Guyon, O.; Eldred, M.; Colley, S.; Saito, Y.; Itoh, M.; Dinkins, M.
Subaru Telescope has been operating a 36-element curvature-sensor AO system at the Cassegrain focus since 2000. We have developed a new AO system for the Nasmyth focus with a 188-element curvature wavefront sensor and a bimorph deformable mirror; it is the largest-format system of this sensor type. The deformable mirror also has 188 elements, with a 90 mm effective aperture and a 130 mm blank size. The real-time controller is a 4-CPU real-time Linux computer, and the update rate is now 1.5 kHz. The AO system also has a laser guide star system. The laser is a sum-frequency solid-state laser generating 589 nm light. We have achieved 4.7 W output power with excellent beam quality (M² = 1.1) and good stability. The laser is installed in a clean room on the Nasmyth platform, and the beam is transferred by a 35-m photonic crystal fiber to the 50 cm laser launching telescope mounted behind the Subaru secondary mirror. The field of view of the low-order wavefront sensor for the tip-tilt guide star in LGS mode is 2.7 arcmin in diameter. The AO system had first light with a natural guide star in October 2006. The Strehl ratio was >0.5 at K band under 0.8 arcsec visible seeing. We also projected the laser beam on the sky during the same engineering run. Three instruments will be used with the AO system: the infrared camera and spectrograph (IRCS), the high-dynamic-range IR camera (HiCIAO) for exosolar planet detection, and a visible 3D spectrograph.
NASA Astrophysics Data System (ADS)
Jylhä, Juha; Marjanen, Kalle; Rantala, Mikko; Metsäpuro, Petri; Visa, Ari
2006-09-01
Surveillance camera automation and camera network development are growing areas of interest. This paper proposes an efficient approach to enhancing camera surveillance with Geographic Information Systems (GIS) when the camera is located at a height of 10-1000 m. A digital elevation model (DEM), a terrain class model, and a flight obstacle register constitute the exploited auxiliary information. The approach takes into account the spherical shape of the Earth and realistic terrain slopes and, considering also forests, determines visible and shadowed regions. Its efficiency arises from the reduced dimensionality of the visibility computation. Image processing is aided by predicting certain features of the visible terrain in advance. The features include distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background is well-fitting, and the potential of knowledge-aiding for various purposes becomes apparent.
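A minimal line-of-sight sketch over a DEM grid, including the spherical-Earth correction the paper mentions; names and the interpolation scheme are illustrative, and the terrain-class and forest handling are omitted:

    import numpy as np

    R_EARTH = 6_371_000.0  # metres

    def line_of_sight(dem, cam_rc, cam_h, tgt_rc, cell=10.0):
        # dem: 2-D elevation grid (m); cell: grid spacing (m); cam_h:
        # camera height above its DEM cell. The target is visible if its
        # elevation angle exceeds that of all intervening terrain, after
        # applying the curvature drop d^2 / (2 R) at distance d.
        (r0, c0), (r1, c1) = cam_rc, tgt_rc
        n = int(max(abs(r1 - r0), abs(c1 - c0)))
        cam_z = dem[r0, c0] + cam_h
        total = np.hypot(r1 - r0, c1 - c0) * cell
        max_slope = -np.inf
        for i in range(1, n + 1):
            t = i / n
            r = int(round(r0 + t * (r1 - r0)))
            c = int(round(c0 + t * (c1 - c0)))
            d = t * total
            z = dem[r, c] - d * d / (2.0 * R_EARTH)
            slope = (z - cam_z) / d
            if i == n:
                return slope >= max_slope   # the target cell itself
            max_slope = max(max_slope, slope)
        return False                        # camera and target coincide

Sweeping such rays over all azimuths yields the visible and shadowed regions; the paper's reduced-dimensionality formulation computes this far more efficiently.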
NASA Technical Reports Server (NTRS)
2005-01-01
This view shows the unlit face of Saturn's rings, visible via scattered and transmitted light. In these views, dark regions represent gaps and areas of higher particle densities, while brighter regions are filled with less dense concentrations of ring particles. The dim right side of the image contains nearly the entire C ring. The brighter region in the middle is the inner B ring, while the darkest part represents the dense outer B ring. The Cassini Division and the innermost part of the A ring are at the upper-left. Saturn's shadow carves a dark triangle out of the lower right corner of this image. The image was taken in visible light with the Cassini spacecraft wide-angle camera on June 8, 2005, at a distance of approximately 433,000 kilometers (269,000 miles) from Saturn. The image scale is 22 kilometers (14 miles) per pixel. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .
NASA Technical Reports Server (NTRS)
2006-01-01
This false-color composite image shows the Cartwheel galaxy as seen by the Galaxy Evolution Explorer's far ultraviolet detector (blue); the Hubble Space Telescope's wide field and planetary camera 2 in B-band visible light (green); the Spitzer Space Telescope's infrared array camera at 8 microns (red); and the Chandra X-ray Observatory's advanced CCD imaging spectrometer-S array instrument (purple). Approximately 100 million years ago, a smaller galaxy plunged through the heart of the Cartwheel galaxy, creating ripples of brief star formation. In this image, the first ripple appears as an ultraviolet-bright blue outer ring. The blue outer ring is so powerful in the Galaxy Evolution Explorer observations that it indicates the Cartwheel is one of the most powerful UV-emitting galaxies in the nearby universe. The blue color reveals to astronomers that associations of stars 5 to 20 times as massive as our sun are forming in this region. The clumps of pink along the outer blue ring are regions where both X-rays and ultraviolet radiation are superimposed in the image. These X-ray point sources are very likely collections of binary star systems containing a black hole (called massive X-ray binary systems). The X-ray sources seem to cluster around optical/ultraviolet-bright supermassive star clusters. The yellow-orange inner ring and nucleus at the center of the galaxy result from the combination of visible and infrared light, which is stronger towards the center. This region of the galaxy represents the second ripple, or ring wave, created in the collision, but has much less star formation activity than the first (outer) ring wave. The wisps of red spread throughout the interior of the galaxy are organic molecules that have been illuminated by nearby low-level star formation. Meanwhile, the tints of green are less massive, older visible-light stars. Although astronomers have not identified exactly which galaxy collided with the Cartwheel, two of three candidate galaxies can be seen in this image to the bottom left of the ring, one as a neon blob and the other as a green spiral. Previously, scientists believed the ring marked the outermost edge of the galaxy, but the latest GALEX observations detect a faint disk, not visible in this image, that extends to twice the diameter of the ring.
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 × 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e⁻) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications that require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Rodríguez-Esparragón, Dionisio; Perez-Jimenez, Rafael
2017-07-04
Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources on Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the different transmitted wavelengths. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR), for easily determining the interference in other implementations independently of the selected system devices, has also been proposed. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general-purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters, in terms of distance and wavelength used, can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results to real scenarios.
Broadly available imaging devices enable high-quality low-cost photometry.
Christodouleas, Dionysios C; Nemiroski, Alex; Kumar, Ashok A; Whitesides, George M
2015-09-15
This paper demonstrates that, for applications in resource-limited environments, expensive microplate spectrophotometers that are used in many central laboratories for parallel measurement of absorbance of samples can be replaced by photometers based on inexpensive and ubiquitous, consumer electronic devices (e.g., scanners and cell-phone cameras). Two devices, (i) a flatbed scanner operating in transmittance mode and (ii) a camera-based photometer (constructed from a cell phone camera, a planar light source, and a cardboard box), demonstrate the concept. These devices illuminate samples in microtiter plates from one side and use the RGB-based imaging sensors of the scanner/camera to measure the light transmitted to the other side. The broadband absorbance of samples (RGB-resolved absorbance) can be calculated using the RGB color values of only three pixels per microwell. Rigorous theoretical analysis establishes a well-defined relationship between the absorbance spectrum of a sample and its corresponding RGB-resolved absorbance. The linearity and precision of measurements performed with these low-cost photometers on different dyes, which absorb across the range of the visible spectrum, and chromogenic products of assays (e.g., enzymatic, ELISA) demonstrate that these low-cost photometers can be used reliably in a broad range of chemical and biochemical analyses. The ability to perform accurate measurements of absorbance on liquid samples, in parallel and at low cost, would enable testing, typically reserved for well-equipped clinics and laboratories, to be performed in circumstances where resources and expertise are limited.
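The per-well arithmetic is compact enough to show directly; a sketch (hypothetical names) of the RGB-resolved absorbance computed channel-wise from a few pixels of a sample well and a blank well, following the Beer-Lambert form A_c = -log10(I_sample,c / I_blank,c):

    import numpy as np

    def rgb_absorbance(sample_px, blank_px):
        # sample_px, blank_px: (N, 3) arrays of RGB values sampled from
        # a sample-filled well and a blank (solvent-only) well; the paper
        # notes that as few as three pixels per microwell suffice.
        I_s = np.asarray(sample_px, float).mean(axis=0)
        I_b = np.asarray(blank_px, float).mean(axis=0)
        return -np.log10(I_s / I_b)   # one absorbance per RGB channel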
Research on range-gated laser active imaging seeker
NASA Astrophysics Data System (ADS)
You, Mu; Wang, PengHui; Tan, DongJie
2013-09-01
Compared with other imaging methods such as millimeter-wave imaging, infrared imaging, and visible light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, the latter being the most important data for autonomous target acquisition. In terms of application, it can be widely used in military fields such as radar, guidance, and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system presented here consists of two important parts. One is the laser imaging system, which uses a negative lens to diverge the light from a pulsed laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The other is the stabilization gimbal, designed as a structure rotatable in both azimuth and elevation. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on a diode-pumped solid-state laser that is passively Q-switched at a 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths below 900 nm. The receiver is the image intensifier tube, whose microchannel plate output is coupled to a high-sensitivity charge-coupled-device camera. Images have been taken at ranges over one kilometer and can be taken at much longer ranges in better weather. The image frame rate can be changed according to the requirements of guidance, with a modifiable range gate. The instantaneous field of view of the system was found to be 2×2 deg. Since the completion of system integration, the seeker system has gone through a series of tests both in the lab and in the outdoor field. Two different buildings were chosen as targets, located at ranges from 200 m up to 1000 m. To simulate the dynamic change of range between missile and target, the seeker system was placed on a truck running along a road at a prescribed speed. The test results show qualified images and good performance of the seeker system.
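The gating arithmetic behind "each laser pulse triggers the camera delay and shutter" is plain time-of-flight bookkeeping; a small sketch with illustrative numbers:

    C = 299_792_458.0  # speed of light, m/s

    def gate_timing(range_m, depth_m):
        # Gate delay: round-trip time to the near edge of the range slice.
        # Gate width: round-trip time across the desired slice depth.
        delay_s = 2.0 * range_m / C
        width_s = 2.0 * depth_m / C
        return delay_s, width_s

    # A target at 1 km with a 30 m deep slice: delay ~6.67 us, width ~0.2 us.
    print(gate_timing(1000.0, 30.0))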
NASA Astrophysics Data System (ADS)
Mahmood, Usama; Dehdari, Reza; Cerussi, Albert; Nguyen, Quoc; Kelley, Timothy; Tromberg, Bruce J.; Wong, Brian J.
2005-04-01
Though sinusitis is a significant health problem, it remains a challenging diagnosis for many physicians mainly because of its vague, non-specific symptomology. As such, physicians must often rely on x-rays and CT, which are not only costly but also expose the patient to ionizing radiation. As an alternative to these methods of diagnosis, our laboratory constructed a near infrared (NIR) transillumination system to image the paranasal maxillary sinuses. In contrast to the more conventional form of transillumination, which uses visible light, NIR transillumination uses light with a longer wavelength which is less attenuated by soft tissues, allowing increased signal intensity and tissue penetration. Our NIR transillumination system is low-cost, consisting of a light source containing two series of light emitting diodes, which give off light at wavelengths of 810 nm and 850 nm, and a charge coupled device (CCD) camera sensitive to NIR light. The light source is simply placed in the patient's mouth and the resultant image created by the transmittance of NIR light is captured with the CCD camera via notebook PC. Using this NIR transillumination system, we imaged the paranasal maxillary sinuses of both healthy patients (n=5) and patients with sinus disease (n=12) and compared the resultant findings with conventional CT scans. We found that air and fluid/tissue-filled spaces can be reasonably distinguished by their differing NIR opacities. Based on these findings, we believe NIR transillumination of the paranasal sinuses may provide a simple, safe, and cost effective modality in the diagnosis and management of sinus disease.
NASA Technical Reports Server (NTRS)
Andrews, Jane C.; Knowlton, Kelly
2007-01-01
Light pollution has significant adverse biological effects on humans, animals, and plants and has resulted in the loss of our ability to view the stars and planets of the universe. Over half of the U.S. population resides in coastal regions where it is no longer possible to see the stars and planets in the night sky. Forty percent of the entire U.S. population is never exposed to conditions dark enough for their eyes to convert to night vision capabilities. In coastal regions, urban lights shine far out to sea where they are augmented by the output from fishing boat, cruise ship and oil platform floodlights. The proposed candidate solution suggests using HSCs (high sensitivity cameras) onboard the SAC-C and Aquarius/SAC-D satellites to quantitatively evaluate light pollution at high spatial resolution. New products modeled after pre-existing, radiance-calibrated, global nighttime lights products would be integrated into a modified Garstang model where elevation, mountain screening, Rayleigh scattering, Mie scattering by aerosols, and atmospheric extinction along light paths and curvature of the Earth would be taken into account. Because the spatial resolution of the HSCs on SAC-C and the future Aquarius/SAC-D missions is greater than that provided by the DMSP (Defense Meteorological Satellite Program) OLS (Operational Linescan System) or VIIRS (Visible/Infrared Imager/Radiometer Suite), it may be possible to obtain more precise light intensity data for analytical DSSs and the subsequent reduction in coastal light pollution.
Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera
NASA Astrophysics Data System (ADS)
Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.
2016-08-01
Visibility and clarity of remotely sensed images acquired by consumer grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles; caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow range bands of the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or assist search and rescue or similar applications which require high resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution using a single modified DSLR camera in conjunction with image processing techniques which effectively improves the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G and near infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Processed data using our proposed method shows significant visibility improvements compared with other existing solutions.
A visible light imaging device for cardiac rate detection with reduced effect of body movement
NASA Astrophysics Data System (ADS)
Jiang, Xiaotian; Liu, Ming; Zhao, Yuejin
2014-09-01
A visible light imaging system to detect the human cardiac rate is proposed in this paper. A color camera and several LEDs, acting as the lighting source, were used to avoid interference from ambient light. The cardiac rate could be acquired from the subject's forehead based on photoplethysmography (PPG) theory. The template matching method was used after the capture of video. The video signal was decomposed into three signal channels (RGB), and a region of interest was chosen in which to take the average gray value. The green channel signal provided an excellent pulse waveform owing to blood's strong absorption of green light. Through the fast Fourier transform, the cardiac rate was obtained accurately. The goal, however, was not only accurate cardiac rate measurement. With the template matching method, the effects of body movement are reduced to a large extent, so the pulse wave can be detected even while the subject is moving, and the waveform is greatly improved. Several experiments were conducted on volunteers, and the results were compared with those obtained by a finger-clamped pulse oximeter. The results of the two methods agree closely. This method of detecting the cardiac rate and pulse wave largely reduces the effects of body movement and could be widely used in the future.
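The green-channel pipeline described here (ROI averaging followed by a Fourier transform) is compact enough to sketch. In the snippet below, the array shapes, the forehead ROI, and the plausible heart-rate band are illustrative assumptions, not the authors' code:

```python
# Hedged sketch of a green-channel PPG heart-rate estimate.
import numpy as np

def cardiac_rate_from_video(frames, fps, roi):
    """frames: (T, H, W, 3) RGB video; roi: (y0, y1, x0, x1) on the forehead.
    Returns an estimated heart rate in beats per minute."""
    y0, y1, x0, x1 = roi
    # Average green-channel intensity inside the ROI for each frame
    g = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))
    g = g - g.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(g))     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    # Restrict to a physiologically plausible band (0.7-3 Hz ~ 42-180 bpm)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```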
Confocal non-line-of-sight imaging based on the light-cone transform.
O'Toole, Matthew; Lindell, David B; Wetzstein, Gordon
2018-03-15
How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
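The light-cone transform is what makes this inversion tractable: after resampling the time axis (a change of variables that maps travel time to depth), the confocal measurement becomes a shift-invariant 3D convolution, which can be inverted in the frequency domain. The sketch below shows only that final deconvolution step; the resampled volume `tau` and forward kernel `psf` are assumed inputs, and the Wiener-style regularization is an illustrative choice, not the paper's exact formulation:

```python
# Heavily simplified sketch of the LCT reconstruction idea.
import numpy as np

def lct_reconstruct(tau, psf, snr=1e2):
    """tau: resampled measurement volume (z, y, x); psf: forward kernel."""
    H = np.fft.fftn(psf, s=tau.shape)           # transfer function
    Tau = np.fft.fftn(tau)
    # Wiener deconvolution: stabilized inverse filter
    F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    vol = np.real(np.fft.ifftn(F * Tau))
    return np.clip(vol, 0, None)                # albedo is non-negative
```

Because the heavy lifting is three FFTs, the memory and compute costs scale near-linearly with the volume size, which is the practical advantage the abstract highlights over earlier backprojection methods.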
The Near-Earth Object Camera: A Next-Generation Minor Planet Survey
NASA Astrophysics Data System (ADS)
Mainzer, Amy K.; Wright, Edward L.; Bauer, James; Grav, Tommy; Cutri, Roc M.; Masiero, Joseph; Nugent, Carolyn R.
2015-11-01
The Near-Earth Object Camera (NEOCam) is a next-generation asteroid and comet survey designed to discover, characterize, and track large numbers of minor planets using a 50 cm infrared telescope located at the Sun-Earth L1 Lagrange point. Proposed to NASA's Discovery program, NEOCam is designed to carry out a comprehensive inventory of the small bodies in the inner regions of our solar system. It addresses three themes: 1) quantify the potential hazard that near-Earth objects may pose to Earth; 2) study the origins and evolution of our solar system as revealed by its small body populations; and 3) identify the best destinations for future robotic and human exploration. With a dual channel infrared imager that observes at 4-5 and 6-10 micron bands simultaneously through the use of a beamsplitter, NEOCam enables measurements of asteroid diameters and thermal inertia. NEOCam complements existing and planned visible light surveys in terms of orbital element phase space and wavelengths, since albedos can be determined for objects with both visible and infrared flux measurements. NEOCam was awarded technology development funding in 2011 to mature the necessary megapixel infrared detectors.
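The complementarity with visible-light surveys rests on the standard relation between an asteroid's absolute visible magnitude H, geometric albedo p_V, and diameter D in km, D = 1329 p_V^(-1/2) 10^(-H/5): a thermal-infrared flux constrains D, so adding visible photometry yields the albedo. A minimal illustration of that bookkeeping:

```python
# Standard H / albedo / diameter relation for asteroids.
import math

def diameter_km(H, p_V):
    """Diameter in km from absolute magnitude H and geometric albedo p_V."""
    return 1329.0 / math.sqrt(p_V) * 10 ** (-H / 5.0)

def albedo(H, D_km):
    """Geometric albedo implied by H and an IR-derived diameter."""
    return (1329.0 / D_km * 10 ** (-H / 5.0)) ** 2

# e.g., H = 18 with an IR-derived diameter of 1.0 km implies p_V ~ 0.11
print(round(albedo(18.0, 1.0), 2))
```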
NASA Technical Reports Server (NTRS)
2005-01-01
Saturn poses with Tethys in this Cassini view. The C ring casts thin, string-like shadows on the northern hemisphere. Above that lurks the shadow of the much denser B ring. Cloud bands in the atmosphere are subtly visible in the south. Tethys is 1,071 kilometers (665 miles) across. Cassini will perform a close flyby of Tethys on September 24, 2005. The image was taken on June 10, 2005, in visible green light with the Cassini spacecraft wide-angle camera at a distance of approximately 1.4 million kilometers (900,000 miles) from Saturn. The image scale is 81 kilometers (50 miles) per pixel. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov. The Cassini imaging team homepage is at http://ciclops.org.
In-vessel visible inspection system on KSTAR
NASA Astrophysics Data System (ADS)
Chung, Jinil; Seo, D. C.
2008-08-01
To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korean superconducting tokamak advanced research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operation condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with the images of the KSTAR Ohmic discharges during the first plasma campaign.
Calibration and verification of thermographic cameras for geometric measurements
NASA Astrophysics Data System (ADS)
Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.
2011-03-01
Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for thermal data, with black bodies used for these purposes. Moreover, geometric information is important for many fields, such as architecture, civil engineering and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five Delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material property because both the thermographic and visible cameras are able to detect it. Two thermographic cameras from the manufacturers Flir and Nec, and one visible camera from Jai, are calibrated, verified and compared using calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher for the visible camera, and the geometric comparison between thermographic cameras shows slightly better results for the Nec camera.
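Once the centroids of the burning-lamp grid have been located in a set of thermal images, the internal-orientation step reduces to standard photogrammetric camera calibration. The sketch below uses OpenCV's solver as a stand-in for the authors' processing; the grid geometry and detected points are assumed inputs:

```python
# Hedged sketch of geometric (internal-orientation) calibration from a
# planar lamp grid, using OpenCV's standard solver.
import numpy as np
import cv2

def calibrate_thermal(image_points_list, grid_shape, spacing_mm, image_size):
    """image_points_list: per-image (N, 2) float32 arrays of lamp centroids.
    grid_shape: (nx, ny) lamps; spacing_mm: lamp pitch; image_size: (w, h)."""
    nx, ny = grid_shape
    # 3D coordinates of the planar lamp grid (z = 0 plane)
    obj = np.zeros((nx * ny, 3), np.float32)
    obj[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2) * spacing_mm
    obj_points = [obj] * len(image_points_list)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, image_points_list, image_size, None, None)
    return rms, K, dist   # reprojection error, intrinsics, distortion
```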
Design and Preliminary Testing of the International Docking Adapter's Peripheral Docking Target
NASA Technical Reports Server (NTRS)
Foster, Christopher W.; Blaschak, Johnathan; Eldridge, Erin A.; Brazzel, Jack P.; Spehar, Peter T.
2015-01-01
The International Docking Adapter's Peripheral Docking Target (PDT) was designed to allow a docking spacecraft to judge its alignment relative to the docking system. The PDT was designed to be compatible with relative sensors using visible cameras, thermal imagers, or Light Detection and Ranging (LIDAR) technologies. The conceptual design team tested prototype designs and materials to determine the contrast requirements for the features. This paper will discuss the design of the PDT, the methodology and results of the tests, and the conclusions pertaining to PDT design that were drawn from testing.
2017-09-15
This view of Saturn's A ring features a lone "propeller" -- one of many such features created by small moonlets embedded in the rings as they attempt, unsuccessfully, to open gaps in the ring material. The image was taken by NASA's Cassini spacecraft on Sept. 13, 2017. It is among the last images Cassini sent back to Earth. The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 420,000 miles (676,000 kilometers) from Saturn. Image scale is 2.3 miles (3.7 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21894
2018-02-05
In this view, Saturn's icy moon Rhea passes in front of Titan as seen by NASA's Cassini spacecraft. Some of the differences between the two large moons are readily apparent. While Rhea is a heavily-cratered, airless world, Titan's nitrogen-rich atmosphere is even thicker than Earth's. This natural color image was taken in visible light with the Cassini narrow-angle camera on Nov. 19, 2009, at a distance of approximately 713,300 miles (1,148,000 kilometers) from Rhea. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21904
Development of high energy micro-tomography system at SPring-8
NASA Astrophysics Data System (ADS)
Uesugi, Kentaro; Hoshino, Masato
2017-09-01
A high energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available energy range is between 20 keV and 113 keV with a Si (511) double crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of a visible light conversion system and an sCMOS camera. The effective pixel size is variable in discrete steps between 6.5 μm/pixel and 25.5 μm/pixel by changing a tandem lens. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, an alkaline battery and a fossil-bearing nodule from Bolivia were imaged. Details of the internal structure of the battery and a female mold of a trilobite were successfully imaged without breaking the fossils.
2016-10-03
Two tiny moons of Saturn, almost lost amid the planet's enormous rings, are seen orbiting in this image. Pan, visible within the Encke Gap near lower-right, is in the process of overtaking the slower Atlas, visible at upper-left. All orbiting bodies, large and small, follow the same basic rules. In this case, Pan (17 miles or 28 kilometers across) orbits closer to Saturn than Atlas (19 miles or 30 kilometers across). According to the rules of planetary motion deduced by Johannes Kepler over 400 years ago, Pan orbits the planet faster than Atlas does. This view looks toward the sunlit side of the rings from about 39 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 9, 2016. The view was acquired at a distance of approximately 3.4 million miles (5.5 million kilometers) from Atlas and at a Sun-Atlas-spacecraft, or phase, angle of 71 degrees. Image scale is 21 miles (33 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20501
Messy interviews: changing conditions for politicians' visibility on the web.
Kroon, Åsa; Eriksson, Göran
2016-10-01
This article provides an updated analysis relating to John B. Thompson's argument about political visibility and fragility. It does so in light of recent years' developments in communication technologies and the proliferation of non-broadcasting media organizations producing TV. Instances of a new mediated encounter for politicians are analyzed in detail - the live web interview - produced and streamed by two Swedish tabloids during the 2014 election campaign. It is argued that the live web interview is not yet a recognizable 'communicative activity type' with an obvious set of norms, rules, and routines. This makes politicians more intensely exposed to moments of mediated fragility which may be difficult to control. The most crucial condition changing how politicians are able to manage their visibility is the constantly rolling 'non-exclusive' live camera, which does not give the politician any room for error. The tabloids do not seem to mind 'things going a bit wrong' while airing; rather, interactional flaws are argued to be part and parcel of the overall web TV performance.
Effects of red light camera enforcement on fatal crashes in large U.S. cities.
Hu, Wen; McCartt, Anne T; Teoh, Eric R
2011-08-01
To estimate the effects of red light camera enforcement on per capita fatal crash rates at intersections with signal lights. From the 99 large U.S. cities with more than 200,000 residents in 2008, 14 cities were identified with red light camera enforcement programs for all of 2004-2008 but not at any time during 1992-1996, and 48 cities were identified without camera programs during either period. Analyses compared the citywide per capita rate of fatal red light running crashes and the citywide per capita rate of all fatal crashes at signalized intersections during the two study periods, and rate changes then were compared for cities with and without camera programs. Poisson regression was used to model crash rates as a function of red light camera enforcement, land area, and population density. The average annual rate of fatal red light running crashes declined for both study groups, but the decline was larger for cities with red light camera enforcement programs than for cities without camera programs (35% vs. 14%). The average annual rate of all fatal crashes at signalized intersections decreased by 14% for cities with camera programs and increased slightly (2%) for cities without cameras. After controlling for population density and land area, the rate of fatal red light running crashes during 2004-2008 for cities with camera programs was an estimated 24% lower than what would have been expected without cameras. The rate of all fatal crashes at signalized intersections during 2004-2008 for cities with camera programs was an estimated 17% lower than what would have been expected without cameras. Red light camera enforcement programs were associated with a statistically significant reduction in the citywide rate of fatal red light running crashes and a smaller but still significant reduction in the rate of all fatal crashes at signalized intersections. The study adds to the large body of evidence that red light camera enforcement can prevent the most serious crashes. Communities seeking to reduce crashes at intersections should consider this evidence.
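A minimal sketch of the study's modeling approach: fatal crash counts as a Poisson outcome, with camera enforcement, land area, and population density as covariates and population as the exposure offset. The dataframe and its column names below are hypothetical:

```python
# Hedged sketch of a per-capita Poisson crash-rate model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_crash_model(df: pd.DataFrame):
    """df columns (assumed): crashes, population, has_cameras,
    land_area, pop_density. One row per city-period."""
    model = smf.glm(
        "crashes ~ has_cameras + land_area + pop_density",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["population"]),  # models a per-capita rate
    )
    result = model.fit()
    # exp(coefficient) of has_cameras is the estimated rate ratio;
    # a value of ~0.76 would correspond to the reported 24% reduction.
    return result
```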
Kang, Han Gyu; Lee, Ho-Young; Kim, Kyeong Min; Song, Seong-Hyun; Hong, Gun Chul; Hong, Seong Jong
2017-01-01
The aim of this study is to integrate NIR, gamma, and visible imaging tools into a single endoscopic system to overcome the limitation of NIR using gamma imaging and to demonstrate the feasibility of endoscopic NIR/gamma/visible fusion imaging for sentinel lymph node (SLN) mapping with a small animal. The endoscopic NIR/gamma/visible imaging system consists of a tungsten pinhole collimator, a plastic focusing lens, a BGO crystal (11 × 11 × 2 mm³), a fiber-optic taper (front = 11 × 11 mm², end = 4 × 4 mm²), a 122-cm long endoscopic fiber bundle, an NIR emission filter, a relay lens, and a CCD camera. A custom-made Derenzo-like phantom filled with a mixture of 99mTc and indocyanine green (ICG) was used to assess the spatial resolution of the NIR and gamma images. The ICG fluorophore was excited using a light-emitting diode (LED) with an excitation filter (723-758 nm), and the emitted fluorescence photons were detected with an emission filter (780-820 nm) for a duration of 100 ms. Subsequently, the 99mTc distribution in the phantom was imaged for 3 min. The feasibility of in vivo SLN mapping with a mouse was investigated by injecting a mixture of 99mTc-antimony sulfur colloid (12 MBq) and ICG (0.1 mL) into the right paw of the mouse (C57/B6) subcutaneously. After one hour, NIR, gamma, and visible images were acquired sequentially. Subsequently, the dissected SLN was imaged in the same way as the in vivo SLN mapping. The NIR, gamma, and visible images of the Derenzo-like phantom can be obtained with the proposed endoscopic imaging system. The NIR/gamma/visible fusion image of the SLN showed a good correlation among the NIR, gamma, and visible images both for the in vivo and ex vivo imaging. We demonstrated the feasibility of the integrated NIR/gamma/visible imaging system using a single endoscopic fiber bundle. In future, we plan to investigate miniaturization of the endoscope head and simultaneous NIR/gamma/visible imaging with dichroic mirrors and three CCD cameras.
Solar corona/prominence seen through the White Light Coronograph
NASA Technical Reports Server (NTRS)
1974-01-01
The solar corona and a solar prominence as seen through the White Light Coronagraph, Skylab Experiment S052, on January 17, 1974. This view was reproduced from a television transmission made by a TV camera aboard the Skylab space station in Earth orbit. The bright spot is a burn in the vidicon. The solar corona is the halo around the Sun which is normally visible only at the time of solar eclipse by the Moon. The Skylab coronagraph uses an externally mounted disk system that occults the brilliant solar surface while allowing the fainter radiation of the corona to enter an annulus and be photographed. A mirror system allows either TV viewing of the corona or photographic recording of the image.
September 2006 Monthly Report- ITER Visible/IRTV Optical Design Scoping Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lasnier, C
LLNL received a request from the US ITER organization to perform a scoping study of optical design for visible/IR camera systems for the 6 upper ports of ITER. A contract was put in place and the LLNL account number was opened July 19, 2006. A kickoff meeting was held at LLNL July 26. The principal work under the contract is being performed by Lynn Seppala (optical designer), Kevin Morris (mechanical designer), Max Fenstermacher (visible cameras), Mathias Groth (assisting with visible cameras), and Charles Lasnier (IR cameras and Principal Investigator), all LLNL employees. Kevin Morris has imported ITER CAD files and developed a simplified 3D view of the ITER tokamak with upper ports, which he used to determine the optimum viewing angle from an upper port to see the outer target. He also determined the minimum angular field of view needed to see the largest possible coverage of the outer target. We examined the CEA-Cadarache report on their optical design for ITER visible/IRTV equatorial ports. We found that the resolution was diffraction-limited by the 5-mm aperture through the tile. Lynn Seppala developed a similar front-end design for an upper port but with a larger 6-inch-diameter beam. This allows the beam to pass through the port plug and port interspace without further focusing optics until outside the bioshield. This simplifies the design as well as eliminating a requirement for complex relay lenses in the port interspace. The focusing optics are all mirrors, which allows the system to handle light from 0.4 µm to 5 µm wavelength without chromatic aberration. The window material chosen is sapphire, as in the CEA design. Sapphire has good transmission in the desired wavelengths up to 4.8 µm, as well as good mechanical strength. We have verified that sapphire windows of the needed size are commercially available. The diffraction-limited resolution permitted by the 5 mm aperture falls short of the ITER specification value but is well matched to the resolution of current detectors. A large increase in resolution would require a similar increase in the linear pixel count on a detector. However, we cannot increase the aperture much without affecting the image quality. Lynn Seppala is writing a memo detailing the resolution trade-offs. Charles Lasnier is calculating the radiated power that will fall on the detector in order to estimate the signal-to-noise ratio and maximum frame rate. The signal will be reduced by the fact that the outer target plates are tungsten, which radiates less than carbon at the same temperature. The tungsten will also reflect radiation from the carbon tiles of the private flux dome, which will radiate efficiently although at a lower temperature than the target plates. The analysis will include estimates of these effects. Max Fenstermacher is investigating the intensity of line emission that will be emitted in the visible band, in order to predict the signal-to-noise ratio and maximum frame rate for the visible camera. Andre Kukushkin has modeling results that will give local emission of deuterium and carbon lines. Line integrals of the emission must be done to produce the emitted intensity. The model is not able to handle tungsten and beryllium, so we will only be able to estimate deuterium and carbon emission. Total costs as of September 30, 2006 are $87,834.43. Manpower was 0.58 FTEs in July, 1.48 in August, and 1.56 in September.
Perfect Lighting for Facial Photography in Aesthetic Surgery: Ring Light.
Dölen, Utku Can; Çınar, Selçuk
2016-04-01
Photography is indispensable for plastic surgery. On-camera flashes can result in bleached out detail and colour. This is why most plastic surgery clinics prefer studio lighting similar to professional photographers'. In this article, we want to share a simple alternative to studio lighting that does not need extra space: the ring light. We took five different photographs of the same person with five different camera and lighting settings: smartphone and ring light; point and shoot camera and on-camera flash; point and shoot camera and studio lighting; digital single-lens reflex (DSLR) camera and studio lighting; DSLR and ring light. Those photographs were then assessed objectively with an online survey of five questions answered by three distinct populations: plastic surgeons (n: 28), professional portrait photographers (n: 24) and patients (n: 22) who had facial aesthetic procedures. Compared to the on-camera flash, studio lighting better showed the wrinkles of the subject. The ring light facilitated the perception of the wrinkles by providing homogeneous soft light in a circular shape rather than bursting flashes. The combination of a DSLR camera and ring light gave the oldest looking subject according to 64% of responders. The DSLR camera and the studio lighting demonstrated the youngest looking subject according to 70% of the responders. The majority of the responders (78%) chose the combination of DSLR camera and ring light that exhibited the wrinkles the most. We suggest using a ring light to obtain well-lit photographs without loss of detail, with any type of camera. However, smartphones must be avoided if standard pictures are desired.
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
Hubble Tracks Clouds on Uranus
NASA Technical Reports Server (NTRS)
1997-01-01
Taking its first peek at Uranus, NASA Hubble Space Telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) has detected six distinct clouds in images taken July 28, 1997.
The image on the right, taken 90 minutes after the left-hand image, shows the planet's rotation. Each image is a composite of three near-infrared images. They are called false-color images because the human eye cannot detect infrared light. Therefore, colors corresponding to visible light were assigned to the images. (The wavelengths for the 'blue,' 'green,' and 'red' exposures are 1.1, 1.6, and 1.9 micrometers, respectively.)

At visible and near-infrared light, sunlight is reflected from hazes and clouds in the atmosphere of Uranus. However, at near-infrared light, absorption by gases in the Uranian atmosphere limits the view to different altitudes, causing intense contrasts and colors. In these images, the blue exposure probes the deepest atmospheric levels. A blue color indicates clear atmospheric conditions, prevalent at mid-latitudes near the center of the disk. The green exposure is sensitive to absorption by methane gas, indicating a clear atmosphere; but in hazy atmospheric regions, the green color is seen because sunlight is reflected back before it is absorbed. The green color around the south pole (marked by '+') shows a strong local haze. The red exposure reveals absorption by hydrogen, the most abundant gas in the atmosphere of Uranus. Most sunlight shows patches of haze high in the atmosphere. A red color near the limb (edge) of the disk indicates the presence of a high-altitude haze. The purple color to the right of the equator also suggests haze high in the atmosphere with a clear atmosphere below.

The five clouds visible near the right limb rotated counterclockwise during the time between both images. They reach high into the atmosphere, as indicated by their red color. Features of such high contrast have never been seen before on Uranus. The clouds are almost as large as continents on Earth, such as Europe. Another cloud (which barely can be seen) rotated along the path shown by the black arrow. It is located at lower altitudes, as indicated by its green color.

The rings of Uranus are extremely faint in visible light but quite prominent in the near infrared. The brightest ring, the epsilon ring, has a variable width around its circumference. Its widest and thus brightest part is at the top in this image. Two fainter, inner rings are visible next to the epsilon ring.

Eight of the 10 small Uranian satellites, discovered by Voyager 2, can be seen in both images. Their sizes range from about 25 miles (40 kilometers) for Bianca to 100 miles (150 kilometers) for Puck. The smallest of these satellites have not been detected since the departure of Voyager 2 from Uranus in 1986. These eight satellites revolve around Uranus in less than a day. The inner ones are faster than the outer ones. Their motion in the 90 minutes between both images is marked in the right panel. The area outside the rings was slightly enhanced in brightness to improve the visibility of these faint satellites.

The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
Imaging camera system of OYGBR-phosphor-based white LED lighting
NASA Astrophysics Data System (ADS)
Kobashi, Katsuya; Taguchi, Tsunemasa
2005-03-01
The near-ultraviolet (nUV) white LED approach is analogous to three-color fluorescent lamp technology, which is based on the conversion of nUV radiation to visible light via the photoluminescence process in phosphor materials. The nUV light is not included in the white light generated by nUV-based white LED devices. This technology can thus provide a higher quality of white light than the blue-plus-YAG method. A typical device demonstrates white luminescence with Tc = 3,700 K, Ra > 93, a luminous efficacy K > 40 lm/W, and chromaticity (x, y) = (0.39, 0.39). The orange, yellow, green and blue (OYGB) or orange, yellow, red, green and blue (OYRGB) device shows a luminescence spectrum broader than that of an RGB white LED and a better color rendering index. Such superior luminous characteristics could be useful for application in several kinds of endoscopes. We have obtained excellent pictures of the digestive organs in a dog's stomach owing to the strong green component and high Ra.
Large-aperture ground glass surface profile measurement using coherence scanning interferometry.
Bae, Eundeok; Kim, Yunseok; Park, Sanguk; Kim, Seung-Woo
2017-01-23
We present a coherence scanning interferometer configured to deal with rough glass surfaces exhibiting very low reflectance due to severe sub-surface light scattering. A compound light source is prepared by combining a superluminescent light-emitting diode with an ytterbium-doped fiber amplifier. The light source is attuned to offer a short temporal coherence length of 15 μm but with high spatial coherence to secure an adequate correlogram contrast by delivering strongly unbalanced optical power to the low reflectance target. In addition, the infrared spectral range of the light source is shifted close to the visible side at a 1,038 nm center wavelength, so a multi-megapixel digital camera of the kind available for industrial machine vision can be used to improve the correlogram contrast further with better lateral image resolution. Experimental results obtained from a ground Zerodur mirror of 200 mm aperture size and 0.9 μm rms roughness are discussed to validate the proposed interferometer system.
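In coherence scanning interferometry, each pixel records a correlogram as the scanner sweeps through focus, and the surface height is the scan position where the fringe envelope peaks. A minimal per-pixel sketch, using the Hilbert transform as one common envelope estimator (an assumption, not necessarily the authors' processing chain):

```python
# Hedged sketch of per-pixel correlogram envelope peak detection.
import numpy as np
from scipy.signal import hilbert

def surface_height(correlograms, scan_positions):
    """correlograms: (Z, H, W) intensity stack over scanner positions.
    Returns an (H, W) height map in the units of scan_positions."""
    ac = correlograms - correlograms.mean(axis=0)   # remove background
    envelope = np.abs(hilbert(ac, axis=0))          # fringe envelope
    idx = envelope.argmax(axis=0)                   # per-pixel peak index
    return scan_positions[idx]
```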
Enhanced optical discrimination system based on switchable retroreflective films
NASA Astrophysics Data System (ADS)
Schultz, Phillip; Heikenfeld, Jason
2016-04-01
Reported herein is the design, characterization, and demonstration of a laser interrogation and response optical discrimination system based on large-area corner-cube retroreflective films. The switchable retroreflective films use light-scattering liquid crystal to modulate retroreflected intensity. The system can operate with multiple wavelengths (visible to infrared) and includes variable divergence optics for irradiance adjustments and ease of system alignment. The electronic receiver and switchable retroreflector offer low-power operation (<4 mW standby) on coin cell batteries with rapid interrogation to retroreflected signal reception response times (<15 ms). The entire switchable retroreflector film is <1 mm thick and is flexible for optimal placement and increased angular response. The system was demonstrated in high ambient lighting conditions (daylight, 18k lux) with a visible 10-mW output 635-nm source out to a distance of 400 m (naked eye detection). Nighttime demonstrations were performed using a 1.5-mW, 850-nm infrared laser diode out to a distance of 400 m using a night vision camera. This system could have tagging and conspicuity applications in commercial or military settings.
NASA Technical Reports Server (NTRS)
2005-01-01
These views, taken two hours apart, demonstrate the dramatic variability in the structure of Saturn's intriguing F ring. In the image at the left, ringlets in the F ring and Encke Gap display distinctive kinks, and there is a bright patch of material on the F ring's inner edge. Saturn's moon Janus (181 kilometers, or 113 miles across) is shown here, partly illuminated by reflected light from the planet. At the right, Prometheus (102 kilometers, or 63 miles across) orbits ahead of the radial striations in the F ring, called 'drapes' by scientists. The drapes appear to be caused by successive passes of Prometheus as it reaches the greatest distance (apoapse) in its orbit of Saturn. Also in this image, the outermost ringlet visible in the Encke Gap displays distinctive bright patches. These views were obtained from about three degrees below the ring plane. The images were taken in visible light with the Cassini spacecraft narrow-angle camera on June 29, 2005, when Cassini was about 1.5 million kilometers (900,000 miles) from Saturn. The image scale is about 9 kilometers (6 miles) per pixel.
2017-12-08
Spiral galaxy NGC 3274 is a relatively faint galaxy located over 20 million light-years away in the constellation of Leo (The Lion). This NASA/ESA Hubble Space Telescope image comes courtesy of Hubble's Wide Field Camera 3 (WFC3), whose multi-color vision allows astronomers to study a wide range of targets, from nearby star formation to galaxies in the most remote regions of the cosmos. This image combines observations gathered in five different filters, bringing together ultraviolet, visible and infrared light to show off NGC 3274 in all its glory. NGC 3274 was discovered by Wilhelm Herschel in 1783. The galaxy PGC 213714 is also visible on the upper right of the frame, located much farther away from Earth. Image Credit: ESA/Hubble & NASA, D. Calzetti
Tan, Tai Ho; Williams, Arthur H.
1985-01-01
An optical fiber-coupled detector visible streak camera plasma diagnostic apparatus. Arrays of optical fiber-coupled detectors are placed on the film plane of several types of particle, x-ray and visible spectrometers or directly in the path of the emissions to be measured and the output is imaged by a visible streak camera. Time and spatial dependence of the emission from plasmas generated from a single pulse of electromagnetic radiation or from a single particle beam burst can be recorded.
2017-12-08
This picture, taken by Hubble's Advanced Camera for Surveys, shows NGC 4696, the largest galaxy in the Centaurus Cluster. (To see a video of NGC 4696 go here: www.flickr.com/photos/gsfc/4888176841/) The huge dust lane, around 30,000 light-years across, that sweeps across the face of the galaxy makes NGC 4696 look different from most other elliptical galaxies. Viewed at certain wavelengths, strange thin filaments of ionised hydrogen are visible within it. In this picture, these structures are visible as a subtle marbling effect across the galaxy's bright centre. Credit: ESA/Hubble and NASA
Low-cost thermo-electric infrared FPAs and their automotive applications
NASA Astrophysics Data System (ADS)
Hirota, Masaki; Ohta, Yoshimi; Fukuyama, Yasuhiro
2008-04-01
This paper describes three low-cost infrared focal plane arrays (FPAs) having 1,536, 2,304, and 10,800 elements, and experimental vehicle systems. They have low-cost potential because each element consists of p-n polysilicon thermocouples, which allows the use of low-cost ultra-fine microfabrication technology commonly employed in conventional semiconductor manufacturing processes. To increase the responsivity of the FPA, we have developed a precisely patterned Au-black absorber that has high infrared absorptivity of more than 90%. The FPA having 2,304 elements achieved a high responsivity of 4,300 V/W. In order to reduce package cost, we developed a vacuum-sealed package integrated with a molded ZnS lens. The camera, aimed at temperature measurement of a passenger cabin, is a compact and lightweight device that measures 45 x 45 x 30 mm and weighs 190 g. The camera achieves a noise equivalent temperature deviation (NETD) of less than 0.7°C from 0 to 40°C. In this paper, we also present several experimental systems that use infrared cameras. One experimental system is a blind spot pedestrian warning system that employs four infrared cameras. It can detect the infrared radiation emitted from a human body and alerts the driver when a pedestrian is in a blind spot. The system can also prevent the vehicle from moving in the direction of the pedestrian. Another system uses a visible-light camera and infrared sensors to detect the presence of a pedestrian in a rear blind spot and alerts the driver. The third system is a new type of human-machine interface system that enables the driver to control the car's audio system without letting go of the steering wheel. Uncooled infrared cameras are still costly, which limits their automotive use to high-end luxury cars at present. To promote widespread use of IR imaging sensors on vehicles, we need to reduce their cost further.
Advanced imaging research and development at DARPA
NASA Astrophysics Data System (ADS)
Dhar, Nibir K.; Dat, Ravi
2012-06-01
Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology. CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slow due to market volume and many technological barriers in detector materials, optics and fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology in both the visible and the infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, and broadband and multiband detectors and focal plane arrays.
NASA Astrophysics Data System (ADS)
Stasicki, Bolesław; Schröder, Andreas; Boden, Fritz; Ludwikowski, Krzysztof
2017-06-01
The rapid progress of light emitting diode (LED) technology has recently resulted in the availability of high power devices with unprecedented light emission intensities comparable to those of visible laser light sources. On this basis two versatile devices have been developed, constructed and tested. The first one is a high-power, single-LED illuminator equipped with exchangeable projection lenses providing a homogenous light spot of defined diameter. The second device is a multi-LED illuminator array consisting of a number of high-power LEDs, each integrated with a separate collimating lens. These devices can emit R, G, CG, B, UV or white light and can be operated in pulsed or continuous wave (CW) mode. Using an external trigger signal they can be easily synchronized with cameras or other devices. The mode of operation and all parameters can be controlled by software. Various experiments have shown that these devices have become a versatile and competitive alternative to laser and xenon lamp based light sources. The principle, design, achieved performances and application examples are given in this paper.
Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery
NASA Astrophysics Data System (ADS)
Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.
2017-12-01
Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetration. Bottom-up data for cloud-advection-based solar insolation forecasting, with the spatial resolution and latency needed to predict high-ramp-rate events, are strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost upward-looking visible light CCD sky cameras positioned at 2 km spacing over an area 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of $200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being organized to collect distributed cloud data sets suitable for MODIS-CERES cloud radiation science and solar forecasting algorithm development. A low-cost and robust sensor design suitable for large-scale fabrication and long-term deployment has been developed during the project prototyping phase.
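A minimal sketch of the cloud-advection idea behind such short-term forecasts: estimate the displacement between two successive sky images by phase correlation, then extrapolate the latest image forward. This is a toy illustration, not the project's algorithm:

```python
# Hedged sketch of phase-correlation cloud motion and advection forecast.
import numpy as np
from scipy.ndimage import shift as nd_shift

def phase_correlation(a, b):
    """Return (dy, dx) displacement of image b relative to image a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # wrap peak coordinates into a signed displacement range
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

def forecast_image(img_prev, img_now, lead_steps=1):
    """Advect the latest sky image forward by the estimated cloud motion."""
    dy, dx = phase_correlation(img_prev, img_now)
    return nd_shift(img_now, (dy * lead_steps, dx * lead_steps), order=0)
```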
Design of an ROV-based lidar for seafloor monitoring
NASA Astrophysics Data System (ADS)
Harsdorf, Stefan; Janssen, Manfred; Reuter, Rainer; Wachowicz, Bernhard
1997-05-01
In recent years, accidents of ships with chemical cargo have led to strong impacts on the marine ecosystem, and to risks for pollution control and clean-up teams. In order to enable a fast, safe, and efficient reaction, a new optical instrument has been designed for the inspection of objects on the seafloor by range-gated scattered-light images, as well as for the detection of substances by measuring the laser-induced emission on the seafloor and within the water column. This new lidar is operated as a payload of a remotely operated vehicle (ROV). A Nd:YAG laser is employed as the light source of the lidar. In the video mode, the submarine lidar system uses the 2nd harmonic laser pulse to illuminate the seafloor. Elastically scattered and reflected light is collected with a gateable intensified CCD camera. The beam divergence of the laser is the same as the camera field of view. Synchronization of the laser emission and the camera gate time makes it possible to suppress backscattered light from the water column and to record only the light backscattered by the object. This results in a contrast-enhanced video image, which increases the visibility range in turbid water by up to four times. Substances seeping out of a container are often invisible in video images because of their low contrast. Therefore, a fluorescence lidar mode is integrated into the submarine lidar. The 3rd harmonic Nd:YAG laser pulse is applied, and the emission response of the water body between the ROV and the seafloor, and of the seafloor itself, is recorded at variable wavelengths with maximum depth resolution. Target selection is realized by a 2D scanner, which allows targets within the range-gated image to be selected for a measurement of fluorescence. The analysis of the time- and spectrally resolved signals permits the detection, exact location, and classification of fluorescent and/or absorbing substances.
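The gating described above reduces to a timing calculation: the camera gate must open at the round-trip travel time of the pulse through water, so that water-column backscatter arriving earlier is rejected. A minimal sketch, assuming a refractive index of about 1.34 for seawater:

```python
# Round-trip gate delay for a range-gated underwater camera.
C = 299_792_458.0   # speed of light in vacuum, m/s

def gate_delay_ns(distance_m, n_water=1.34):
    """Delay from laser pulse emission to arrival of photons
    returning from a target at distance_m, in nanoseconds."""
    return 2.0 * distance_m * n_water / C * 1e9

# e.g., a target 5 m away: the gate opens ~44.7 ns after the pulse
print(round(gate_delay_ns(5.0), 1))
```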
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice instead of normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
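As one way to make the first (whole-image) step concrete, the sketch below learns a CCA mapping between flattened thermal and visible training images and uses it to predict the visible-spectrum image for a new thermal input. scikit-learn's CCA is used here as a stand-in for the paper's formulation, and all names are illustrative:

```python
# Hedged sketch of a CCA-based thermal-to-visible mapping.
import numpy as np
from sklearn.cross_decomposition import CCA

def train_cca(thermal_vecs, visible_vecs, n_comp=32):
    """Rows are flattened, mean-centered training image pairs."""
    cca = CCA(n_components=n_comp)
    cca.fit(thermal_vecs, visible_vecs)
    return cca

def reconstruct_visible(cca, thermal_vec):
    # CCA.predict maps an X-space (thermal) sample into the
    # Y (visible) space through the learned correlated subspace
    return cca.predict(thermal_vec.reshape(1, -1)).ravel()
```

The paper's second step refines this whole-image estimate patch by patch; the same fit/predict pattern applies per patch.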
Calibration Target for Curiosity Arm Camera
2012-09-10
This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.
Is "good enough" good enough for portable visible and near-visible spectrometry?
NASA Astrophysics Data System (ADS)
Scheeline, Alexander
2015-06-01
Some uses of portable spectrometers require the same quality as laboratory instruments. Such quality is challenging because of temperature and humidity variation, dust, and vibration. Typically, one chooses materials and mechanical layout to minimize the influence of these noise and background sources. Mechanical stability is constrained by limits on instrument mass and ergonomics. An alternative approach is to make minimally adequate hardware, compensating for variability in software. We describe an instrument developed specifically to use software to compensate for marginal hardware. An initial instantiation of the instrument is limited to 430-700 nm. Simple changes will allow expansion to cover 315-1000 nm. Outside this range, costs are likely to increase significantly. Inherent wavelength calibration comes from knowing the peak emission wavelength of an LED light source and fitting the instrument dispersion to a model of order placement with each measurement. Dynamic range is determined by the product of camera response and intentionally wide throughput variation among hundreds of diffraction orders. Resolution degrades gracefully at low light levels, but is limited to ~2 nm at high light levels as initially fabricated and ~1 nm in principle. Stray light may be measured in real time. Diffuse stray light can be employed for turbidimetry and fluorimetry, and to aid compensation of working-curve nonlinearity. While unsuitable for Raman spectroscopy, the instrument shows promise for absorption, fluorescence, reflectance, and surface plasmon resonance spectrometries. To aid non-expert users, real-time training, measurement sequencing, and outcome interpretation are programmed with QR codes or web-linked instructions.
High resolution Cerenkov light imaging of induced positron distribution in proton therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Fujii, Kento; Morishita, Yuki
2014-11-01
Purpose: In proton therapy, imaging of the positron distribution produced by fragmentation during or soon after proton irradiation is a useful method to monitor the proton range. Although positron emission tomography (PET) is typically used for this imaging, its spatial resolution is limited. Cerenkov light imaging is a new molecular imaging technology that detects the visible photons produced by high-speed electrons using a high-sensitivity optical camera. Because its inherent spatial resolution is much higher than PET's, more precise information on the proton-induced positron distribution can be measured with Cerenkov light imaging technology. For this purpose, the authors conducted Cerenkov light imaging of the induced positron distribution in proton therapy. Methods: First, the authors evaluated the spatial resolution of their Cerenkov light imaging system with a 22Na point source for the actual imaging setup. Then transparent acrylic phantoms (100 × 100 × 100 mm³) were irradiated with two different proton energies using a spot scanning proton therapy system. Cerenkov light imaging of each phantom was conducted using a high-sensitivity electron-multiplied charge-coupled device (EM-CCD) camera. Results: The Cerenkov light spatial resolution for the setup was 0.76 ± 0.6 mm FWHM. The authors obtained high resolution Cerenkov light images of the positron distributions in the phantoms for two different proton energies and made fused images of the reference images and the Cerenkov light images. The depths of the positron distribution in the phantoms from the Cerenkov light images were almost identical to the simulation results. The decay curves derived from the regions of interest (ROIs) set on the Cerenkov light images revealed that Cerenkov light images can be used for estimating the half-life of the radionuclide components of positrons. Conclusions: High resolution Cerenkov light imaging of proton-induced positron distribution was possible. The authors conclude that Cerenkov light imaging of proton-induced positrons is promising for proton therapy.
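A resolution figure such as 0.76 mm FWHM is typically obtained by fitting a Gaussian to the point source's measured line profile and converting sigma to FWHM. A minimal sketch of that fit (illustrative, not the authors' analysis code):

```python
# Hedged sketch of FWHM extraction from a point-source line profile.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, b):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + b

def fwhm_mm(profile, pixel_mm):
    """profile: 1D intensity profile through the source;
    pixel_mm: physical size of one pixel in mm."""
    x = np.arange(len(profile)) * pixel_mm
    p0 = [profile.max() - profile.min(), x[profile.argmax()],
          pixel_mm, profile.min()]                 # rough initial guess
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.354820045 * abs(popt[2])   # FWHM = 2*sqrt(2 ln 2) * sigma
```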
Overview of diagnostic implementation on Proto-MPEX at ORNL
NASA Astrophysics Data System (ADS)
Biewer, T. M.; Bigelow, T.; Caughman, J. B. O.; Fehling, D.; Goulding, R. H.; Gray, T. K.; Isler, R. C.; Martin, E. H.; Meitner, S.; Rapp, J.; Unterberg, E. A.; Dhaliwal, R. S.; Donovan, D.; Kafle, N.; Ray, H.; Shaw, G. C.; Showers, M.; Mosby, R.; Skeen, C.
2015-11-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) recently began operating with an expanded diagnostic set. Approximately 100 sightlines have been established, delivering the plasma light emission to a "patch panel" in the diagnostic room for distribution to a variety of instruments: narrow-band filter spectroscopy, Doppler spectroscopy, laser induced breakdown spectroscopy, optical emission spectroscopy, and Thomson scattering. Additional diagnostic systems include: IR camera imaging, in-vessel thermocouples, ex-vessel fluoroptic probes, fast pressure gauges, visible camera imaging, microwave interferometry, a retarding-field energy analyzer, rf-compensated and "double" Langmuir probes, and B-dot probes. A data collection and archival system has been initiated using the MDSplus format. This effort capitalizes on a combination of new and legacy diagnostic hardware at ORNL and was accomplished largely through student labor. This work was supported by U.S. D.O.E. contract DE-AC05-00OR22725.
Review of oil spill remote sensing.
Fingas, Merv; Brown, Carl
2014-06-15
Remote sensing for oil spills is reviewed. The use of visible techniques is ubiquitous; however, it gives only the same results as visual monitoring. Oil has no particular spectral features that would allow for identification among the many possible background interferences. Cameras are only useful to provide documentation. In daytime, oil absorbs light and re-emits it as thermal energy at temperatures 3-8 K above ambient, which is detectable by infrared (IR) cameras. Laser fluorosensors are useful instruments because of their unique capability to identify oil on backgrounds that include water, soil, weeds, ice and snow. They are the only sensor that can positively discriminate oil on most backgrounds. Radar detects oil on water by the fact that oil dampens water-surface capillary waves under low to moderate wave/wind conditions. Radar offers the only potential for large-area searches, day/night and foul-weather remote sensing.
Galaxies Gather at Great Distances
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures removed: Distant Galaxy Cluster Infrared Survey poster; bird's eye view mosaics, with and without cluster markings; cluster close-ups at 9.1, 8.7, and 8.6 billion light-years.]
Astronomers have discovered nearly 300 galaxy clusters and groups, including almost 100 located 8 to 10 billion light-years away, using the space-based Spitzer Space Telescope and the ground-based Mayall 4-meter telescope at Kitt Peak National Observatory in Tucson, Ariz. The new sample represents a six-fold increase in the number of known galaxy clusters and groups at such extreme distances, and will allow astronomers to systematically study massive galaxies two-thirds of the way back to the Big Bang. A mosaic portraying a bird's eye view of the field in which the distant clusters were found is shown at upper left. It spans a region of sky 40 times larger than that covered by the full moon as seen from Earth. Thousands of individual images from Spitzer's infrared array camera instrument were stitched together to create this mosaic. The distant clusters are marked with orange dots. Close-up images of three of the distant galaxy clusters are shown in the adjoining panels. The clusters appear as a concentration of red dots near the center of each image. These images reveal the galaxies as they were over 8 billion years ago, since that's how long their light took to reach Earth and Spitzer's infrared eyes. These pictures are false-color composites, combining ground-based optical images captured by the Mosaic-I camera on the Mayall 4-meter telescope at Kitt Peak, with infrared pictures taken by Spitzer's infrared array camera. Blue and green represent visible light at wavelengths of 0.4 microns and 0.8 microns, respectively, while red indicates infrared light at 4.5 microns. Kitt Peak National Observatory is part of the National Optical Astronomy Observatory in Tucson, Ariz.
Enhanced Early View of Ceres from Dawn
2014-12-05
As the Dawn spacecraft flies through space toward the dwarf planet Ceres, the unexplored world appears to its camera as a bright light in the distance, full of possibility for scientific discovery. This view was acquired as part of a final calibration of the science camera before Dawn's arrival at Ceres. To accomplish this, the camera needed to take pictures of a target that appears just a few pixels across. On Dec. 1, 2014, Ceres was about nine pixels in diameter, nearly perfect for this calibration. The images provide data on very subtle optical properties of the camera that scientists will use when they analyze and interpret the details of some of the pictures returned from orbit. Ceres is the bright spot in the center of the image. Because the dwarf planet is much brighter than the stars in the background, the camera team selected a long exposure time to make the stars visible. The long exposure made Ceres appear overexposed, and exaggerated its size; this was corrected by superimposing a shorter exposure of the dwarf planet in the center of the image. A cropped, magnified view of Ceres appears in the inset image at lower left. The image was taken on Dec. 1, 2014 with the Dawn spacecraft's framing camera, using a clear spectral filter. Dawn was about 740,000 miles (1.2 million kilometers) from Ceres at the time. Ceres is 590 miles (950 kilometers) across and was discovered in 1801. http://photojournal.jpl.nasa.gov/catalog/PIA19050
NASA Astrophysics Data System (ADS)
Lee, Kyuhang; Ko, Jinseok; Wi, Hanmin; Chung, Jinil; Seo, Hyeonjin; Jo, Jae Heung
2018-06-01
The visible TV system used in the Korea Superconducting Tokamak Advanced Research device has been equipped with a periscope to minimize the damage to its CCD pixels from neutron radiation. The periscope, more than 2.3 m in overall length, was designed for the visible camera system, with a semi-diagonal field of view as wide as 30° and an effective focal length as short as 5.57 mm. The design performance of the periscope includes a modulation transfer function greater than 0.25 at 68 cycles/mm with low distortion. The installed periscope system has delivered image quality as designed, comparable to that of its predecessor, but with a far lower probability of neutron damage to the camera.
Baker, Stokes S.; Vidican, Cleo B.; Cameron, David S.; Greib, Haittam G.; Jarocki, Christine C.; Setaputri, Andres W.; Spicuzza, Christopher H.; Burr, Aaron A.; Waqas, Meriam A.; Tolbert, Danzell A.
2012-01-01
Background and aims Studies have shown that levels of green fluorescent protein (GFP) leaf surface fluorescence are directly proportional to GFP soluble protein concentration in transgenic plants. However, instruments that measure GFP surface fluorescence are expensive. The goal of this investigation was to develop techniques with consumer digital cameras to analyse GFP surface fluorescence in transgenic plants. Methodology Inexpensive filter cubes containing machine vision dichroic filters and illuminated with blue light-emitting diodes (LED) were designed to attach to digital single-lens reflex (SLR) camera macro lenses. The apparatus was tested on purified enhanced GFP, and on wild-type and GFP-expressing arabidopsis grown autotrophically and heterotrophically. Principal findings Spectrum analysis showed that the apparatus illuminates specimens with wavelengths between ∼450 and ∼500 nm, and detects fluorescence between ∼510 and ∼595 nm. Epifluorescent photographs taken with SLR digital cameras were able to detect red-shifted GFP fluorescence in Arabidopsis thaliana leaves and cotyledons of pot-grown plants, as well as roots, hypocotyls and cotyledons of etiolated and light-grown plants grown heterotrophically. Green fluorescent protein fluorescence was detected primarily in the green channel of the raw image files. Studies with purified GFP produced linear responses to both protein surface density and exposure time (H₀: β (slope) = 0 counts per pixel (ng s mm⁻²)⁻¹; r² > 0.994, n = 31, P < 1.75 × 10⁻²⁹). Conclusions Epifluorescent digital photographs taken with complementary metal-oxide-semiconductor and charge-coupled device SLR cameras can be used to analyse red-shifted GFP surface fluorescence using visible blue light. This detection device can be constructed with inexpensive commercially available materials, thus increasing the accessibility of whole-organism GFP expression analysis to research laboratories and teaching institutions with small budgets. PMID:22479674
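To make the quantitative part of this workflow concrete, here is a minimal sketch (not the authors' code) of the green-channel extraction and linear-response check described above; the numeric arrays are illustrative placeholders, not data from the study.

```python
import numpy as np

def green_channel_counts(rgb_image, roi):
    """Mean counts per pixel in the green channel over a region of interest.

    rgb_image: (H, W, 3) array of sensor counts; roi: (row0, row1, col0, col1).
    GFP fluorescence is reported to appear primarily in the green channel.
    """
    r0, r1, c0, c1 = roi
    return rgb_image[r0:r1, c0:c1, 1].mean()

# Linear-response check: mean counts vs. (surface density x exposure time),
# mirroring the reported slope test (H0: beta = 0). Values are illustrative.
density_exposure = np.array([10.0, 20.0, 40.0, 80.0])  # ng s mm^-2
counts = np.array([52.0, 103.0, 208.0, 410.0])         # mean counts per pixel
slope, intercept = np.polyfit(density_exposure, counts, 1)
r_squared = np.corrcoef(density_exposure, counts)[0, 1] ** 2
print(f"slope = {slope:.2f} counts per (ng s mm^-2), r^2 = {r_squared:.4f}")
```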
NASA Astrophysics Data System (ADS)
Viegas, Jaime; Mayeh, Mona; Srinivasan, Pradeep; Johnson, Eric G.; Marques, Paulo V. S.; Farahi, Faramarz
2017-02-01
In this work, a silicon oxynitride-on-silica refractometer is presented, based on sub-wavelength coupled arrayed waveguide interference, and capable of low-cost, high-resolution, large-scale deployment. The sensor has an experimental spectral sensitivity as high as 3200 nm/RIU, covering refractive indices ranging from 1 (air) up to 1.43 (oils). The sensor readout can be performed by standard spectrometer techniques or by pattern projection onto a camera, followed by optical pattern recognition. Positive identification of the refractive index of an unknown species is obtained by cross-correlating the pattern against a look-up calibration table. Given the lower contrast between core and cladding in such devices, higher mode overlap with single mode fiber is achieved, leading to a larger coupling efficiency and more relaxed alignment requirements compared to the silicon photonics platform. Also, the optical transparency of the sensor in the visible range allows operation with light sources and camera detectors in the visible range, at much lower capital cost for a complete sensor system. Furthermore, the choice of refractive indices of core and cladding in the sensor head with integrated readout allows the fabrication of the same device in polymers, for mass-production replication of disposable sensors.
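The pattern-matching step lends itself to a compact illustration. The following sketch assumes the calibration table maps known refractive indices to reference camera patterns; it is an interpretation of the cross-correlation scheme, not the authors' implementation.

```python
import numpy as np

def identify_refractive_index(measured, calibration_table):
    """Match a projected interference pattern against a calibration look-up table.

    measured: 1D intensity pattern recorded by the camera.
    calibration_table: dict mapping known refractive index -> reference pattern
    (same length as `measured`). Returns the index whose reference pattern
    maximizes the normalized cross-correlation.
    """
    def ncc(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.dot(a, b) / a.size)

    return max(calibration_table, key=lambda n: ncc(measured, calibration_table[n]))
```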
Near-infrared face recognition utilizing OpenCV software
NASA Astrophysics Data System (ADS)
Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.
2014-06-01
Commercially available hardware, freely available algorithms, and software developed by the authors are synergized successfully to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection and recognition. Focusing our efforts on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed which presents potential benefits in future civilian and military security and surveillance operations.
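The abstract does not name the specific OpenCV pipeline; a common pairing that fits its description is Haar-cascade detection followed by LBPH recognition, both of which operate on grayscale and therefore accept NIR imagery directly. A minimal sketch (file path and parameters illustrative):

```python
import cv2

# Load a NIR frame as single-channel; Haar cascades operate on grayscale,
# so NIR imagery drops in without modification.
frame = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)  # path illustrative
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)

# Recognition stage: LBPH is a common OpenCV choice
# (requires the opencv-contrib-python package).
recognizer = cv2.face.LBPHFaceRecognizer_create()
# recognizer.train(enrolled_face_crops, labels)  # offline enrollment step
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
```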
Into the blue: AO science with MagAO in the visible
NASA Astrophysics Data System (ADS)
Close, Laird M.; Males, Jared R.; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Wu, Ya-Lin; Kopon, Derek; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando
2014-08-01
We review astronomical results in the visible (λ<1μm) with adaptive optics. Other than a brief period in the early 1990s, there has been little astronomical science done in the visible with AO until recently. The most productive visible AO system to date is our 6.5m Magellan telescope AO system (MagAO). MagAO is an advanced Adaptive Secondary system at the Magellan 6.5m in Chile. This secondary has 585 actuators with < 1 msec response times (0.7 ms typically). We use a pyramid wavefront sensor. The relatively small actuator pitch (~23 cm/subap) allows moderate Strehls to be obtained in the visible (0.63-1.05 microns). We use a CCD AO science camera called "VisAO". On-sky long exposures (60s) achieve <30mas resolutions, 30% Strehls at 0.62 microns (r') with the VisAO camera in 0.5" seeing with bright R < 8 mag stars. These relatively high visible wavelength Strehls are made possible by our powerful combination of a next generation ASM and a Pyramid WFS with 378 controlled modes and 1000 Hz loop frequency. We'll review the key steps to having good performance in the visible and review the exciting new AO visible science opportunities and refereed publications in both broad-band (r,i,z,Y) and at Halpha for exoplanets, protoplanetary disks, young stars, and emission line jets. These examples highlight the power of visible AO to probe circumstellar regions/spatial resolutions that would otherwise require much larger diameter telescopes with classical infrared AO cameras.
An infra-red imaging system for the analysis of tropisms in Arabidopsis thaliana seedlings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orbovic, V.; Poff, K.L.
1990-05-01
Since blue and green light will induce phototropism and red light is absorbed by phytochrome, no wavelength of visible radiation should be considered safe for any study of tropisms in etiolated seedlings. For this reason, we have developed an infra-red imaging system with a video camera with which we can monitor seedlings using radiation at wavelengths longer than 800 nm. The image of the seedlings can be observed in real time, recorded on a VCR and subsequently analyzed using the Java image analysis system. The time courses for curvature of seedlings differ in shape, amplitude, and lag time. This variability accounts for much of the noise in the measurement of curvature for a population of seedlings.
1989-08-21
This picture of Neptune was produced from images taken through the ultraviolet, violet and green filters of the Voyager 2 wide-angle camera. This false-color image has been made to show clearly details of the cloud structure and to paint clouds located at different altitudes with different colors. Dark, deep-lying clouds tend to be masked in the ultraviolet wavelengths since overlying air molecules are particularly effective at scattering sunlight there, which brightens the sky above them. Such areas appear dark blue in this photo. The Great Dark Spot (GDS) and the high southern latitudes have a deep bluish cast in this image, indicating they are regions where visible light (but not ultraviolet light) may penetrate to a deeper layer of dark cloud or haze in Neptune's atmosphere. Conversely, the pinkish clouds may be positioned at high altitudes.
Solar corona/prominence seen through the White Light Coronagraph
1974-01-17
S74-15697 (17 Jan. 1974) --- The solar corona and a solar prominence as seen through the White Light Coronagraph, Skylab Experiment S052, on Jan. 17, 1974. This view was reproduced from a television transmission made by a TV camera aboard the Skylab space station in Earth orbit. The bright spot is a burn in the vidicon. The solar corona is the halo around the sun which is normally visible only at the time of a solar eclipse by the moon. The Skylab coronagraph uses an externally mounted disk system which occults the brilliant solar surface while allowing the fainter radiation of the corona to enter an annulus and be photographed. A mirror system allows either TV viewing of the corona or photographic recording of the image. Photo credit: NASA
Design of a Remote Infrared Images and Other Data Acquisition Station for outdoor applications
NASA Astrophysics Data System (ADS)
Béland, M.-A.; Djupkep, F. B. D.; Bendada, A.; Maldague, X.; Ferrarini, G.; Bison, P.; Grinzato, E.
2013-05-01
The Infrared Images and Other Data Acquisition Station enables a user located inside a laboratory to acquire visible and infrared images, as well as distance measurements, in an outdoor environment over an Internet connection. The station acquires data using an infrared camera, a visible camera, and a rangefinder. The system can be operated through a web page or through Python functions.
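The station's Python interface is not documented in the abstract; purely as an illustration of how such remote acquisition functions might be wrapped, here is a hypothetical HTTP client (the station URL and all endpoint names are invented):

```python
import requests

STATION_URL = "http://station.example.org"  # hypothetical address

def acquire_infrared_image(exposure_ms=10):
    """Request one IR frame from the station over HTTP (endpoints hypothetical)."""
    resp = requests.get(f"{STATION_URL}/ir/acquire",
                        params={"exposure_ms": exposure_ms}, timeout=30)
    resp.raise_for_status()
    return resp.content  # raw image bytes

def read_rangefinder():
    """Read one distance measurement in metres (endpoint hypothetical)."""
    resp = requests.get(f"{STATION_URL}/rangefinder/distance", timeout=10)
    resp.raise_for_status()
    return float(resp.text)
```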
Simultaneous Spectral Temporal Adaptive Raman Spectrometer - SSTARS
NASA Technical Reports Server (NTRS)
Blacksberg, Jordana
2010-01-01
Raman spectroscopy is a prime candidate for the next generation of planetary instruments, as it addresses the primary goal of mineralogical analysis, which is structure and composition. However, the large fluorescence return from many mineral samples under visible light excitation can render Raman spectra unattainable. Using the described approach, Raman and fluorescence signals, which occur on different time scales, can be obtained simultaneously from mineral samples using a compact instrument in a planetary environment. The approach builds on the laboratory use of time-resolved spectroscopy to remove the fluorescence background from Raman spectra. In the SSTARS instrument, a visible excitation source (a green, pulsed laser) is used to generate Raman and fluorescence signals in a mineral sample. A spectral notch filter eliminates the directly reflected beam. A grating then disperses the signal spectrally, and a streak camera provides temporal resolution. The output of the streak camera is imaged on a CCD (charge-coupled device), and the data are read out electronically. By adjusting the sweep speed of the streak camera, anywhere from picoseconds to milliseconds, it is possible to resolve Raman spectra from numerous fluorescence spectra in the same sample. The key features of SSTARS include a compact streak tube capable of picosecond time resolution for collection of simultaneous spectral and temporal information; adaptive streak tube electronics that can rapidly change from one sweep rate to another over ranges of picoseconds to milliseconds, enabling collection of both Raman and fluorescence signatures versus time and wavelength; and Synchroscan integration that allows for a compact, low-power laser without compromising ultimate sensitivity.
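Because Raman scattering is prompt while fluorescence persists, the streak-camera record can be split by a simple time gate. A minimal sketch of that separation, assuming a 2D record indexed by time and wavelength (the gate width is illustrative):

```python
import numpy as np

def separate_raman(streak_image, times_ps, gate_ps=50.0):
    """Split a streak-camera record into prompt and delayed spectra.

    streak_image: (n_times, n_wavelengths) array from the CCD readout.
    times_ps: (n_times,) array of time bins in picoseconds.
    The prompt window is dominated by Raman scattering, the delayed
    window by fluorescence; gate_ps is an illustrative gate width.
    """
    prompt = times_ps <= gate_ps
    raman = streak_image[prompt].sum(axis=0)
    fluorescence = streak_image[~prompt].sum(axis=0)
    return raman, fluorescence
```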
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and high noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. A low-resolution watermark is therefore required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images are presented showing watermarks of very low visibility which can be easily read by a typical cell phone camera.
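As a rough illustration of the chrominance-embedding idea (not Digimarc's actual algorithm), a low-amplitude pattern can be added to a blue-difference channel after an RGB-to-YCbCr conversion:

```python
import numpy as np
from PIL import Image

def embed_chrominance_watermark(rgb, pattern, strength=3.0):
    """Add a low-amplitude pattern to the blue-difference (Cb) channel.

    The human visual system is far less sensitive to chrominance than to
    luminance changes, so the mark stays imperceptible at amplitudes a cell
    phone camera can still resolve. rgb: (H, W, 3) uint8 image; pattern:
    (H, W) array of +/-1 values, already upsampled from the low watermark
    resolution to the image size.
    """
    ycbcr = np.asarray(Image.fromarray(rgb).convert("YCbCr"), dtype=np.float32)
    ycbcr[..., 1] = np.clip(ycbcr[..., 1] + strength * pattern, 0, 255)
    marked = Image.fromarray(ycbcr.astype(np.uint8), mode="YCbCr")
    return np.asarray(marked.convert("RGB"))
```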
NASA Astrophysics Data System (ADS)
Wierzbicki, Damian; Fryskowska, Anna; Kedzierski, Michal; Wojtkowska, Michalina; Delis, Paulina
2018-01-01
Unmanned aerial vehicles are suited to various photogrammetry and remote sensing missions. Such platforms are equipped with various optoelectronic sensors imaging in the visible and infrared spectral ranges and also thermal sensors. Nowadays, near-infrared (NIR) images acquired from low altitudes are often used for producing orthophoto maps for precision agriculture among other things. One major problem results from the application of low-cost custom and compact NIR cameras with wide-angle lenses introducing vignetting. In numerous cases, such cameras acquire low radiometric quality images depending on the lighting conditions. The paper presents a method of radiometric quality assessment of low-altitude NIR imagery data from a custom sensor. The method utilizes statistical analysis of NIR images. The data used for the analyses were acquired from various altitudes in various weather and lighting conditions. An objective NIR imagery quality index was determined as a result of the research. The results obtained using this index enabled the classification of images into three categories: good, medium, and low radiometric quality. The classification makes it possible to determine the a priori error of the acquired images and assess whether a rerun of the photogrammetric flight is necessary.
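The abstract does not define the index itself; as a loose, hypothetical stand-in, a quality classifier built from simple image statistics might look like this (the thresholds and the noise proxy are invented):

```python
import numpy as np

def radiometric_quality(nir, snr_good=40.0, snr_medium=20.0):
    """Hypothetical stand-in for the paper's index: classify a NIR frame
    by a global signal-to-noise estimate. The authors' actual statistic
    is not given in the abstract; thresholds here are invented.
    """
    signal = nir.mean()
    noise = np.median(np.abs(np.diff(nir, axis=1)))  # crude sensor-noise proxy
    snr = signal / max(noise, 1e-9)
    if snr >= snr_good:
        return "good"
    if snr >= snr_medium:
        return "medium"
    return "low"
```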
Improving NIR snow pit stratigraphy observations by introducing a controlled NIR light source
NASA Astrophysics Data System (ADS)
Dean, J.; Marshall, H.; Rutter, N.; Karlson, A.
2013-12-01
Near-infrared (NIR) photography in a prepared snow pit measures mm-/grain-scale variations in snow structure, as reflectivity is strongly dependent on microstructure and grain size at the NIR wavelengths. We explore using a controlled NIR light source to maximize the signal-to-noise ratio and provide uniform, diffuse incident light on the snow pit wall. NIR light fired from the flash is diffused across and reflected by an umbrella onto the snow pit; the lens filter transmits NIR light onto the spectrum-modified sensor of the DSLR camera. Lenses are designed to refract visible light properly, not NIR light, so a correction must be applied for the resulting NIR bright spot. To avoid interpolation and debayering algorithms automatically performed by programs like Adobe's Photoshop on the images, the raw data are analyzed directly in MATLAB. NIR image data show a doubling of the amount of light collected in the same time for flash over ambient lighting. Transitions across layer boundaries in the flash-lit image are detailed by higher camera intensity values than ambient-lit images. Curves plotted using median intensity at each depth, normalized to the average profile intensity, show a separation between flash- and ambient-lit images in the upper 10-15 cm; the ambient-lit image curve asymptotically approaches the level of the flash-lit image curve below 15 cm. We hypothesize that the difference is caused by additional ambient light penetrating the upper 10-15 cm of the snowpack from above and transmitting through the wall of the snow pit. This indicates that combining NIR ambient and flash photography could be a powerful technique for studying penetration depth of radiation as a function of microstructure and grain size. The NIR flash images do not increase the relative contrast at layer boundaries; however, the flash more than doubles the amount of recorded light and controls layer noise as well as layer boundary transition noise.
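A minimal sketch of the profile comparison described above, assuming the raw NIR frames have already been read into 2D arrays without debayering:

```python
import numpy as np

def depth_profile(nir_image):
    """Median intensity at each depth (image row), normalized to the
    average intensity of the whole profile, as used to compare flash-lit
    and ambient-lit pit-wall images."""
    profile = np.median(nir_image, axis=1)  # one value per depth
    return profile / profile.mean()

# Comparing the two illumination conditions over the upper snowpack:
# flash_img and ambient_img are 2D arrays from the raw (non-debayered) data.
# delta = depth_profile(ambient_img) - depth_profile(flash_img)
# A positive delta in the top ~10-15 cm would reflect ambient light
# penetrating from above and transmitting through the pit wall.
```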
3-D Flow Visualization with a Light-field Camera
NASA Astrophysics Data System (ADS)
Thurow, B.
2012-12-01
Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. [Figure captions: schematic illustrating the concept of a plenoptic camera, in which each pixel records both the position and angle of light rays entering the camera, allowing an image to be computationally refocused after acquisition; instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
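The MART step can be summarized compactly. Below is a minimal dense-matrix sketch of the multiplicative update (production codes use sparse weights derived from the microlens/camera model, plus many performance refinements):

```python
import numpy as np

def mart(W, I, n_vox, n_iter=20, mu=1.0):
    """Multiplicative Algebraic Reconstruction Technique.

    W: (n_rays, n_vox) weight matrix, W[i, j] = contribution of voxel j
       to recorded pixel i (from the camera/microlens model).
    I: (n_rays,) recorded pixel intensities.
    Returns E: (n_vox,) reconstructed voxel intensities.
    """
    E = np.ones(n_vox)
    for _ in range(n_iter):
        for i in range(len(I)):
            proj = W[i] @ E                       # forward projection of ray i
            if proj > 0:
                E *= (I[i] / proj) ** (mu * W[i])  # multiplicative correction
    return E
```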
Chen, Brian R; Poon, Emily; Alam, Murad
2018-01-01
Lighting is an important component of consistent, high-quality dermatologic photography, and different types of lighting solutions are available. The objective was to evaluate currently available lighting equipment and methods suitable for procedural dermatology. Overhead lighting, built-in camera flashes, external flash units, studio strobes, and light-emitting diode (LED) light panels were evaluated with regard to their utility for dermatologic surgeons. A set of ideal lighting characteristics was used to examine the capabilities and limitations of each type of lighting solution. Recommendations regarding lighting solutions and optimal usage configurations were made in terms of the clinical environment and the purpose of the image. Overhead lighting may be a convenient option for general documentation. An on-camera lighting solution using a built-in camera flash or a camera-mounted external flash unit provides portability and consistent lighting with minimal training. An off-camera lighting solution with studio strobes, external flash units, or LED light panels provides versatility and even lighting with minimal shadows and glare. The selection of an optimal lighting solution is contingent on practical considerations and the purpose of the image.
Improved Fast, Deep Record Length, Time-Resolved Visible Spectroscopy of Plasmas Using Fiber Grids
NASA Astrophysics Data System (ADS)
Brockington, S.; Case, A.; Cruz, E.; Williams, A.; Witherspoon, F. D.; Horton, R.; Klauser, R.; Hwang, D.
2017-10-01
HyperV Technologies is developing a fiber-coupled, deep-record-length, low-light camera head for performing high-time-resolution spectroscopy on visible emission from plasma events. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified silicon photomultipliers, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. A second-generation prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX). Pixel performance of 10 megaframes/s with record lengths of up to 256,000 frames (25.6 ms) was achieved. Pixel resolution was 12 bits. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. Experimental results will be discussed, along with future plans for this diagnostic. Work supported by USDOE SBIR Grant DE-SC0013801.
2016-10-17
Pandora is seen here, in isolation beside Saturn's kinked and constantly changing F ring. Pandora (near upper right) is 50 miles (81 kilometers) wide. The moon has an elongated, potato-like shape (see PIA07632). Two faint ringlets are visible within the Encke Gap, near lower left. The gap is about 202 miles (325 kilometers) wide. The much narrower Keeler Gap, which lies outside the Encke Gap, is maintained by the diminutive moon Daphnis (not seen here). This view looks toward the sunlit side of the rings from about 23 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Aug. 12, 2016. The view was acquired at a distance of approximately 907,000 miles (1.46 million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 113 degrees. Image scale is 6 miles (9 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20504
Optimising Camera Traps for Monitoring Small Mammals
Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce
2013-01-01
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats ( Mustela erminea ), feral cats (Felis catus) and hedgehogs ( Erinaceus europaeus ). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790
2017-11-27
These two images illustrate just how far Cassini traveled to get to Saturn. On the left is one of the earliest images Cassini took of the ringed planet, captured during the long voyage from the inner solar system. On the right is one of Cassini's final images of Saturn, showing the site where the spacecraft would enter the atmosphere on the following day. In the left image, taken in 2001, about six months after the spacecraft passed Jupiter for a gravity assist flyby, the best view of Saturn using the spacecraft's high-resolution (narrow-angle) camera was on the order of what could be seen using the Earth-orbiting Hubble Space Telescope. At the end of the mission (at right), from close to Saturn, even the lower resolution (wide-angle) camera could capture just a tiny part of the planet. The left image looks toward Saturn from 20 degrees below the ring plane and was taken on July 13, 2001 in wavelengths of infrared light centered at 727 nanometers using the Cassini spacecraft narrow-angle camera. The view at right is centered on a point 6 degrees north of the equator and was taken in visible light using the wide-angle camera on Sept. 14, 2017. The view on the left was acquired at a distance of approximately 317 million miles (510 million kilometers) from Saturn. Image scale is about 1,900 miles (3,100 kilometers) per pixel. The view at right was acquired at a distance of approximately 360,000 miles (579,000 kilometers) from Saturn. Image scale is 22 miles (35 kilometers) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21353
2004-04-22
A montage of Cassini images, taken in four different regions of the spectrum from ultraviolet to near-infrared, demonstrates that there is more to Saturn than meets the eye. The pictures show the effects of absorption and scattering of light at different wavelengths by both atmospheric gas and clouds of differing heights and thicknesses. They also show absorption of light by colored particles mixed with white ammonia clouds in the planet's atmosphere. Contrast has been enhanced to aid visibility of the atmosphere. Cassini's narrow-angle camera took these four images over a period of 20 minutes on April 3, 2004, when the spacecraft was 44.5 million kilometers (27.7 million miles) from the planet. The image scale is approximately 267 kilometers (166 miles) per pixel. All four images show the same face of Saturn. In the upper left image, Saturn is seen in ultraviolet wavelengths (298 nanometers); at upper right, in visible blue wavelengths (440 nanometers); at lower left, in far red wavelengths just beyond the visible-light spectrum (727 nanometers); and at lower right, in near-infrared wavelengths (930 nanometers). The sliver of light seen in the northern hemisphere appears bright in the ultraviolet and blue (top images) and is nearly invisible at longer wavelengths (bottom images). The clouds in this part of the northern hemisphere are deep, and sunlight is illuminating only the cloud-free upper atmosphere. The shorter wavelengths are consequently scattered by the gas and make the illuminated atmosphere bright, while the longer wavelengths are absorbed by methane. Saturn's rings also appear noticeably different from image to image; the exposure times range from two to 46 seconds. The rings appear dark in the 46-second ultraviolet image because they inherently reflect little light at these wavelengths. The differences at other wavelengths are mostly due to the differences in exposure times. http://photojournal.jpl.nasa.gov/catalog/PIA05388
Messy interviews: changing conditions for politicians’ visibility on the web
Kroon, Åsa; Eriksson, Göran
2016-01-01
This article provides an updated analysis relating to John B. Thompson's argument about political visibility and fragility. It does so in light of recent years' development of communication technologies and the proliferation of nonbroadcasting media organizations producing TV. A new mediated encounter for politicians, the live web interview, is analyzed in detail, as produced and streamed by two Swedish tabloids during the 2014 election campaign. It is argued that the live web interview is not yet a recognizable 'communicative activity type' with an obvious set of norms, rules, and routines. This makes politicians more intensely exposed to moments of mediated fragility which may be difficult to control. The most crucial condition that changes how politicians are able to manage their visibility is the constantly rolling 'non-exclusive' live camera, which does not give the politician any room for error. The tabloids do not seem to mind 'things going a bit wrong' while airing; rather, interactional flaws are argued to be part and parcel of the overall web TV performance. PMID:29708105
2016-09-19
Pan may be small as satellites go, but like many of Saturn's ring moons, it has a very visible effect on the rings. Pan (17 miles or 28 kilometers across, left of center) holds open the Encke gap and shapes the ever-changing ringlets within the gap (some of which can be seen here). In addition to raising waves in the A and B rings, other moons help shape the F ring, the outer edge of the A ring and open the Keeler gap. This view looks toward the sunlit side of the rings from about 8 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 2, 2016. The view was acquired at a distance of approximately 840,000 miles (1.4 million kilometers) from Saturn and at a sun-Saturn-spacecraft, or phase, angle of 128 degrees. Image scale is 5 miles (8 kilometers) per pixel. Pan has been brightened by a factor of two to enhance its visibility. http://photojournal.jpl.nasa.gov/catalog/PIA20499
NICMOS PEERS INTO HEART OF DYING STAR
NASA Technical Reports Server (NTRS)
2002-01-01
The Egg Nebula, also known as CRL 2688, is shown on the left as it appears in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2) and on the right as it appears in infrared light with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Since infrared light is invisible to humans, the NICMOS image has been assigned colors to distinguish different wavelengths: blue corresponds to starlight reflected by dust particles, and red corresponds to heat radiation emitted by hot molecular hydrogen. Objects like the Egg Nebula are helping astronomers understand how stars like our Sun expel carbon and nitrogen -- elements crucial for life -- into space. Studies on the Egg Nebula show that these dying stars eject matter at high speeds along a preferred axis and may even have multiple jet-like outflows. The signature of the collision between this fast-moving material and the slower outflowing shells is the glow of hydrogen molecules captured in the NICMOS image. The distance between the tip of each jet is approximately 200 times the diameter of our solar system (out to Pluto's orbit). Credits: Rodger Thompson, Marcia Rieke, Glenn Schneider, Dean Hines (University of Arizona); Raghvendra Sahai (Jet Propulsion Laboratory); NICMOS Instrument Definition Team; and NASA Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
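A rough sketch of the hybrid-feature idea, assuming some pretrained CNN embedding is available as `cnn_extractor` (a placeholder, not the authors' network) and using scikit-image's LBP in place of the paper's exact MLBP configuration:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def mlbp_features(gray_face, radii=(1, 2, 3)):
    """Multi-level LBP: concatenate uniform-LBP histograms at several radii."""
    feats = []
    for r in radii:
        codes = local_binary_pattern(gray_face, P=8, R=r, method="uniform")
        # P=8 "uniform" LBP yields codes 0..9, hence 10 histogram bins.
        hist, _ = np.histogram(codes, bins=np.arange(11), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def hybrid_features(gray_face, cnn_extractor):
    """Concatenate deep features (any pretrained CNN embedding) with MLBP."""
    deep = cnn_extractor(gray_face)  # e.g. a pooled conv-layer output
    return np.concatenate([deep, mlbp_features(gray_face)])

# Training stage (illustrative):
# X = np.stack([hybrid_features(f, cnn_extractor) for f in face_crops])
# svm = SVC(kernel="rbf").fit(X, labels)   # labels: real=0, attack=1
```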
A new radiometric unit of measure to characterize SWIR illumination
NASA Astrophysics Data System (ADS)
Richards, A.; Hübner, M.
2017-05-01
We propose a new radiometric unit of measure we call the `swux' to unambiguously characterize scene illumination in the SWIR spectral band between 0.8μm-1.8μm, where most of the ever-increasing numbers of deployed SWIR cameras (based on standard InGaAs focal plane arrays) are sensitive. Both military and surveillance applications in the SWIR currently suffer from a lack of a standardized SWIR radiometric unit of measure that can be used to definitively compare or predict SWIR camera performance with respect to SNR and range metrics. We propose a unit comparable to the photometric illuminance lux unit; see Ref. [1]. The lack of a SWIR radiometric unit becomes even more critical if one uses lux levels to describe SWIR sensor performance at twilight or even low light condition, since in clear, no-moon conditions in rural areas, the naturally-occurring SWIR radiation from nightglow produces a much higher irradiance than visible starlight. Thus, even well-intentioned efforts to characterize a test site's ambient illumination levels in the SWIR band may fail based on photometric instruments that only measure visible light. A study of this by one of the authors in Ref. [2] showed that the correspondence between lux values and total SWIR irradiance in typical illumination conditions can vary by more than two orders of magnitude, depending on the spectrum of the ambient background. In analogy to the photometric lux definition, we propose the SWIR irradiance equivalent `swux' level, derived by integration over the scene SWIR spectral irradiance weighted by a spectral sensitivity function S(λ), a SWIR analog of the V(λ) photopic response function.
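Although the paper defines the unit precisely, the lux analogy suggests a form like the following, where E_λ is the scene spectral irradiance, S(λ) is the SWIR sensitivity weighting analogous to the photopic V(λ), and K_s is a normalization constant (the exact S(λ) and K_s are the authors' to specify):

```latex
E_{\mathrm{swux}} = K_s \int_{0.8\,\mu\mathrm{m}}^{1.8\,\mu\mathrm{m}} E_{\lambda}(\lambda)\, S(\lambda)\, d\lambda
```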
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
The fundus camera is widely used in the screening and diagnosis of retinal disease, and is a simple and widely used piece of medical equipment. Early fundus cameras dilated the pupil with a mydriatic agent to increase the amount of incoming light, which left patients with vertigo and blurred vision. Nonmydriatic designs are the trend in fundus cameras. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm light for imaging, and 808 nm light for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to restrain the stray light from the cornea center. The focus of the camera is adjusted by repositioning the CCD along the optical axis. The diopter range is between -20 m⁻¹ and +20 m⁻¹.
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data are also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
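As an illustration of why depth simplifies person segmentation under changing illumination (this is generic OpenCV block matching, not TYZX's proprietary pipeline; file names and parameters are illustrative):

```python
import cv2
import numpy as np

# Disparity from a rectified stereo pair: nearer objects (people) have larger
# disparity, so a simple threshold segments foreground candidates regardless
# of lighting or apparent color.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point
person_mask = disparity > 20.0  # pixels closer than an illustrative depth cut
```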
Automatic Detection of Diseased Tomato Plants Using Thermal and Stereo Visible Light Images
Raza, Shan-e-Ahmed; Prince, Gillian; Clarkson, John P.; Rajpoot, Nasir M.
2015-01-01
Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, thermal image of a plant affected by disease has been known to be affected by environmental conditions which include leaf angles and depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that by combining these with the depth information, we can considerably improve the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission. PMID:25861025
Sub-micrometer resolution proximity X-ray microscope with digital image registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chkhalo, N. I.; Salashchenko, N. N.; Sherbakov, A. V., E-mail: SherbakovAV@ipm.sci-nnov.ru
A compact laboratory proximity soft X-ray microscope providing submicrometer spatial resolution and digital image registration is described. The microscope consists of a laser-plasma soft X-ray radiation source, a Schwarzschild objective to illuminate the test sample, and a two-coordinate detector for image registration. Radiation, which passes through the sample under study, generates an absorption image on the front surface of the detector. Optical ceramic YAG:Ce was used to convert the X-rays into visible light. An image was transferred from the scintillator to a charge-coupled device camera with a Mitutoyo Plan Apo series lens. The detector's design allows the use of lenses with numerical apertures of NA = 0.14, 0.28, and 0.55 without changing the dimensions and arrangement of the elements of the device. This design allows one to change the magnification, spatial resolution, and field of view of the X-ray microscope. A spatial resolution better than 0.7 μm and an energy conversion efficiency of the X-ray radiation with a wavelength of 13.5 nm into visible light collected by the detector of 7.2% were achieved with the largest aperture lens.
The Propeller Belts in Saturn A Ring
2017-01-30
This image from NASA's Cassini mission shows a region in Saturn's A ring. The level of detail is twice as high as this part of the rings has ever been seen before. The view contains many small, bright blemishes due to cosmic rays and charged particle radiation near the planet. The view shows a section of the A ring known to researchers for hosting belts of propellers -- bright, narrow, propeller-shaped disturbances in the ring produced by the gravity of unseen embedded moonlets. Several small propellers are visible in this view. These are on the order of 10 times smaller than the large, bright propellers whose orbits scientists have routinely tracked (and which are given nicknames for famous aviators). This image is a lightly processed version, with minimal enhancement, preserving all original details present in the image. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 18, 2016. The view was obtained at a distance of approximately 33,000 miles (54,000 kilometers) from the rings and looks toward the unilluminated side of the rings. Image scale is about a quarter-mile (330 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21059
A review of astronomical science with visible light adaptive optics
NASA Astrophysics Data System (ADS)
Close, Laird M.
2016-07-01
We review astronomical results in the visible (λ<1μm) with adaptive optics. Other than a brief period in the early 1990s, there has been little (<1 paper/yr) night-time astronomical science published with AO in the visible from 2000-2013 (outside of the solar or Space Surveillance Astronomy communities, where visible AO is the norm, but not the topic of this invited review). However, since mid-2013 there has been a rapid increase in visible AO, with over 50 refereed science papers published in just 2.5 years (visible AO is experiencing a rapid growth rate very similar to that of NIR AO science from 1997-2000; Close 2000). Currently the most productive small (D < 2 m) visible light AO telescope is the UV-LGS Robo-AO system (Baranec et al. 2016) on the robotic Palomar D=1.5 m telescope (currently relocated to the Kitt Peak 1.8m; Salama et al. 2016). Robo-AO uniquely offers the ability to target >15 objects/hr, which has enabled large (>3000 discrete targets) companion star surveys and has resulted in 23 refereed science publications. The most productive large telescope visible AO system is the D=6.5m Magellan telescope AO system (MagAO). MagAO is an advanced Adaptive Secondary Mirror (ASM) AO system at the Magellan 6.5m in Chile (Morzinski et al. 2016). This ASM secondary has 585 actuators with <1 msec response times (0.7 ms typically). MagAO utilizes a 1 kHz pyramid wavefront sensor. The relatively small actuator pitch (~22 cm/subap) allows moderate Strehls to be obtained in the visible (0.63-1.05 microns). Long exposures (60s) achieve <30mas resolutions, 30% Strehls at 0.62 microns (r') with the VisAO camera in 0.5" seeing with bright R <= 9 mag stars. These capabilities have led to over 22 MagAO refereed science publications in the visible. The largest (D=8m) telescope to achieve regular visible AO science is SPHERE/ZIMPOL. ZIMPOL is a polarimeter fed by the 1.2 kHz SPHERE ExAO system (Fusco et al. 2016). ZIMPOL's ability to differentiate scattered polarized light from starlight allows the sensitive detection of circumstellar disks, stellar surfaces, and envelopes of evolved AGB stars. Here we review the key steps to having good performance in the visible and review the exciting new AO visible science opportunities and science results in the fields of: exoplanet detection; circumstellar and protoplanetary disks; young stars; AGB stars; emission line jets; and stellar surfaces. The recent rapid increase in the scientific publications and power of visible AO is due to the maturity of the next generation of AO systems and our new ability to probe circumstellar regions with very high (10-30 mas) spatial resolutions that would otherwise require much larger (>10m) diameter telescopes in the infrared.
2004-01-01
Released to commemorate the 14th anniversary of NASA's Hubble Space Telescope (HST) is this image of the galaxy cataloged as AM 0644-741. Resembling a diamond-encrusted bracelet, a ring of brilliant blue star clusters wraps around the yellowish nucleus of what was once a normal spiral galaxy. Located 300 million light-years away in the direction of the southern constellation Dorado, the sparkling blue ring is 150,000 light-years in diameter, making it larger than our entire home galaxy, the Milky Way. Ring galaxies are a striking example of how collisions between galaxies can dramatically change their structure while triggering the formation of new stars. Typically one galaxy plunges directly into the disk of another. The galaxy that pierced through this ring galaxy lies outside the image but is visible in larger-field images. The faint galaxy visible to the left of the ring galaxy is a coincidental background galaxy that is not interacting with the ring. Rampant star formation explains why the ring is so blue: it is continuously forming massive, young, hot stars. Another sign of robust star formation is the pink regions along the ring; these are rarefied clouds of glowing hydrogen gas, fluorescing because of the strong ultraviolet light from the newly formed stars. The Hubble Heritage Team used the Hubble Advanced Camera for Surveys to take this image, using a combination of four separate filters that isolate blue, green, red, and near-infrared light to create the color image.
Characterization of SWIR cameras by MRC measurements
NASA Astrophysics Data System (ADS)
Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.
2014-05-01
Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level or weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g., the USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g., an incandescent lamp), and the irradiance has to be measured in W/m² instead of lux = lumen/m². Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band in-house-designed Vis-SWIR camera system are discussed.
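Contrast here is presumably the usual Michelson definition applied to the bar targets; for the SWIR case the quantities become band-integrated radiances (W m⁻² sr⁻¹) rather than luminances:

```latex
C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}
```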
Høye, Gudrun; Fridman, Andrei
2013-05-06
Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that were recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the difference between the two techniques by varying key parameters such as the pixel-to-microlens ratio (PMR), the light-field-to-tomographic-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
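Both techniques end in the same step: locating the peak of a 3D cross-correlation between reconstructed particle volumes. A minimal sketch (integer-voxel displacement only; real PIV codes add sub-voxel peak fitting and iterative window deformation):

```python
import numpy as np
from scipy.signal import fftconvolve

def displacement_3d(vol_a, vol_b):
    """Peak of the 3D cross-correlation between two interrogation volumes.

    vol_a, vol_b: reconstructed particle intensity volumes (e.g. from MART)
    at times t and t + dt. Returns the integer-voxel displacement vector.
    """
    a = vol_a - vol_a.mean()
    b = vol_b - vol_b.mean()
    # Correlation via convolution with the reversed array.
    corr = fftconvolve(b, a[::-1, ::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - (np.array(vol_a.shape) - 1)  # zero-shift at center
```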
Group Delay Tracking with the Sydney University Stellar Interferometer
NASA Astrophysics Data System (ADS)
Lawson, Peter R.
1994-08-01
The Sydney University Stellar Interferometer (SUSI) is a long baseline optical interferometer, located at the Paul Wild Observatory near Narrabri, in northern New South Wales, Australia. It is designed to measure stellar angular diameters using light collected from a pair of siderostats, with 11 fixed siderostats giving separations between 5 and 640 m. Apertures smaller than Fried's coherence length, r_0, are used and active tilt-compensation is employed. This ensures that when the beams are combined in the pupil plane the wavefronts are parallel. Fringes are detected when the optical path-difference between the arriving wavefronts is less than the coherence length of light used for the observation. While observing a star it is necessary to compensate for the changes in pathlength due to the earth's rotation. It is also highly desirable to compensate for path changes due to the effects of atmospheric turbulence. Tracking the path-difference permits an accurate calibration of the fringe visibility, allows larger bandwidths to be used, and therefore improves the sensitivity of the instrument. I describe a fringe tracking system which I developed for SUSI, based on group delay tracking with a PAPA (Precision Analog Photon Address) detector. The method uses short exposure images of fringes, 1-10 ms, detected in the dispersed spectra of the combined starlight. The number of fringes across a fixed bandwidth of channeled spectrum is directly proportional to the path-difference between the arriving wavefronts. A Fast Fourier Transform, implemented in hardware, is used to calculate the spatial power spectrum of the fringes, thereby locating the delay. The visibility loss due to a non-constant fringe spacing on the detector is investigated, and the improvements obtained from rebinning the photon data are shown. The low light level limitations of group delay tracking are determined theoretically with emphasis on the probability of tracking error, rather than the signal-to-noise ratio. Experimental results from both laboratory studies and stellar observations are presented. These show the first closed-loop operation of a fringe tracking system based on observations of group delay with a stellar interferometer. The Sydney University PAPA camera, a photon counting array detector developed for use in this work, is also described. The design principles of the PAPA camera are outlined and the potential sources of image artifacts are identified. The artifacts arise from the use of optical encoding with Gray coded masks, and the new camera is distinguished by its mask-plate, which was designed to overcome artifacts due to vignetting. New lens mounts are also presented which permit a simplified optical alignment without the need for tilt-plates. The performance of the camera is described.
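The core of the group-delay estimator is a one-line idea: the fringe frequency across the channeled spectrum, found from an FFT, is the optical path-difference. A minimal numpy sketch (not the SUSI hardware implementation):

```python
import numpy as np

def group_delay(spectrum, sigma_step):
    """Locate the optical path-difference from one short-exposure
    channeled spectrum.

    spectrum: fringe intensities sampled at uniform wavenumber spacing
    sigma_step (in 1/m) across the fixed bandwidth. Since the fringe count
    across the band is proportional to the path-difference, the peak of
    the spatial power spectrum sits at the group delay.
    """
    power = np.abs(np.fft.rfft(spectrum - spectrum.mean())) ** 2
    freqs = np.fft.rfftfreq(len(spectrum), d=sigma_step)  # cycles per (1/m) = metres
    return freqs[np.argmax(power)]  # OPD estimate in metres
```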
Energy-efficient lighting system for television
Cawthorne, Duane C.
1987-07-21
A light control system for a television camera comprises an artificial light control system cooperative with an iris control system. The artificial light control system adjusts the power to lamps illuminating the camera viewing area so as to provide only the artificial illumination necessary to produce an adequate video signal when the camera iris is substantially open.
A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roquemore, A; Maingi, R; Lasnier, C
2007-06-19
In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior, and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small Type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was recently installed on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilization of the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet (UVTV) system, which has a view similar to the view developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.
NASA Astrophysics Data System (ADS)
Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.
2009-12-01
Ground-based, visible light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the utilization of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution we combine high temporal resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights on temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: a) how does year-round grazing affect pepperweed canopy development, b) is it possible to identify phenological key events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery, c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems, and d) what are the scales of temporal correlation between digital camera signals and carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks causing earlier green-up and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both a global environmental change and land management perspective.
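The "simple greenness index" mentioned in question (c) is commonly computed as the green chromatic coordinate, gcc = G / (R + G + B), averaged over a fixed region of interest in each image. The sketch below assumes that standard definition and an arbitrary ROI for illustration; it is not necessarily the authors' exact processing chain.

```python
import numpy as np

# Green chromatic coordinate (gcc) from an RGB canopy image: the fraction of
# total brightness contributed by the green channel, averaged over an ROI.

def green_chromatic_coordinate(rgb, roi=None):
    """rgb: HxWx3 uint8 image array; roi: optional pair of slices."""
    img = rgb.astype(np.float64)
    if roi is not None:
        img = img[roi]
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    total = r + g + b
    total[total == 0] = np.nan        # avoid division by zero on black pixels
    return np.nanmean(g / total)

# Usage with a synthetic frame; in practice this would loop over the camera's
# image time series to build a gcc trajectory through the growing season.
frame = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
print(f"gcc = {green_chromatic_coordinate(frame, roi=np.s_[100:300, 200:500]):.3f}")
```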
Limmen, Roxane M; Ceelen, Manon; Reijnders, Udo J L; Joris Stomp, S; de Keijzer, Koos C; Das, Kees
2013-03-01
The use of narrow-banded visible light sources to improve the visibility of injuries has hardly been investigated, and studies examining the extent of this improvement are lacking. In this study, narrow-banded beams of light within the visible light spectrum were used to explore their ability to improve the visibility of external injuries. The beams of light were produced by four crime-lites® providing narrow-banded beams of light between 400 and 550 nm. The visibility of the injuries was assessed through specific long-pass filters supplied with the set of crime-lites®. Forty-three percent of the examined injuries improved in visibility under the narrow-banded visible light. In addition, injuries were visualized that were not visible or just barely visible to the naked eye. The improvements in visibility were particularly marked with the use of the crime-lites® "violet" and "blue", covering the spectrum between 400-430 and 430-470 nm. This simple noninvasive method showed a great potential contribution to injury examination. © 2012 American Academy of Forensic Sciences.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.
Passive radiation detection using optically active CMOS sensors
NASA Astrophysics Data System (ADS)
Dosiek, Luke; Schalk, Patrick D.
2013-05-01
Recently, there have been a number of small-scale and hobbyist successes in employing commodity CMOS-based camera sensors for radiation detection. For example, several smartphone applications initially developed for use in areas near the Fukushima nuclear disaster are capable of detecting radiation using a cell phone camera, provided opaque tape is placed over the lens. In all current useful implementations, it is required that the sensor not be exposed to visible light. We seek to build a system that does not have this restriction. While building such a system would require sophisticated signal processing, it would nevertheless provide great benefits. In addition to fulfilling their primary function of image capture, cameras would also be able to detect unknown radiation sources even when the danger is considered to be low or non-existent. By experimentally profiling the image artifacts generated by gamma ray and β particle impacts, algorithms are developed to identify the unique features of radiation exposure, while discarding optical interaction and thermal noise effects. Preliminary results focus on achieving this goal in a laboratory setting, without regard to integration time or computational complexity. However, future work will seek to address these additional issues.
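One plausible way to flag radiation-induced artifacts without light-proofing the sensor is to look for pixel clusters that stand out sharply from a locally smoothed version of the frame, since particle impacts are spatially abrupt while optical content varies smoothly. The sketch below illustrates that idea; the median-filter approach and the thresholds are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter, label

# Flag pixel clusters far brighter than their local neighborhood as candidate
# gamma/beta impacts; smooth optical content is suppressed by the median filter.

def radiation_candidates(frame, k=9, sigma_thresh=8.0):
    """Return a label map and count of clusters standing out from the
    locally smoothed image -- candidate particle impacts."""
    smooth = median_filter(frame.astype(np.float64), size=k)
    residual = frame - smooth
    noise = 1.4826 * np.median(np.abs(residual))   # robust sigma estimate (MAD)
    hits = residual > sigma_thresh * max(noise, 1e-6)
    labels, n = label(hits)
    return labels, n

frame = np.random.poisson(40, size=(256, 256)).astype(np.float64)
frame[100, 100:106] += 800     # synthetic beta-like track
labels, n = radiation_candidates(frame)
print(f"{n} candidate impact cluster(s) found")
```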
Calibration of the VENµS super-spectral camera
NASA Astrophysics Data System (ADS)
Topaz, Jeremy; Sprecher, Tuvia; Tinto, Francesc; Echeto, Pierre; Hagolle, Olivier
2017-11-01
A high-resolution super-spectral camera is being developed by Elbit Systems in Israel for the joint CNES-Israel Space Agency satellite, VENμS (Vegetation and Environment monitoring on a New Micro-Satellite). This camera will have 12 narrow spectral bands in the visible/NIR region and will give images with 5.3 m resolution from an altitude of 720 km, with an orbit which allows a two-day revisit interval for a number of selected sites distributed over some two-thirds of the earth's surface. The swath width will be 27 km at this altitude. To ensure the high radiometric and geometric accuracy needed to fully exploit such multiple data sampling, careful attention is given in the design to maximizing characteristics such as signal-to-noise ratio (SNR), spectral band accuracy, stray light rejection, inter-band pixel-to-pixel registration, etc. For the same reasons, accurate calibration of all the principal characteristics is essential, and this presents some major challenges. The methods planned to achieve the required level of calibration are presented following a brief description of the system design. A fuller description of the system design is given in [2], [3] and [4].
Analysis of edge density fluctuation measured by trial KSTAR beam emission spectroscopy system
NASA Astrophysics Data System (ADS)
Nam, Y. U.; Zoletnik, S.; Lampert, M.; Kovácsik, Á.
2012-10-01
A beam emission spectroscopy (BES) system based on a direct-imaging avalanche photodiode (APD) camera has been designed for Korea Superconducting Tokamak Advanced Research (KSTAR), and a trial system has been constructed and installed to evaluate the feasibility of the design. The system contains two cameras: one is an APD camera for the BES measurement and the other is a fast visible camera for position calibration. Two pneumatically actuated mirrors were positioned at the front and rear of the lens optics. The front mirror can switch the measurement between the edge and core regions of the plasma, and the rear mirror can switch between the APD and the visible camera. All systems worked properly, and the measured photon flux was consistent with expectations from the simulation. While the measurement data from the trial system were limited, they revealed some interesting characteristics of the KSTAR plasma, suggesting future research with a fully installed BES system. The analysis results and the development plan are presented in this paper.
Looking at Art in the IR and UV
NASA Astrophysics Data System (ADS)
Falco, Charles
2013-03-01
Starting with the very earliest cave paintings, art has been created to be viewed by the unaided eye and, until very recently, it wasn't even possible to see it at wavelengths outside the visible spectrum. However, it is now possible to view paintings, sculptures, manuscripts, and other cultural artifacts at wavelengths from the x-ray, through the ultraviolet (UV), to well into the infrared (IR). Further, thanks to recent advances in technology, this is becoming possible with hand-held instruments that can be used in locations that were previously inaccessible to anything but laboratory-scale image capture equipment. But, what can be learned from such ``non-visible'' images? In this talk I will briefly describe the characteristics of high resolution UV and IR imaging systems I developed for this purpose by modifying high resolution digital cameras. The sensitivity of the IR camera makes it possible to obtain images of art ``in situ'' with standard museum lighting, resolving features finer than 0.35 mm on a 1.0x0.67 m painting. I also have used both it and the UV camera in remote locations with battery-powered illumination sources. I will illustrate their capabilities with images of various examples of Western, Asian, and Islamic art in museums on three continents, describing how these images have revealed important new information about the working practices of artists as famous as Jan van Eyck. I also will describe what will be possible for this type of work with new capabilities that could be developed within the next few years. This work is based on a collaboration with David Hockney, and benefitted from image analysis research supported by ARO grant W911NF-06-1-0359-P00001.
1986-01-17
Range: 9.1 million kilometers (5.7 million miles) P-29478C These two pictures of Uranus, one in true color and the other in false color, were taken by Voyager 2's narrow-angle camera. The picture at left has been processed to show Uranus as the human eye would see it from the vantage point of the spacecraft. The image is a composite of shots taken through blue, green, and orange filters. The darker shadings on the upper right of the disk correspond to day-night boundaries on the planet. Beyond this boundary lies the hidden northern hemisphere of Uranus, which currently remains in total darkness as the planet rotates. The blue-green color results from the absorption of red light by methane gas in Uranus' deep, cold, and remarkably clear atmosphere. The picture at right uses false color and extreme contrast to bring out subtle details in the polar region of Uranus. Images obtained through ultraviolet, violet, and orange filters were respectively converted to the same blue, green, and red colors used to produce the picture at left. The very slight contrasts visible in true color are greatly exaggerated here. In this false-color picture, Uranus reveals a dark polar hood surrounded by a series of progressively lighter concentric bands. One possible explanation is that a brownish haze or smog, concentrated around the pole, is arranged into bands by zonal motions of the upper atmosphere. Several artifacts of the optics and processing are visible. The occasional donut shapes are shadows cast by dust in the camera optics; the processing needed to bring out faint features also brings out camera blemishes. In addition, the bright pink strip at the lower edge of the planet's limb is an artifact of the image enhancement. In fact, the limb is dark and uniform in color around the planet.
Two Moons and the Pleiades from Mars
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site] Inverted image of two moons and the Pleiades from Mars. Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit recently settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. In this view, the Pleiades, a star cluster also known as the 'Seven Sisters,' is visible in the lower left corner. The bright star Aldebaran and some of the stars in the constellation Taurus are visible on the right. Spirit acquired this image the evening of martian day, or sol, 590 (Aug. 30, 2005). The image on the right provides an enhanced-contrast view with annotation. Within the enhanced halo of light is an insert of an unsaturated view of Phobos taken a few images later in the same sequence. On Mars, Phobos would be easily visible to the naked eye at night, but would be only about one-third as large as the full Moon appears from Earth. Astronauts staring at Phobos from the surface of Mars would notice its oblong, potato-like shape and that it moves quickly against the background stars. Phobos takes only 7 hours, 39 minutes to complete one orbit of Mars. That is so fast, relative to the 24-hour-and-39-minute sol on Mars (the length of time it takes for Mars to complete one rotation), that Phobos rises in the west and sets in the east. Earth's moon, by comparison, rises in the east and sets in the west. The smaller martian moon, Deimos, takes 30 hours, 12 minutes to complete one orbit of Mars. That orbital period is longer than a martian sol, and so Deimos rises, like most solar system moons, in the east and sets in the west. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with the panoramic camera, using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
Mitigation of Atmospheric Effects on Imaging Systems
2004-03-31
focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, pixel pitch 38 µm, focal length of 1.8 m, FOV of 5.4 x 5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted
Science-Filters Study of Martian Rock Sees Hematite
2017-11-01
This false-color image demonstrates how use of special filters available on the Mast Camera (Mastcam) of NASA's Curiosity Mars rover can reveal the presence of certain minerals in target rocks. It is a composite of images taken through three "science" filters chosen for making hematite, an iron-oxide mineral, stand out as exaggerated purple. This target rock, called "Christmas Cove," lies in an area on Mars' "Vera Rubin Ridge" where Mastcam reconnaissance imaging (see PIA22065) with science filters suggested a patchy distribution of exposed hematite. Bright lines within the rocks are fractures filled with calcium sulfate minerals. Christmas Cove did not appear to contain much hematite until the rover team conducted an experiment on this target: Curiosity's wire-bristled brush, the Dust Removal Tool, scrubbed the rock, and a close-up with the Mars Hand Lens Imager (MAHLI) confirmed the brushing. The brushed area is about 2.5 inches (6 centimeters) across. The next day -- Sept. 17, 2017, on the mission's Sol 1819 -- this observation with Mastcam and others with the Chemistry and Camera (ChemCam) instrument showed a strong hematite presence that had been subdued beneath the dust. The team is continuing to explore whether the patchiness in the reconnaissance imaging may result more from variations in the amount of dust cover than from variations in hematite content. Curiosity's Mastcam combines two cameras: one with a telephoto lens and the other with a wider-angle lens. Each camera has a filter wheel that can be rotated in front of the lens for a choice of eight different filters. One filter for each camera is clear to all visible light, for regular full-color photos, and another is specifically for viewing the Sun. Some of the other filters were selected to admit wavelengths of light that are useful for identifying iron minerals. Each of the filters used for this image admits light from a narrow band of wavelengths, extending to only about 5 nanometers longer or shorter than the filter's central wavelength. Three observations are combined for this image, each through one of the filters centered at 751 nanometers (in the near-infrared part of the spectrum just beyond red light), 527 nanometers (green) and 445 nanometers (blue). Usual color photographs from digital cameras -- such as a Mastcam one of this same place (see PIA22067) -- also combine information from red, green and blue filtering, but the filters are in a microscopic grid in a "Bayer" filter array situated directly over the detector behind the lens, with wider bands of wavelengths. Mastcam's narrow-band filters used for this view help to increase spectral contrast, making blues bluer and reds redder, particularly with the processing used to boost contrast in each of the component images of this composite. Fine-grained hematite preferentially absorbs sunlight in the green portion of the spectrum, around 527 nanometers. That gives it the purple look from a combination of red and blue light reflected by the hematite and reaching the camera through the other two filters. https://photojournal.jpl.nasa.gov/catalog/PIA22066
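The compositing described here, three narrow-band images contrast-stretched independently and stacked as red, green, and blue channels, can be sketched in a few lines. The percentile stretch limits below are illustrative assumptions.

```python
import numpy as np

# Build a false-color composite from three narrow-band images: stretch each
# band independently, then map 751 nm -> R, 527 nm -> G, 445 nm -> B. The
# per-band stretch exaggerates spectral differences such as hematite's
# absorption near 527 nm.

def stretch(band, lo_pct=2, hi_pct=98):
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    return np.clip((band - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

def false_color(band_751nm, band_527nm, band_445nm):
    return np.dstack([stretch(band_751nm),
                      stretch(band_527nm),
                      stretch(band_445nm)])

# Synthetic bands stand in for the three Mastcam filter images.
shape = (200, 200)
composite = false_color(*(np.random.rand(*shape) for _ in range(3)))
print(composite.shape, composite.dtype)   # (200, 200, 3) float64
```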
Visible Light Induces Melanogenesis in Human Skin through a Photoadaptive Response.
Randhawa, Manpreet; Seo, InSeok; Liebel, Frank; Southall, Michael D; Kollias, Nikiforos; Ruvolo, Eduardo
2015-01-01
Visible light (400-700 nm) lies outside of the spectral range of what photobiologists define as deleterious radiation, and as a result few studies have examined the effects of the visible range of wavelengths on skin. This oversight is important considering that during outdoor activities skin is exposed to the full solar spectrum, including visible light, and to multiple exposures at different times and doses. Although the contribution of the UV component of sunlight to skin damage has been established, few studies have examined the effects of non-UV solar radiation on skin physiology in terms of inflammation, and limited information is available regarding the role of visible light in pigmentation. The purpose of this study was to determine the effect of visible light on the pro-pigmentation pathways and melanin formation in skin. Exposure to visible light in ex-vivo and clinical studies demonstrated an induction of pigmentation in skin by visible light. Results showed that a single exposure to visible light induced very little pigmentation, whereas multiple exposures to visible light resulted in darker and sustained pigmentation. These findings have potential implications for the management of photo-aggravated pigmentary disorders, the proper use of sunscreens, and the treatment of depigmented lesions.
NASA Astrophysics Data System (ADS)
Jantzen, Connie; Slagle, Rick
1997-05-01
The distinction between exposure time and sample rate is often the first point raised in any discussion of high speed imaging. Many high speed events require exposure times considerably shorter than those that can be achieved solely by the sample rate of the camera, where exposure time equals 1/sample rate. Gating, a method of achieving short exposure times in digital cameras, is often difficult to achieve for exposure time requirements shorter than 100 microseconds. This paper discusses the advantages and limitations of using the short duration light pulse of a near infrared laser with high speed digital imaging systems. By closely matching the output wavelength of the pulsed laser to the peak near infrared response of current sensors, high speed image capture can be accomplished at very low (visible) light levels of illumination. By virtue of the short duration light pulse, adjustable to as short as two microseconds, image capture of very high speed events can be achieved at relatively low sample rates of less than 100 pictures per second, without image blur. For our initial investigations, we chose a ballistic subject. The results of early experimentation revealed the limitations of applying traditional ballistic imaging methods when using a pulsed infrared light source with a digital imaging system. These early disappointing results clarified the need to further identify the unique system characteristics of the digital imager and pulsed infrared combination. It was also necessary to investigate how the infrared reflectance and transmittance of common materials affect the imaging process. This experimental work yielded a surprising, successful methodology which will prove useful in imaging ballistic and weapons tests, as well as forensics, flow visualizations, spray pattern analyses, and nocturnal animal behavioral studies.
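The arithmetic that motivates pulsed illumination is straightforward: motion blur in pixels is object speed times exposure time divided by the spatial scale imaged per pixel, so the light pulse rather than the sample rate sets the effective exposure. A minimal sketch with assumed optics and projectile speed:

```python
# Motion blur in pixels = (object speed x exposure time) / meters per pixel.
# Numbers are illustrative assumptions, not values from the paper.

def blur_pixels(speed_m_s, exposure_s, meters_per_pixel):
    return speed_m_s * exposure_s / meters_per_pixel

bullet = 900.0     # m/s, typical rifle projectile (assumed)
scale = 0.5e-3     # 0.5 mm imaged per pixel (assumed optics)

# Frame-rate-limited, gated, and pulsed-laser exposures:
for exposure in (1 / 1000, 100e-6, 2e-6):
    blur = blur_pixels(bullet, exposure, scale)
    print(f"exposure {exposure*1e6:7.1f} us -> blur {blur:8.1f} px")
```

The two-microsecond pulse reduces the blur to a few pixels, which is why the pulse, not the camera's sample rate, freezes the motion.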
Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.
McGregor, T J; Spence, D J; Coutts, D W
2008-01-01
We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 × 10 × 4 mm³, in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.
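The core of the depth decoding is a color-to-depth calibration: since the illumination spectrum varies monotonically along the depth axis, a particle's hue indexes its depth. The linear hue-to-depth mapping below is an illustrative assumption; a real instrument would use a measured calibration curve.

```python
import colorsys
import numpy as np

# Map a particle's apparent color to its depth inside the spectrally coded
# volume. The hue endpoints are assumed calibration values for illustration.

DEPTH_MM = 4.0                     # depth extent of the coded volume
HUE_NEAR, HUE_FAR = 0.95, 0.55     # assumed hues at the near/far volume faces

def depth_from_rgb(r, g, b):
    """Map a particle's mean RGB color (0-1 floats) to depth in mm."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    t = (h - HUE_NEAR) / (HUE_FAR - HUE_NEAR)   # 0 at near face, 1 at far face
    return float(np.clip(t, 0.0, 1.0)) * DEPTH_MM

print(f"{depth_from_rgb(0.9, 0.4, 0.2):.2f} mm")   # an orange-ish particle
```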
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Having information regarding the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact, digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to repeatedly click between the hairs at different points to select the scalp for focusing. A major drawback of the system was the resolution of the resulting pictures, which was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information across the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Multi-Wavelength Views of Protostars in IC 1396
NASA Technical Reports Server (NTRS)
2003-01-01
[figures removed for brevity, see original site] NASA's Spitzer Space Telescope has captured a glowing stellar nursery within a dark globule that is opaque at visible light. These new images pierce through the obscuration to reveal the birth of new protostars, or embryonic stars, and young stars never before seen. The Elephant's Trunk Nebula is an elongated dark globule within the emission nebula IC 1396 in the constellation of Cepheus. Located at a distance of 2,450 light-years, the globule is a condensation of dense gas that is barely surviving the strong ionizing radiation from a nearby massive star. The globule is being compressed by the surrounding ionized gas. The large composite image above is a product of combining data from the observatory's multiband imaging photometer and the infrared array camera. The thermal emission at 24 microns measured by the photometer (red) is combined with near-infrared emission from the camera at 3.6/4.5 microns (blue) and from 5.8/8.0 microns (green). The colors of the diffuse emission and filaments vary, and are a combination of molecular hydrogen (which tends to be green) and polycyclic aromatic hydrocarbon (brown) emissions. Within the globule, a half dozen newly discovered protostars, or embryonic stars, are easily discernible as the bright red-tinted objects, mostly along the southern rim of the globule. These were previously undetected at visible wavelengths due to obscuration by the thick cloud ('globule body') and by dust surrounding the newly forming stars. The newborn stars form in the dense gas because of compression by the wind and radiation from a nearby massive star (located outside the field of view to the left). The winds from this unseen star are also responsible for producing the spectacular filamentary appearance of the globule itself. The Spitzer Space Telescope also sees many newly discovered young stars, often enshrouded in dust, which may be starting the nuclear fusion that defines a star. These young stars are too cool to be seen at visible wavelengths. Both the protostars and young stars are bright in the mid-infrared because of their surrounding discs of solid material. A few of the visible-light stars in this image were found to have excess infrared emission, suggesting they are more mature stars surrounded by primordial remnants from their formation, or from crumbling asteroids and comets in their planetary systems.
MS Walheim poses with a Hasselblad camera on the flight deck of Atlantis during STS-110
2002-04-08
STS110-E-5017 (8 April 2002) --- Astronaut Rex J. Walheim, STS-110 mission specialist, holds a camera on the aft flight deck of the Space Shuttle Atlantis. A blue and white Earth is visible through the overhead windows of the orbiter. The image was taken with a digital still camera.
Flammability Limits of Gases Under Low Gravity Conditions
NASA Technical Reports Server (NTRS)
Strehlow, R. A.
1985-01-01
The purpose of this combustion science investigation is to determine the effect of zero, fractional, and super gravity on the flammability limits of a premixed methane air flame in a standard 51 mm diameter flammability tube and to determine, if possible, the fluid flow associated with flame passage under zero-g conditions and the density (and hence, temperature) profiles associated with the flame under conditions of incipient extinction. This is accomplished by constructing an appropriate apparatus for placement in NASA's Lewis Research Center Lear Jet facility and flying the prescribed g-trajectories while the experiment is being performed. Data is recorded photographically using the visible light of the flame. The data acquired is: (1) the shape and propagation velocity of the flame under various g-conditions for methane compositions that are inside the flammable limits, and (2) the effect of gravity on the limits. Real time accelerometer readings for the three orthogonal directions are displayed in full view of the cameras and the framing rate of the cameras is used to measure velocities.
Live event reconstruction in an optically read out GEM-based TPC
NASA Astrophysics Data System (ADS)
Brunbauer, F. M.; Galgóczi, G.; Gonzalez Diaz, D.; Oliveri, E.; Resnati, F.; Ropelewski, L.; Streli, C.; Thuiner, P.; van Stenis, M.
2018-04-01
Combining the strong signal amplification made possible by Gaseous Electron Multipliers (GEMs) with the high spatial resolution provided by optical readout, high-performance radiation detectors can be realized. An optically read out GEM-based Time Projection Chamber (TPC) is presented. The device permits 3D track reconstruction by combining the 2D projections obtained with a CCD camera with timing information from a photomultiplier tube. Owing to the intuitive 2D representation of the tracks in the images and to automated control, data acquisition, and event reconstruction algorithms, the optically read out TPC permits live display of reconstructed tracks in three dimensions. An Ar/CF4 (80/20%) gas mixture was used to maximize the scintillation yield in the visible wavelength region, matching the quantum efficiency of the camera. The device is integrated in a UHV-grade vessel allowing for precise control of the gas composition and purity. Long-term studies in sealed-mode operation revealed a minor decrease in the scintillation light intensity.
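The reconstruction principle, camera image for (x, y) and PMT timing for z, reduces to a simple drift-time conversion. A minimal sketch with an assumed drift velocity and a naive ordering-based pairing of image samples to time samples:

```python
import numpy as np

# Combine the 2D (x, y) track projection from the camera with scintillation
# arrival times from the PMT: the electron drift velocity converts each
# arrival time into a depth z along the drift direction. The drift velocity
# and the pairing scheme are illustrative assumptions.

V_DRIFT_CM_PER_US = 0.8    # assumed electron drift velocity in Ar/CF4

def z_from_times(t_us, t0_us=0.0):
    """Convert scintillation arrival times (us) to drift depth z (cm)."""
    return (np.asarray(t_us) - t0_us) * V_DRIFT_CM_PER_US

# (x, y) samples along a track from the camera image, paired here simply by
# order with time samples from the PMT waveform:
xy = np.array([[0.0, 0.0], [0.5, 0.4], [1.0, 0.9], [1.5, 1.2]])  # cm
t = [0.0, 1.0, 2.0, 3.0]                                         # us

track_3d = np.column_stack([xy, z_from_times(t)])
print(track_3d)    # each row: x, y, z in cm
```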
Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer
NASA Astrophysics Data System (ADS)
Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin
2017-12-01
An image acquisition device for an inspection robot with adaptive polarization adjustment is proposed; the device includes the inspection robot body, the image collecting mechanism, the polarizer, and the polarizer automatic actuating device. The image acquisition mechanism is arranged at the front of the inspection robot body for collecting image data of equipment in the substation. The polarizer is fixed on the automatic actuating device and installed in front of the image acquisition mechanism, such that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible camera as its central axis. The simulation results show that the system resolves the image blurring caused by glare, reflection of light, and shadow, so that the robot can observe details of the running status of electrical equipment. Full coverage of the inspection robot's observation targets among the substation equipment is achieved, which ensures the safe operation of the substation equipment.
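A control loop of the kind this device implies can be sketched as a search over polarizer angles for the one that minimizes a glare metric. The metric, step size, and hardware interfaces below are assumptions for illustration, not the authors' design.

```python
import numpy as np

# Adaptive polarizer rotation: step through the rotation range and keep the
# angle that minimizes a glare metric (here, the fraction of near-saturated
# pixels). capture() and rotate() stand in for real camera/actuator drivers.

def glare_metric(frame, sat_level=250):
    """Fraction of pixels at or above the near-saturation level."""
    return float(np.mean(frame >= sat_level))

def best_polarizer_angle(capture, rotate, step_deg=10):
    best_angle, best_score = 0, float("inf")
    for angle in range(0, 180, step_deg):   # polarization repeats every 180 deg
        rotate(angle)
        score = glare_metric(capture())
        if score < best_score:
            best_angle, best_score = angle, score
    rotate(best_angle)
    return best_angle

# Stand-in hardware: specular glare saturates the frame near 0 deg and falls
# off as the polarizer rotates away from the glare polarization.
state = {"angle": 0}
rotate = lambda a: state.update(angle=a)
capture = lambda: np.full((64, 64), 255 * abs(np.cos(np.radians(state["angle"]))))
print(f"selected angle: {best_polarizer_angle(capture, rotate)} deg")
```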
2015-10-15
NASA's Cassini spacecraft zoomed by Saturn's icy moon Enceladus on Oct. 14, 2015, capturing this stunning image of the moon's north pole. A companion view from the wide-angle camera (PIA20010) shows a zoomed out view of the same region for context. Scientists expected the north polar region of Enceladus to be heavily cratered, based on low-resolution images from the Voyager mission, but high-resolution Cassini images show a landscape of stark contrasts. Thin cracks cross over the pole -- the northernmost extent of a global system of such fractures. Before this Cassini flyby, scientists did not know if the fractures extended so far north on Enceladus. North on Enceladus is up. The image was taken in visible green light with the Cassini spacecraft narrow-angle camera. The view was acquired at a distance of approximately 4,000 miles (6,000 kilometers) from Enceladus and at a Sun-Enceladus-spacecraft, or phase, angle of 9 degrees. Image scale is 115 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19660
1989-08-19
Range: 8.6 million kilometers (5.3 million miles) Voyager took this 61-second exposure of Neptune through the clear filter with the narrow-angle camera. The Voyager cameras were programmed to make a systematic search for faint ring arcs and new satellites. The bright upper corner of the image is due to a residual image from a previous long exposure of the planet. The portion of the arc visible here is approximately 35 degrees in longitudinal extent, making it approximately 38,000 kilometers (24,000 miles) in length, and is broken up into three segments separated from each other by approximately 5 degrees. The trailing edge is at the upper right and ends abruptly, while the leading edge seems to fade into the background more gradually. This arc orbits very close to one of the newly discovered Neptune satellites, 1989N4. Close-up studies of this ring arc will be carried out in the coming days, which will give higher spatial resolution at different lighting angles. (JPL Ref: P-34617)
Statistical Analysis of an Infrared Thermography Inspection of Reinforced Carbon-Carbon
NASA Technical Reports Server (NTRS)
Comeaux, Kayla
2011-01-01
Each piece of flight hardware being used on the shuttle must be analyzed and must pass NASA requirements before the shuttle is ready for launch. One tool used to detect cracks that lie within flight hardware is infrared flash thermography. This is a non-destructive testing technique which uses an intense flash of light to heat the surface of a material, after which an infrared camera is used to record the cooling of the material. Since cracks within the material obstruct the natural heat flow through the material, they are visible when viewing the data from the infrared camera. We used Ecotherm, a software program, to collect data pertaining to the delaminations, and analyzed the data using Ecotherm and the University of Dayton Log-Logistic Probability of Detection (POD) software. The goal was to reproduce the statistical analysis produced by the University of Dayton software by using scatter plots, log transforms, and residuals to test the assumption of normality for the residuals.
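The log-logistic POD model referenced here makes the probability of detection a logistic function of the logarithm of flaw size. A minimal sketch with assumed parameter values, not results from this inspection:

```python
import math

# Log-logistic probability of detection: POD(a) = 1 / (1 + exp(-(ln a - mu)/sigma))
# for flaw size a > 0. mu sets the size detected 50% of the time (a50 = e^mu);
# sigma sets how sharply POD rises with size. Values below are illustrative.

def pod_log_logistic(a, mu, sigma):
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

mu, sigma = math.log(0.10), 0.25    # assumed a50 = 0.10 in; assumed shape
for a in (0.05, 0.10, 0.20, 0.40):
    print(f"flaw size {a:.2f} in -> POD {pod_log_logistic(a, mu, sigma):.3f}")
```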
Apparent minification in an imaging display under reduced viewing conditions.
Meehan, J W
1993-01-01
When extended outdoor scenes are imaged with magnification of 1 in optical, electronic, or computer-generated displays, scene features appear smaller and farther than in direct view. This has been shown to occur in various periscopic and camera-viewfinder displays outdoors in daylight. In four experiments it was found that apparent minification of the size of a planar object at a distance of 3-9 m indoors occurs in the viewfinder display of an SLR camera both in good light and in darkness with only the luminous object visible. The effect is robust and survives changes in the relationship between object luminance in the display and in direct view and occurs in the dark when subjects have no prior knowledge of room dimensions, object size or object distance. The results of a fifth experiment suggest that the effect is an instance of reduced visual size constancy consequent on elimination of cues for size, which include those for distance.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that we can capture the luminosity of the environment over 360 degrees of the surroundings in one image. We apply the light field method, one technique of Image-Based Rendering (IBR), to generate the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect many view-direction images in the light field. Thus our method allows the user to explore a wide scene, achieving a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera, and successfully generated arbitrary viewpoint images for a virtual tour of the environment.
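The light-field lookup that underlies arbitrary-view synthesis can be sketched as a nearest-neighbor query over stored rays. The sketch below uses a planar set of capture points and azimuthal ray directions as a simplified stand-in for the paper's omni-directional light field; the sampling layout and distance weighting are illustrative assumptions.

```python
import numpy as np

# Nearest-neighbor light-field lookup: the database stores radiance samples
# indexed by capture position and viewing direction; a novel view fetches,
# for each desired ray, the stored sample closest in position and direction.

rng = np.random.default_rng(0)
positions = rng.uniform(0, 5, size=(200, 2))        # capture points on a plane (m)
directions = rng.uniform(0, 2 * np.pi, size=200)    # azimuth of each stored ray
radiance = rng.uniform(0, 1, size=200)              # stored luminosity samples

def render_ray(p, theta, w_dir=1.0):
    """Look up the stored ray nearest to position p and azimuth theta;
    w_dir weights angular mismatch (rad) against positional mismatch (m)."""
    d_pos = np.linalg.norm(positions - p, axis=1)
    d_dir = np.abs((directions - theta + np.pi) % (2 * np.pi) - np.pi)
    return radiance[np.argmin(d_pos + w_dir * d_dir)]

print(f"synthesized sample: {render_ray(np.array([2.5, 2.5]), np.pi / 4):.3f}")
```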
A versatile indirect detector design for hard X-ray microimaging
NASA Astrophysics Data System (ADS)
Douissard, P.-A.; Cecilia, A.; Rochet, X.; Chapel, X.; Martin, T.; van de Kamp, T.; Helfen, L.; Baumbach, T.; Luquot, L.; Xiao, X.; Meinhardt, J.; Rack, A.
2012-09-01
Indirect X-ray detectors are of outstanding importance for high resolution imaging, especially at synchrotron light sources: while consisting mostly of components which are widely commercially available, they allow for a broad range of applications in terms of the X-ray energy employed, radiation dose to the detector, data acquisition rate and spatial resolving power. Frequently, an indirect detector consists of a thin-film single crystal scintillator and a high-resolution visible light microscope as well as a camera. In this article, a novel modular-based indirect design is introduced, which offers several advantages: it can be adapted for different cameras, i.e. different sensor sizes, and can be trimmed to work either with (quasi-)monochromatic illumination and the correspondingly lower absorbed dose or with intense white beam irradiation. In addition, it allows for a motorized quick exchange between different magnifications / spatial resolutions. Developed within the European project SCINTAX, it is now commercially available. The characteristics of the detector in its different configurations (i.e. for low dose or for high dose irradiation) as measured within the SCINTAX project will be outlined. Together with selected applications from materials research, non-destructive evaluation and life sciences they underline the potential of this design to make high resolution X-ray imaging widely available.
2015-06-15
The two large craters on Tethys, near the line where day fades to night, almost resemble two giant eyes observing Saturn. The location of these craters on Tethys' terminator throws their topography into sharp relief. Both are large craters, but the larger and southernmost of the two shows a more complex structure. The angle of the lighting highlights a central peak in this crater. Central peaks are the result of the surface reacting to the violent post-impact excavation of the crater. The northern crater does not show a similar feature. Possibly the impact was too small to form a central peak, or the composition of the material in the immediate vicinity couldn't support the formation of a central peak. In this image Tethys is significantly closer to the camera, while the planet is in the background. Yet the moon is still utterly dwarfed by the giant Saturn. This view looks toward the anti-Saturn side of Tethys. North on Tethys is up and rotated 42 degrees to the right. The image was taken in visible light with the Cassini spacecraft wide-angle camera on April 11, 2015. The view was obtained at a distance of approximately 75,000 miles (120,000 kilometers) from Tethys. Image scale at Tethys is 4 miles (7 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/pia18318
Stargazing at 'Husband Hill Observatory' on Mars
NASA Technical Reports Server (NTRS)
2005-01-01
NASA's Mars Exploration Rover Spirit continues to take advantage of extra solar energy by occasionally turning its cameras upward for night sky observations. Most recently, Spirit made a series of observations of bright star fields from the summit of 'Husband Hill' in Gusev Crater on Mars. Scientists use the images to assess the cameras' sensitivity and to search for evidence of nighttime clouds or haze. The image on the left is a computer simulation of the stars in the constellation Orion. The next three images are actual views of Orion captured with Spirit's panoramic camera during exposures of 10, 30, and 60 seconds. Because Spirit is in the southern hemisphere of Mars, Orion appears upside down compared to how it would appear to viewers in the Northern Hemisphere of Earth. 'Star trails' in the longer exposures are a result of the planet's rotation. The faintest stars visible in the 60-second exposure are about as bright as the faintest stars visible with the naked eye from Earth (about magnitude 6 in astronomical terms). The Orion Nebula, famous as a nursery of newly forming stars, is also visible in these images. Bright streaks in some parts of the images aren't stars or meteors or unidentified flying objects, but are caused by solar and galactic cosmic rays striking the camera's detector. Spirit acquired these images with the panoramic camera on Martian day, or sol, 632 (Oct. 13, 2005) at around 45 minutes past midnight local time, using the camera's broadband filter (wavelengths of 739 nanometers plus or minus 338 nanometers).
Experimental Estimation of CLASP Spatial Resolution: Results of the Instrument's Optical Alignment
NASA Technical Reports Server (NTRS)
Giono, Gabrial; Katsukawa, Yukio; Ishikawa, Ryoko; Narukage, Noriyuki; Bando, Takamasa; Kano, Ryohei; Suematsu, Yoshinori; Kobayashi, Ken; Winebarger, Amy; Auchere, Frederic
2015-01-01
The Chromospheric Lyman-Alpha SpectroPolarimeter (CLASP) is a sounding-rocket experiment currently being built at the National Astronomical Observatory of Japan. This instrument aims to probe for the first time the magnetic field strength and orientation in the solar upper chromosphere and lower transition region. CLASP will measure the polarization of the Lyman-alpha line (121.6 nm) with an unprecedented accuracy, and derive the magnetic field information through the Hanle effect. Although polarization accuracy and spectral resolution are crucial for the Hanle effect detection, spatial resolution is also important to obtain a reliable context image via the slit-jaw camera. As spatial resolution is directly related to the alignment of the optics, it is also a good way of ensuring that the alignment of the instrument meets the scientific requirement. This poster details the experiments carried out to align CLASP's optics (telescope and spectrograph), as both parts of the instrument were aligned separately. The telescope was aligned in double-pass mode, and a laser interferometer (He-Ne) was used to measure the telescope's wavefront error (WFE). The secondary mirror tilt and position were adjusted to remove coma and defocus aberrations from the WFE. The effect of gravity on the WFE measurement was estimated, and the final WFE derived for the zero-g condition of the CLASP telescope will be presented. In addition, an estimation of the spot shape and size derived from the final WFE will also be shown. The spectrograph was aligned with a custom procedure: because Lyman-alpha light is absorbed by air, the spectrograph's off-axis parabolic mirrors were aligned in visible light (VL) using a custom-made VL grating instead of the flight Lyman-alpha grating. Results of the alignment in visible light will be shown, and the spot shapes recorded with CCDs at various positions along the slit will be displayed. Results from both alignment experiments will be compared to the design requirements, and will be combined in order to estimate CLASP's spatial resolution after its alignment in visible light.
Predicting and Managing Lighting and Visibility for Human Operations in Space
NASA Technical Reports Server (NTRS)
Maida, James C.; Peacock, Brian
2003-01-01
Lighting is critical to human visual performance. On earth this problem is well understood and solutions are well defined and executed. Because the sun rises and sets on average every 45 minutes during Earth orbit, humans working in space must cope with extremely dynamic lighting conditions varying from very low light conditions to severe glare and contrast conditions. For critical operations, it is essential that lighting conditions be predictable and manageable. Mission planners need to determine whether low-light video cameras are required or whether additional luminaires, or lamps, need to be flown. Crew and flight directors need to have up-to-date daylight orbit timelines showing the best and worst viewing conditions for sunlight and shadowing. Where applicable and possible, lighting conditions need to be part of crew training. In addition, it is desirable to optimize the quantity and quality of light because of the potential impacts on crew safety, delivery costs, electrical power and equipment maintainability for both exterior and interior conditions. Addressing these issues, an illumination modeling system has been developed in the Space Human Factors Laboratory at NASA Johnson Space Center. The system is the integration of a physically based ray-tracing package ("Radiance"), developed at the Lawrence Berkeley Laboratories, a human-factors-oriented geometric modeling system developed by NASA, and an extensive database of humans and their work environments. Measured and published data have been collected for exterior and interior surface reflectivity; luminaire beam spread distribution, color and intensity; and video camera light sensitivity, and have been associated with their corresponding geometric models. Selecting an eye point and one or more light sources, including sun and earthshine, a snapshot of the light energy reaching the surfaces or reaching the eye point is computed. This energy map is then used to extract the information needed for useful predictions. Using a validated, comprehensive illumination model integrated with empirically derived data, predictions of lighting and viewing conditions have been successfully used for Shuttle and Space Station planning and assembly operations. This has successfully balanced the needs for adequate human performance with the utilization of resources. Keywords: modeling, ray tracing, luminaires, reflectivity, luminance, illuminance.
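The physical core of such point-by-point predictions is the direct illuminance law E = I cos(theta) / d^2. The sketch below reduces the modeling system to that single term, with illustrative numbers; the real system adds measured reflectivities, sun and earthshine sources, camera sensitivities, and ray-traced interreflections.

```python
import math

# Direct illuminance from a point-like luminaire: falls off with the inverse
# square of distance and the cosine of the angle of incidence. Numbers are
# illustrative assumptions, not values from the NASA system.

def direct_illuminance(intensity_cd, distance_m, incidence_deg):
    """E = I * cos(theta) / d^2, in lux."""
    return intensity_cd * math.cos(math.radians(incidence_deg)) / distance_m**2

# A hypothetical 8000-cd luminaire viewed from 4 m at 30 degrees incidence:
print(f"{direct_illuminance(8000, 4.0, 30):.0f} lux")
```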
ERIC Educational Resources Information Center
Tanner-Smith, Emily E.; Fisher, Benjamin W.
2015-01-01
Many U.S. schools use visible security measures (security cameras, metal detectors, security personnel) in an effort to keep schools safe and promote adolescents' academic success. This study examined how different patterns of visible security utilization were associated with U.S. middle and high school students' academic performance, attendance,…
NASA Technical Reports Server (NTRS)
Behar, Alberto; Carsey, Frank; Lane, Arthur; Engelhardt, Herman
2006-01-01
An instrumentation system has been developed for studying interactions between a glacier or ice sheet and the underlying rock and/or soil. Prior borehole imaging systems have been used in well-drilling and mineral-exploration applications and for studying relatively thin valley glaciers, but have not been used for studying thick ice sheets like those of Antarctica. The system includes a cylindrical imaging probe that is lowered into a hole that has been bored through the ice to the ice/bedrock interface by use of an established hot-water-jet technique. The images acquired by the cameras yield information on the movement of the ice relative to the bedrock and on visible features of the lower structure of the ice sheet, including ice layers formed at different times, bubbles, and mineralogical inclusions. At the time of reporting the information for this article, the system had just been deployed in two boreholes on the Amery ice shelf in East Antarctica, after successful 2000-2001 deployments in 4 boreholes at Ice Stream C, West Antarctica, and a 2002 deployment at Black Rapids Glacier, Alaska. The probe is designed to operate at temperatures from -40 to +40 C and to withstand the cold, wet, high-pressure [130-atm (13.20-MPa)] environment at the bottom of a water-filled borehole in ice as deep as 1.6 km. A current version is being outfitted to service 2.4-km-deep boreholes at the Rutford Ice Stream in West Antarctica. The probe (see figure) contains a side-looking charge-coupled-device (CCD) camera that generates both a real-time analog video signal and a sequence of still-image data, and contains a digital videotape recorder. The probe also contains a downward-looking CCD analog video camera, plus halogen lamps to illuminate the fields of view of both cameras. The analog video outputs of the cameras are converted to optical signals that are transmitted to a surface station via optical fibers in a cable. Electric power is supplied to the probe through wires in the cable at a potential of 170 VDC. A DC-to-DC converter steps the supply down to 12 VDC for the lights, cameras, and image-data-transmission circuitry. Heat generated by dissipation of electric power in the probe is removed simply by conduction through the probe housing to the adjacent water and ice.
Common aperture multispectral spotter camera: Spectro XR
NASA Astrophysics Data System (ADS)
Petrushevsky, Vladimir; Freiman, Dov; Diamant, Idan; Giladi, Shira; Leibovich, Maor
2017-10-01
The Spectro XR™ is an advanced color/NIR/SWIR/MWIR 16'' payload recently developed by Elbit Systems / ELOP. The payload's primary sensor is a spotter camera with a common 7'' aperture. The sensor suite also includes an MWIR zoom, an EO zoom, a laser designator or rangefinder, a laser pointer / illuminator and a laser spot tracker. A rigid structure, vibration damping and 4-axis gimbals enable a high level of line-of-sight stabilization. The payload's list of features includes a multi-target video tracker, precise boresight, strap-on IMU, embedded moving map, geodetic calculations suite, and image fusion. The paper describes the main technical characteristics of the spotter camera. A visible-quality, all-metal front catadioptric telescope maintains optical performance over a wide range of environmental conditions. High-efficiency coatings separate the incoming light into EO, SWIR and MWIR band channels. Both the EO and SWIR bands have dual FOV and 3 spectral filters each. Several variants of focal plane array formats are supported. The common aperture design facilitates superior DRI performance in EO and SWIR, in comparison to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low light performance. The SWIR band provides further atmospheric penetration, as well as see-spot capability at especially long ranges, due to asynchronous pulse detection. The MWIR band has good sharpness over the entire field of view and (with a full HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs. The Spectro XR offers a level of performance typically associated with larger and heavier payloads.
Lethal effects of short-wavelength visible light on insects.
Hori, Masatoshi; Shibuya, Kazuki; Sato, Mitsunari; Saito, Yoshino
2014-12-09
We investigated the lethal effects of visible light on insects by using light-emitting diodes (LEDs). The toxic effects of ultraviolet (UV) light, particularly shortwave (i.e., UVB and UVC) light, on organisms are well known. However, the effects of irradiation with visible light remain unclear, although shorter wavelengths are known to be more lethal. Irradiation with visible light is not thought to cause mortality in complex animals including insects. Here, however, we found that irradiation with short-wavelength visible (blue) light killed eggs, larvae, pupae, and adults of Drosophila melanogaster. Blue light was also lethal to mosquitoes and flour beetles, but the effective wavelength at which mortality occurred differed among the insect species. Our findings suggest that highly toxic wavelengths of visible light are species-specific in insects, and that shorter wavelengths are not always more toxic. For some animals, such as insects, blue light is more harmful than UV light.
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
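As a concrete illustration of the sampling geometry described above, the sketch below unpacks a plenoptic sensor image into the 4D light field L(x, y, u, v). It assumes an idealized camera with a square lenslet grid and an integer number of sensor pixels per lenslet; the function name and dimensions are hypothetical.

    import numpy as np

    def sensor_to_light_field(sensor, pixels_per_lenslet):
        # (x, y): lenslet index (spatial sample); (u, v): pixel under that
        # lenslet (angular sample, i.e., position in the exit pupil image).
        p = pixels_per_lenslet
        h, w = sensor.shape
        ny, nx = h // p, w // p
        lf = sensor[:ny * p, :nx * p].reshape(ny, p, nx, p)
        return lf.transpose(2, 0, 3, 1)   # L[x, y, u, v]

    # Example: a synthetic 480x640 sensor with 8x8 pixels behind each lenslet
    lf = sensor_to_light_field(np.zeros((480, 640)), 8)
    print(lf.shape)   # (80, 60, 8, 8): 80x60 spatial samples, 8x8 angular samples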
Adaptive optics at the Subaru telescope: current capabilities and development
NASA Astrophysics Data System (ADS)
Guyon, Olivier; Hayano, Yutaka; Tamura, Motohide; Kudo, Tomoyuki; Oya, Shin; Minowa, Yosuke; Lai, Olivier; Jovanovic, Nemanja; Takato, Naruhisa; Kasdin, Jeremy; Groff, Tyler; Hayashi, Masahiko; Arimoto, Nobuo; Takami, Hideki; Bradley, Colin; Sugai, Hajime; Perrin, Guy; Tuthill, Peter; Mazin, Ben
2014-08-01
Current AO observations rely heavily on the AO188 instrument, a 188-element system that can operate in natural or laser guide star (LGS) mode, and delivers diffraction-limited images in the near-IR. In its LGS mode, laser light is transported from the solid state laser to the launch telescope by a single mode fiber. AO188 can feed several instruments: the infrared camera and spectrograph (IRCS), a high contrast imaging instrument (HiCIAO) or an optical integral field spectrograph (Kyoto-3DII). Adaptive optics development in support of exoplanet observations has been and continues to be very active. The Subaru Coronagraphic Extreme-AO (SCExAO) system, which combines extreme-AO correction with advanced coronagraphy, is in the commissioning phase, and will greatly increase Subaru Telescope's ability to image and study exoplanets. SCExAO currently feeds light to HiCIAO, and will soon be combined with the CHARIS integral field spectrograph and the fast frame MKIDs exoplanet camera, which have both been specifically designed for high contrast imaging. SCExAO also feeds two visible-light single pupil interferometers: VAMPIRES and FIRST. In parallel to these direct imaging activities, a near-IR high precision spectrograph (IRD) is under development for observing exoplanets with the radial velocity technique. Wide-field adaptive optics techniques are also being pursued. The RAVEN multi-object adaptive optics instrument was installed on Subaru telescope in early 2014. Subaru Telescope is also planning wide field imaging with ground-layer AO with the ULTIMATE-Subaru project.
NASA Astrophysics Data System (ADS)
Gouverneur, B.; Verstockt, S.; Pauwels, E.; Han, J.; de Zeeuw, P. M.; Vermeiren, J.
2012-10-01
Various visible and infrared cameras have been tested for the early detection of wildfires to protect archeological treasures. This analysis was possible thanks to the EU Firesense project (FP7-244088). Although visible cameras are low cost and give good results during daytime for smoke detection, they fall short under bad visibility conditions. In order to improve the fire detection probability and reduce false alarms, several infrared bands were tested, ranging from the NIR to the LWIR. The SWIR and the LWIR bands are helpful for locating the fire through smoke when there is a direct line of sight. Emphasis is also put on physical and electro-optical system modeling for forest fire detection at short and longer ranges. The fusion of three bands (visible, SWIR, LWIR) is discussed at the pixel level for image enhancement and for fire detection, as sketched below.
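As a rough illustration of pixel-level fusion of three co-registered bands, the sketch below min-max normalizes visible, SWIR, and LWIR frames and keeps the per-pixel maximum, a common baseline fusion rule; the normalization and max rule are stand-in assumptions, not the method evaluated in the project.

    import numpy as np

    def fuse_bands(vis, swir, lwir):
        # Per-pixel maximum of min-max normalized bands: bright (hot) pixels
        # in any single band survive into the fused image.
        def norm(band):
            b = band.astype(np.float64)
            return (b - b.min()) / (b.max() - b.min() + 1e-12)
        return np.stack([norm(vis), norm(swir), norm(lwir)]).max(axis=0)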
NASA Astrophysics Data System (ADS)
Severson, Scott A.; Choi, Philip I.; Badham, Katherine E.; Bolger, Dalton; Contreras, Daniel S.; Gilbreth, Blaine N.; Guerrero, Christian; Littleton, Erik; Long, Joseph; McGonigle, Lorcan P.; Morrison, William A.; Ortega, Fernando; Rudy, Alex R.; Wong, Jonathan R.; Spjut, Erik; Baranec, Christoph; Riddle, Reed
2014-07-01
We present the instrument design and first light observations of KAPAO, a natural guide star adaptive optics (AO) system for the Pomona College Table Mountain Observatory (TMO) 1-meter telescope. The KAPAO system has dual science channels with visible and near-infrared cameras, a Shack-Hartmann wavefront sensor, and a commercially available 140-actuator MEMS deformable mirror. The pupil relays are two pairs of custom off-axis parabolas and the control system is based on a version of the Robo-AO control software. The AO system and telescope are remotely operable, and KAPAO is designed to share the Cassegrain focus with the existing TMO polarimeter. We discuss the extensive integration of undergraduate students in the program including the multiple senior theses/capstones and summer assistantships amongst our partner institutions. This material is based upon work supported by the National Science Foundation under Grant No. 0960343.
NASA Technical Reports Server (NTRS)
King, I. R.; Deharveng, J. M.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Disney, M. J.; Jakobsen, P.; Kamperman, T. M.
1992-01-01
A 5161 s exposure was taken with the FOC on the central 44 arcsec of M31, through a filter centered at 1750 A. Much of the light is red leak from visible wavelengths, but nearly half of it is genuine UV. The image shows the same central peak found earlier by Stratoscope, with a somewhat steeper dropoff outside that peak. More than 100 individual objects are seen, some pointlike and some slightly extended. We identify them as post-asymptotic giant branch stars, some of them surrounded by a contribution from their accompanying planetary nebulae. These objects contribute almost a fifth of the total UV light, but fall far short of accounting for all of it. We suggest that the remainder may result from the corresponding evolutionary tracks in a population more metal-rich than solar.
Forgery Detection and Value Identification of Euro Banknotes
Bruna, Arcangelo; Farinella, Giovanni Maria; Guarnera, Giuseppe Claudio; Battiato, Sebastiano
2013-01-01
This paper describes both hardware and software components to detect counterfeits of Euro banknotes. The proposed system is also able to recognize the banknote values. Unlike other state-of-the-art methods, the proposed approach makes use of banknote images acquired with a near infrared camera to perform recognition and authentication. This allows one to build a system that can effectively deal with real forgeries, which are usually not detectable with visible light. The hardware does not use any mechanical parts, so the overall system is low-cost. The proposed solution is robust to ambient light and banknote positioning. Users simply lay the banknote to be analyzed on a flat glass, and the system detects forgery, as well as recognizes the banknote value. The effectiveness of the proposed solution has been properly tested on a dataset composed of genuine and fake Euro banknotes provided by Italy's central bank. PMID:23429514
A neural-based remote eye gaze tracker under natural head motion.
Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso
2008-10-01
A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
NASA Technical Reports Server (NTRS)
Trauger, John T.
2005-01-01
Eclipse is a proposed NASA Discovery mission to perform a sensitive imaging survey of nearby planetary systems, including a survey for jovian-sized planets orbiting Sun-like stars to distances of 15 pc. We outline the science objectives of the Eclipse mission and review recent developments in the key enabling technologies. Eclipse is a space telescope concept for high-contrast visible-wavelength imaging and spectrophotometry. Its design incorporates a telescope with an unobscured aperture of 1.8 meters, a coronagraphic camera for suppression of diffracted light, and precise active wavefront correction for the suppression of scattered background light. For reference, Eclipse is designed to reduce the diffracted and scattered starlight between 0.33 and 1.5 arcseconds from the star by three orders of magnitude compared to any HST instrument. The Eclipse mission provides precursor science exploration and technology experience in support of NASA's Terrestrial Planet Finder (TPF) program.
Development of integrated semiconductor optical sensors for functional brain imaging
NASA Astrophysics Data System (ADS)
Lee, Thomas T.
Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.
Precise Selenodetic Coordinate System on Artificial Light Refers
NASA Astrophysics Data System (ADS)
Bagrov, Alexander; Pichkhadze, Konstantin M.; Sysoev, Valentin
Historically, the coordinate system for the Moon was established on the basis of telescopic observations from the Earth. As the angular resolution of Earth-based telescopic observations is limited by the Earth's atmosphere, and is ordinarily worse than 1 arcsecond, the mean accuracy of selenodetic coordinates is a few angular minutes, which corresponds to position errors of about 900 meters for lunar objects near the center of the visible lunar disk, and at least twice that for objects near the lunar poles. As there is no Global Positioning System nor any astronomical observation instrument on the Moon, we propose to use an autonomous light beacon on the Luna-Glob landing module to fix its position on the lunar surface and to use it as a reference point for a spherical coordinate system for the Moon. The light beacon is designed to be reliably visible to the TV camera of an orbiting probe. As any space probe has its own star-orientation system, it is straightforward to calculate a set of directions to the beacon and to the reference stars in a probe-centered coordinate system during flight over the beacon. A large number of measured angular positions, together with the time of each observation, will be enough to calculate both the orbital parameters of the probe and the selenodetic coordinates of the beacon by the methods of geodesy. All this will allow fixing the angular coordinates of any feature of the lunar surface in one global coordinate system referred to the beacon. A satellite's orbit plane always contains the center of mass of the main body, so if the beacon is placed close to a lunar pole, we shall determine the pole position of the Moon with an accuracy tens of times better than it is known now. If the angular accuracy of self-orientation by stars of the orbital module of the Luna-Glob mission is 6 arcseconds, then from a circular orbit at a height of 200 km the on-board TV camera will allow calculation of the beacon position to the 6" spatial resolution of the camera; that is, the coordinates of the beacon will be determined with an accuracy no worse than 6 meters on the lunar surface. Much higher accuracy can be achieved if the orbital probe uses a precise angular measurer such as an optical interferometer. The limiting accuracy of the proposed method is far beyond any reasonable requirement, since it may reach the sub-millimeter level. Theoretical analysis shows that to achieve 1-meter accuracy of coordinate measurement over the lunar globe, it is enough to disperse some 60 light beacons over its surface. The light beacon designed by the Lavochkin Association is autonomous and will work for at least 10 years, so any other lunar mission could use the established selenodetic coordinates during this period. The same approach may be used to establish a Martian coordinate system.
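A quick back-of-the-envelope check of the quoted accuracy under the small-angle approximation (the 200-km orbit height and 6-arcsecond pointing figure come from the abstract; the code itself is just arithmetic):

    import math

    ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

    def ground_error_m(orbit_height_m, angle_arcsec):
        # Small-angle approximation: cross-track error = height * angle (rad)
        return orbit_height_m * angle_arcsec * ARCSEC_TO_RAD

    print(round(ground_error_m(200e3, 6.0), 2))   # ~5.82 m, i.e., "no worse than 6 m"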
NASA Astrophysics Data System (ADS)
Zelazny, Amy; Benson, Robert; Deegan, John; Walsh, Ken; Schmidt, W. David; Howe, Russell
2013-06-01
We describe the benefits to camera system SWaP-C associated with the use of aspheric molded glasses and optical polymers in the design and manufacture of optical components and elements. Both camera objectives and display eyepieces, typical for night vision man-portable EO/IR systems, are explored. We discuss optical trade-offs, system performance, and cost reductions associated with this approach in both visible and non-visible wavebands, specifically NIR and LWIR. Example optical models are presented, studied, and traded using this approach.
Local adaptive contrast enhancement for color images
NASA Astrophysics Data System (ADS)
Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer
2007-04-01
A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is an important factor here; improper local lighting impairs visibility of details or even entire objects. When a human is observing a scene with different kinds of lighting, such as shadows, he will need to see details in both the dark and light parts of the scene. For grey value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply the contrast enhancement on color images by applying a grey value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal will remain the same. Care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can, for instance, be used by operators monitoring movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should be both easy to recognize and track. This requires optimal local contrast, and is sometimes much helped by color when tracking a person with colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
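A minimal sketch of the luminance-channel approach described above, using OpenCV's CLAHE as a stand-in for the paper's grey-value enhancement algorithm (the clip limit and tile size are illustrative assumptions): the luminance is enhanced locally while the chroma channels, and hence the color coordinates, are left untouched.

    import cv2

    def enhance_color_contrast(bgr):
        # Work in a luminance/chroma space so chromatic content is preserved
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        l_eq = clahe.apply(l)                  # enhance luminance only
        return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

    # Usage: out = enhance_color_contrast(cv2.imread("scene.png"))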
M33: A Close Neighbor Reveals its True Size and Splendor
NASA Technical Reports Server (NTRS)
2009-01-01
One of our closest galactic neighbors shows its awesome beauty in this new image from NASA's Spitzer Space Telescope. M33, also known as the Triangulum Galaxy, is a member of what's known as our Local Group of galaxies. Along with our own Milky Way, this group travels together in the universe, as they are gravitationally bound. In fact, M33 is one of the few galaxies that is moving toward the Milky Way despite the fact that space itself is expanding, causing most galaxies in the universe to grow farther and farther apart. When viewed with Spitzer's infrared eyes, this elegant spiral galaxy sparkles with color and detail. Stars appear as glistening blue gems (many of which are actually foreground stars in our own galaxy), while dust in the spiral disk of the galaxy glows pink and red. But not only is this new image beautiful, it also shows M33 to be surprisingly large, bigger than its visible-light appearance would suggest. With its ability to detect cold, dark dust, Spitzer can see emission from cooler material well beyond the visible range of M33's disk. Exactly how this cold material moved outward from the galaxy is still a mystery, but winds from giant stars or supernovas may be responsible. M33 is located about 2.9 million light-years away in the constellation Triangulum. This composite image was taken by Spitzer's infrared array camera. The color blue indicates infrared light of 3.6 microns, green shows 4.5-micron light, and red 8.0 microns.
Tropical Depression 6 (Florence) in the Atlantic
NASA Technical Reports Server (NTRS)
2006-01-01
[figures removed for brevity, see original site: Microwave Image, Visible Light Image]
These infrared, microwave, and visible images were created with data retrieved by the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite. Infrared Image: Because infrared radiation does not penetrate through clouds, AIRS infrared images show either the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. In cloud-free areas the AIRS instrument will receive the infrared radiation from the surface of the Earth, resulting in the warmest temperatures (orange/red). Microwave Image: AIRS data used to create the microwave images come from the microwave radiation emitted by Earth's atmosphere, which is then received by the instrument. It shows where the heaviest rainfall is taking place (in blue) in the storm. Blue areas outside of the storm, where there are either some clouds or no clouds, indicate where the sea surface shines through. Vis/NIR Image: The AIRS instrument suite contains a sensor that captures light in the visible/near-infrared portion of the electromagnetic spectrum. These 'visible' images are similar to a snapshot taken with your camera. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The Atmospheric Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
Non-optically combined multispectral source for IR, visible, and laser testing
NASA Astrophysics Data System (ADS)
Laveigne, Joe; Rich, Brian; McHugh, Steve; Chua, Peter
2010-04-01
Electro-optical technology continues to advance, incorporating developments in infrared and laser technology into smaller, more tightly-integrated systems that can see and discriminate military targets at ever-increasing distances. New systems incorporate laser illumination and ranging with gated sensors that allow unparalleled vision at a distance. These new capabilities augment existing all-weather performance in the mid-wave infrared (MWIR) and long-wave infrared (LWIR), as well as low light level visible and near infrared (VNIR), giving the user multiple means of looking at targets of interest. There is a need in the test industry to generate imagery in the relevant spectral bands, and to provide temporal stimulus for testing range-gated systems. Santa Barbara Infrared (SBIR) has developed a new means of combining a uniform infrared source with uniform laser and visible sources for electro-optics (EO) testing. The source has been designed to allow laboratory testing of surveillance systems incorporating an infrared imager and a range-gated camera, and for field testing of emerging multi-spectral/fused sensor systems. A description of the source will be presented along with performance data relating to EO testing, including output in pertinent spectral bands, stability and resolution.
Dust grain characterization — Direct measurement of light scattering
NASA Astrophysics Data System (ADS)
Bartoš, P.; Pavlů, J.
2018-01-01
Dust grains play a key role in dusty plasma; since they interact with the plasma, we can use them to study the plasma itself. The grains are illuminated by visible light (e.g., a laser sheet) and the situation is captured with a camera. Despite its apparent simplicity, light scattering on similar-to-wavelength sized grains is a complex phenomenon. Interaction of the electromagnetic wave with the material has to be computed with respect to the Maxwell equations; an analytic solution is nowadays available only for several selected shapes such as the sphere, the coated sphere, or the infinite cylinder. Moreover, the material constants needed for the computations are usually unknown. For verification of computation results and determination of material constants, we designed and developed a device that directly measures light scattering profiles. Single dust grains are trapped in an ultrasonic field (so-called "acoustic levitation") and illuminated by a laser beam. Scattered light is then measured by a photodiode mounted on a rotating platform. Synchronous detection is employed for noise reduction. This setup brings several benefits over conventional methods: (1) it works in the free air, (2) the measured grain is captured for a long time, and (3) the grain can be of arbitrary shape.
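For the sphere, one of the few shapes with an analytic solution as noted above, scattering efficiencies can be computed from Mie theory. The sketch below assumes the third-party miepython package; the refractive index and grain radius are illustrative placeholders, not measured constants from the experiment.

    import numpy as np
    import miepython   # third-party Mie scattering package (assumed available)

    m = 1.5 - 0.01j                  # complex refractive index (placeholder)
    wavelength_um = 0.532            # green laser illumination
    radius_um = 1.0                  # grain size comparable to the wavelength
    x = 2 * np.pi * radius_um / wavelength_um    # Mie size parameter

    qext, qsca, qback, g = miepython.mie(m, x)   # efficiencies and asymmetry factor
    print(f"Qext={qext:.3f}  Qsca={qsca:.3f}  g={g:.3f}")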
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, matched filtering using spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter where ordinary spatially matched filters are used before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
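A minimal sketch of a spectral matched filter of the general kind discussed above: each pixel's spectrum is scored against a known target signature after whitening by the background covariance. The normalization and names are generic textbook forms, not the paper's NV-IPM detectability metric.

    import numpy as np

    def spectral_matched_filter(cube, target):
        # cube: (rows, cols, bands) data; target: (bands,) known signature.
        r, c, b = cube.shape
        pixels = cube.reshape(-1, b).astype(np.float64)
        mu = pixels.mean(axis=0)                                # background mean
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
        w = np.linalg.solve(cov, target - mu)                   # filter weights
        scores = (pixels - mu) @ w / ((target - mu) @ w)        # unit response at target
        return scores.reshape(r, c)                             # detection score map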
Seeing Red: Discourse, Metaphor, and the Implementation of Red Light Cameras in Texas
ERIC Educational Resources Information Center
Hayden, Lance Alan
2009-01-01
This study examines the deployment of automated red light camera systems in the state of Texas from 2003 through late 2007. The deployment of new technologies in general, and surveillance infrastructures in particular, can prove controversial and challenging for the formation of public policy. Red light camera surveillance during this period in…
An approach to instrument qualified visual range
NASA Astrophysics Data System (ADS)
Courtade, Benoît; Bonnet, Jordan; Woodruff, Chris; Larson, Josiah; Giles, Andrew; Sonde, Nikhil; Moore, C. J.; Schimon, David; Harris, David Money; Pond, Duane; Way, Scott
2008-04-01
This paper describes a system that calculates aircraft visual range with instrumentation alone. A unique message is encoded using modified binary phase shift keying and continuously flashed at high speed by ALSF-II runway approach lights. The message is sampled at 400 frames per second by an aircraft-borne high-speed camera. The encoding is designed to avoid visible flicker and minimize frame rate. Instrument qualified visual range is identified as the largest distance at which the aircraft system can acquire and verify the correct, runway-specific signal. Scaled testing indicates that if the system were implemented on one full ALSF-II fixture, instrument qualified range could be established at 5 miles in clear weather conditions.
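The sketch below illustrates the encode/decode idea on synthetic data: a message keyed onto a square-wave light carrier by phase flips, sampled at the camera frame rate and decoded by correlation. The 100 Hz carrier, frames-per-bit count, and message are illustrative assumptions, not the actual ALSF-II modulation scheme.

    import numpy as np

    FPS = 400             # camera frame rate, per the paper
    FRAMES_PER_BIT = 8    # assumed: two carrier cycles per bit
    # 100-Hz square-wave carrier sampled at 400 fps: two frames high, two low
    CARRIER = np.where((np.arange(FRAMES_PER_BIT) // 2) % 2 == 0, 1.0, -1.0)

    def encode(bits):
        # Each bit flips the carrier phase; map {-1,+1} to light off/on levels
        wave = np.concatenate([(1 if b else -1) * CARRIER for b in bits])
        return 0.5 * (wave + 1)

    def decode(samples):
        # Correlate each bit-length chunk of camera samples with the carrier
        chunks = (2 * np.asarray(samples) - 1).reshape(-1, FRAMES_PER_BIT)
        return (chunks @ CARRIER > 0).astype(int)

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    assert list(decode(encode(msg))) == msg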
Pluto-Charon: Infrared Reflectance from 3.6 to 8.0 Micrometers
NASA Technical Reports Server (NTRS)
Cruikshank, Dale P.; Emery, Joshua P.; Stansberry, John A.; VanCleve, Jeffrey E.
2004-01-01
We have measured the spectral reflectance of the Pluto-Charon pair at 3.6, 4.5, 5.8, and 8.0 micrometers with the Infrared Array Camera (IRAC) (G. G. Fazio et al. Ap.J.Supp. 154, 10-17, 2004) on the Spitzer Space Telescope (SST), at eight different longitudes that cover a full rotation of the planet. SST does not have sufficient resolution to separate the light from the planet and the satellite. The image of the Pluto-Charon pair is clearly visible at each of the four wavelengths. We will discuss the spectral reflectance in terms of models that include the known components of Pluto's and Charon's surfaces, and evidence for diurnal variations.
Jupiter From 2.8 Million Miles
2016-08-25
This dual view of Jupiter was taken on August 23, when NASA's Juno spacecraft was 2.8 million miles (4.4 million kilometers) from the gas giant planet on the inbound leg of its initial 53.5-day capture orbit. The image on the left is a color composite taken with JunoCam's visible red, green, and blue filters. The image on the right was also taken by JunoCam, but uses the camera's infrared filter, which is sensitive to the abundance of methane in the atmosphere. Bright features like the planet's Great Red Spot are higher in the atmosphere, and so have less of their light absorbed by the methane. http://photojournal.jpl.nasa.gov/catalog/PIA20884
Clear New View of a Classic Spiral
NASA Astrophysics Data System (ADS)
2010-05-01
ESO is releasing a beautiful image of the nearby galaxy Messier 83 taken by the HAWK-I instrument on ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile. The picture shows the galaxy in infrared light and demonstrates the impressive power of the camera to create one of the sharpest and most detailed pictures of Messier 83 ever taken from the ground. The galaxy Messier 83 (eso0825) is located about 15 million light-years away in the constellation of Hydra (the Sea Serpent). It spans over 40 000 light-years, only 40 percent the size of the Milky Way, but in many ways is quite similar to our home galaxy, both in its spiral shape and the presence of a bar of stars across its centre. Messier 83 is famous among astronomers for its many supernovae: vast explosions that end the lives of some stars. Over the last century, six supernovae have been observed in Messier 83 - a record number that is matched by only one other galaxy. Even without supernovae, Messier 83 is one of the brightest nearby galaxies, visible using just binoculars. Messier 83 has been observed in the infrared part of the spectrum using HAWK-I [1], a powerful camera on ESO's Very Large Telescope (VLT). When viewed in infrared light most of the obscuring dust that hides much of Messier 83 becomes transparent. The brightly lit gas around hot young stars in the spiral arms is also less prominent in infrared pictures. As a result much more of the structure of the galaxy and the vast hordes of its constituent stars can be seen. This clear view is important for astronomers looking for clusters of young stars, especially those hidden in dusty regions of the galaxy. Studying such star clusters was one of the main scientific goals of these observations [2]. When compared to earlier images, the acute vision of HAWK-I reveals far more stars within the galaxy. The combination of the huge mirror of the VLT, the large field of view and great sensitivity of the camera, and the superb observing conditions at ESO's Paranal Observatory makes HAWK-I one of the most powerful near-infrared imagers in the world. Astronomers are eagerly queuing up for the chance to use the camera, which began operation in 2007 (eso0736), and to get some of the best ground-based infrared images ever of the night sky. Notes [1] HAWK-I stands for High-Acuity Wide-field K-band Imager. More technical details about the camera can be found in an earlier press release (eso0736). [2] The data used to prepare this image were acquired by a team led by Mark Gieles (University of Cambridge) and Yuri Beletsky (ESO). Mischa Schirmer (University of Bonn) performed the challenging data processing. More information ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. 
At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory and VISTA, the world's largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
Stephey, L; Wurden, G A; Schmitz, O; Frerichs, H; Effenberg, F; Biedermann, C; Harris, J; König, R; Kornejew, P; Krychowiak, M; Unterberg, E A
2016-11-01
A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Both systems together provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements at several points on the W7-X vacuum vessel to yield wall recycling fluxes. The resulting photon flux from both the visible camera and the filterscopes can then be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, both of which will ultimately be used to infer the complete particle balance and the particle confinement time τ_P.
Dark Globule in IC 1396 (IRAC)
NASA Technical Reports Server (NTRS)
2003-01-01
[figure removed for brevity, see original site] NASA's Spitzer Space Telescope image of a glowing stellar nursery provides a spectacular contrast to the opaque cloud seen in visible light (inset). The Elephant's Trunk Nebula is an elongated dark globule within the emission nebula IC 1396 in the constellation of Cepheus. Located at a distance of 2,450 light-years, the globule is a condensation of dense gas that is barely surviving the strong ionizing radiation from a nearby massive star. The globule is being compressed by the surrounding ionized gas. The dark globule is seen in silhouette at visible-light wavelengths, backlit by the illumination of a bright star located to the left of the field of view. The Spitzer Space Telescope pierces through the obscuration to reveal the birth of new protostars, or embryonic stars, and previously unseen young stars. The infrared image was obtained by Spitzer's infrared array camera. The image is a four-color composite of invisible light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8.0 microns (red). The filamentary appearance of the globule results from the sculpting effects of competing physical processes. The winds from a massive star, located to the left of the image, produce a dense circular rim comprising the 'head' of the globule and a swept-back tail of gas. A pair of young stars (LkHa 349 and LkHa 349c) that formed from the dense gas has cleared a spherical cavity within the globule head. While one of these stars is significantly fainter than the other in the visible-light image, they are of comparable brightness in the infrared Spitzer image. This implies the presence of a thick and dusty disc around LkHa 349c. Such circumstellar discs are the precursors of planetary systems. They are much thicker in the early stages of stellar formation when the placental planet-forming material (gas and dust) is still present.
The application of high-speed photography in z-pinch high-temperature plasma diagnostics
NASA Astrophysics Data System (ADS)
Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei
2007-01-01
This invited paper discusses the applications of high-speed photography to z-pinch high-temperature plasma diagnostics developed in recent years at the Northwest Institute of Nuclear Technology. The development and applications of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera and an ultraviolet-visible spectrometer are introduced.
Condenser for illuminating a ringfield camera with synchrotron emission light
Sweatt, W.C.
1996-04-30
The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, which images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors. 9 figs.
Reductions in injury crashes associated with red light camera enforcement in oxnard, california.
Retting, Richard A; Kyrychenko, Sergey Y
2002-11-01
This study estimated the impact of red light camera enforcement on motor vehicle crashes in one of the first US communities to employ such cameras: Oxnard, California. Crash data were analyzed for Oxnard and for 3 comparison cities. Changes in crash frequencies were compared for Oxnard and the control cities, and for signalized and nonsignalized intersections, by means of a generalized linear regression model. Overall, crashes at signalized intersections throughout Oxnard were reduced by 7% and injury crashes were reduced by 29%. Right-angle crashes, those most associated with red light violations, were reduced by 32%; right-angle crashes involving injuries were reduced by 68%. Because red light cameras can be a permanent component of the transportation infrastructure, crash reductions attributed to camera enforcement should be sustainable.
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
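The sketch below shows the kind of diagnostic referred to above: the auto-covariance of an attitude-error time series, in which a twice-per-rev error appears as a persistent oscillation at half the orbital period. The orbital period, amplitudes, and noise level are synthetic placeholders, not GRACE Level-1B values.

    import numpy as np

    def autocovariance(x, max_lag):
        x = np.asarray(x, dtype=np.float64) - np.mean(x)
        n = len(x)
        return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])

    # Synthetic series: twice-per-rev sinusoid (arcsec) plus white noise
    t = np.arange(20000.0)                        # sample index (e.g., 1 Hz)
    rev = 5580.0                                  # ~93-minute orbit period, s
    series = 3.0 * np.sin(2 * np.pi * 2 * t / rev) + np.random.normal(0, 1, t.size)
    acov = autocovariance(series, 12000)
    # acov oscillates with period rev/2, exposing the twice-per-rev error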
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and of a flame is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To this end, the traffic sign recognition is developed on an originally proposed dual-focal active camera system. In this system, a telephoto camera is used as an assistant to a wide angle camera. The telephoto camera can capture a high-accuracy image of an object of interest in the view field of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the accuracy of the traffic sign is too low in the wide angle camera image. In the proposed system, the traffic sign detection and classification are processed separately for the different images from the wide angle camera and the telephoto camera. Besides, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a type of color transformation that is invariant to lighting changes. This color transformation is conducted to highlight the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high-accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution in different lighting conditions.
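A hedged sketch of a lighting-invariant color transformation of the general kind described above: normalized chromaticity coordinates discard overall intensity, so sign colors remain stable between shade and direct sun. This is a standard transform used for illustration; the paper's exact transformation and the threshold values here are assumptions.

    import numpy as np

    def normalized_chromaticity(rgb):
        # rgb: (h, w, 3) image. Dividing by per-pixel intensity removes the
        # overall illumination level; b = 1 - r - g is redundant and dropped.
        rgb = rgb.astype(np.float64)
        s = rgb.sum(axis=2, keepdims=True) + 1e-12
        return (rgb / s)[..., :2]

    def red_sign_mask(rgb, r_min=0.45, g_max=0.30):
        # Threshold chromaticity to highlight red traffic-sign pixels
        # (threshold values are illustrative assumptions).
        rg = normalized_chromaticity(rgb)
        return (rg[..., 0] > r_min) & (rg[..., 1] < g_max)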
The research of adaptive-exposure on spot-detecting camera in ATP system
NASA Astrophysics Data System (ADS)
Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu
2013-08-01
A high-precision acquisition, tracking, and pointing (ATP) system is one of the key technologies of laser communication. The spot-detecting camera is used to detect the direction of the beacon in a laser communication link, so that it can provide the position information of the communication terminal to the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system needs high precision in target detection: the positioning accuracy of the camera should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the results of the centroid calculation are precise. However, the intensity of the beacon changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector will be insufficient when the camera underexposes the beacon at low light intensity; conversely, the output signal will saturate when the camera overexposes the beacon at high light intensity. The accuracy of the centroid calculation degrades when the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced markedly. To maintain accuracy, space-based cameras should regulate the exposure time in real time according to the light intensity. The algorithm of an adaptive-exposure technique for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera for a space-based laser communication system is described, which uses the adaptive-exposure algorithm to adjust exposure time. Results from an imaging experiment system verify the design and prove that it can prevent the loss of positioning accuracy caused by changes in light intensity, so the camera can maintain stable, high positioning accuracy during communication.
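The sketch below illustrates the two ingredients described above: an intensity-weighted centroid for spot position, and a proportional exposure update that steers the peak signal away from the noise floor and from saturation. All constants (full scale, target fraction, exposure clamps) are illustrative assumptions, not the paper's values.

    import numpy as np

    def spot_centroid(img):
        # Intensity-weighted centroid of a background-subtracted spot image
        img = np.clip(img.astype(np.float64) - np.median(img), 0, None)
        total = img.sum()
        ys, xs = np.indices(img.shape)
        return (xs * img).sum() / total, (ys * img).sum() / total

    def update_exposure(exposure_s, peak, full_scale=4095, target_frac=0.6):
        # Scale exposure so the brightest pixel sits near target_frac of the
        # detector range, away from the noise floor and from saturation.
        gain = (target_frac * full_scale) / max(peak, 1)
        return float(np.clip(exposure_s * gain, 1e-5, 0.1))   # sensor limits (assumed)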
2005-11-28
A fine spray of small, icy particles emanating from the warm, geologically unique province surrounding the south pole of Saturn's moon Enceladus was observed in a Cassini narrow-angle camera image of the crescent moon taken on Jan. 16, 2005. Taken at a high phase angle of 148 degrees -- a viewing geometry in which small particles become much easier to see -- the plume of material becomes more apparent in images processed to enhance faint signals. Imaging scientists have measured the light scattered by the plume's particles to determine their abundance and fall-off with height. Though the measurements of particle abundance are more certain within 100 kilometers (60 miles) of the surface, the values measured there are roughly consistent with the abundance of water ice particles measured by other Cassini instruments (reported in September, 2005) at altitudes as high as 400 kilometers (250 miles) above the surface. Imaging scientists, as reported in the journal Science on March 10, 2006, believe that the jets are geysers erupting from pressurized subsurface reservoirs of liquid water above 273 K (0 degrees Celsius). The image at the left was taken in visible green light. A dark mask was applied to the moon's bright limb in order to make the plume feature easier to see. The image at the right has been color-coded to make faint signals in the plume more apparent. Images of other satellites (such as Tethys and Mimas) taken in the last 10 months from similar lighting and viewing geometries, and with identical camera parameters as this one, were closely examined to demonstrate that the plume towering above Enceladus' south pole is real and not a camera artifact. The images were acquired at a distance of about 209,400 kilometers (130,100 miles) from Enceladus. Image scale is about 1 kilometer (0.6 mile) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA07760
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P. W.; Dehn, J.
2016-12-01
An effective early warning system to detect volcanic activity is an invaluable tool, but often very expensive. Detecting and monitoring precursory events, thermal signatures, and ongoing eruptions in near real-time is essential, but conventional methods are often logistically challenging, expensive, and difficult to maintain. Our investigation explores the use of 'off the shelf' webcams and low-light cameras, operating in the visible to near-infrared portions of the electromagnetic spectrum, to detect and monitor volcanic incandescent activity. Large databases of webcam imagery already exist at institutions around the world, but are often extremely underutilised, and we aim to change this. We focus on the early detection of thermal signatures at volcanoes, using automated scripts to analyse individual images for changes in pixel brightness, allowing us to detect relative changes in thermally incandescent activity. Primarily, our work focuses on freely available streams of webcam images from around the world, which we can download and analyse in near real-time. When changes in activity are detected, an alert is sent to the users informing them of the changes and the need for further investigation. Although relatively rudimentary, this technique provides constant monitoring for volcanoes in remote locations and developing nations, where it is not financially viable to deploy expensive equipment. We also purchased several of our own cameras, which were extensively tested in controlled laboratory settings with a black body source to determine their individual spectral response. Our aim is to deploy these cameras at active volcanoes knowing exactly how they will respond to varying levels of incandescence. They are ideal for field deployments as they are cheap ($0-1,000), consume little power, are easily replaced, and can provide telemetered near real-time data. Data from Shiveluch volcano, Russia, and our spectral response lab experiments are presented here.
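A minimal sketch of the automated check described above: each new webcam frame is compared against a slowly updated reference, and a jump in the fraction of bright pixels raises an alert. The brightness threshold, alert factor, and update weights are placeholder assumptions, not the authors' tuned values.

    import numpy as np
    from PIL import Image

    BRIGHT_LEVEL = 200    # 8-bit value treated as "incandescent" (assumed)
    ALERT_FACTOR = 5.0    # flag when the bright fraction jumps by this factor

    def bright_fraction(path):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        return float((gray > BRIGHT_LEVEL).mean())

    def check_frame(path, baseline):
        # Returns (alert, updated baseline) for one newly downloaded frame
        frac = bright_fraction(path)
        alert = baseline > 0 and frac > ALERT_FACTOR * baseline
        return alert, 0.9 * baseline + 0.1 * frac   # slow-moving reference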
NASA Technical Reports Server (NTRS)
1978-01-01
In public and private archives throughout the world there are many historically important documents that have become illegible with the passage of time. They have faded, been erased, acquired mold, water and dirt stains, suffered blotting, or lost readability in other ways. While ultraviolet and infrared photography are widely used to enhance deteriorated legibility, these methods are more limited in their effectiveness than the space-derived image enhancement technique. The aim of the JPL effort with Caltech and others is to better define the requirements for a system to restore illegible information for study at a low page-cost with simple operating procedures. The investigators' principal tools are a vidicon camera and an image processing computer program, the same equipment used to produce sharp space pictures. The camera is the same type as those on NASA's Mariner spacecraft, which returned to Earth thousands of images of Mars, Venus and Mercury. Space imagery works something like television. The vidicon camera does not take a photograph in the ordinary sense; rather it "scans" a scene, recording different light and shade values which are reproduced as a pattern of dots, hundreds of dots to a line, hundreds of lines in the total picture. The dots are transmitted to an Earth receiver, where they are assembled line by line to form a picture like that on the home TV screen.
Label-free biodetection using a smartphone.
Gallegos, Dustin; Long, Kenneth D; Yu, Hojeong; Clark, Peter P; Lin, Yixiao; George, Sherine; Nath, Pabitra; Cunningham, Brian T
2013-06-07
Utilizing its integrated camera as a spectrometer, we demonstrate the use of a smartphone as the detection instrument for a label-free photonic crystal biosensor. A custom-designed cradle holds the smartphone in fixed alignment with optical components, allowing for accurate and repeatable measurements of shifts in the resonant wavelength of the sensor. Externally provided broadband light incident upon an entrance pinhole is subsequently collimated and linearly polarized before passing through the biosensor, which resonantly reflects only a narrow band of wavelengths. A diffraction grating spreads the remaining wavelengths over the camera's pixels to display a high resolution transmission spectrum. The photonic crystal biosensor is fabricated on a plastic substrate and attached to a standard glass microscope slide that can easily be removed and replaced within the optical path. A custom software app was developed to convert the camera images into the photonic crystal transmission spectrum in the visible wavelength range, including curve-fitting analysis that computes the photonic crystal resonant wavelength with 0.009 nm accuracy. We demonstrate the functionality of the system through detection of an immobilized protein monolayer, and selective detection of concentration-dependent antibody binding to a functionalized photonic crystal. We envision the capability for an inexpensive, handheld biosensor instrument with web connectivity to enable point-of-care sensing in environments that have not been practical previously.
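As an illustration of the curve-fitting step described above, the sketch below refines the location of a transmission minimum with a local parabola fit; the quadratic model, window size, and synthetic dip are assumptions standing in for the app's actual fitting routine.

    import numpy as np

    def resonant_wavelength(wl_nm, transmission, window=7):
        # Coarse minimum, then a local quadratic fit for sub-sample precision
        i = int(np.argmin(transmission))
        lo, hi = max(0, i - window), min(len(transmission), i + window + 1)
        a, b, c = np.polyfit(wl_nm[lo:hi], transmission[lo:hi], 2)
        return -b / (2 * a)      # vertex of the fitted parabola

    # Example with a synthetic resonance dip centered at 550.25 nm
    wl = np.linspace(540, 560, 400)
    tr = 1 - 0.8 * np.exp(-((wl - 550.25) / 1.5) ** 2)
    print(round(resonant_wavelength(wl, tr), 2))   # ~550.25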
MagAO: Status and on-sky performance of the Magellan adaptive optics system
NASA Astrophysics Data System (ADS)
Morzinski, Katie M.; Close, Laird M.; Males, Jared R.; Kopon, Derek; Hinz, Phil M.; Esposito, Simone; Riccardi, Armando; Puglisi, Alfio; Pinna, Enrico; Briguglio, Runa; Xompero, Marco; Quirós-Pacheco, Fernando; Bailey, Vanessa; Follette, Katherine B.; Rodigas, T. J.; Wu, Ya-Lin; Arcidiacono, Carmelo; Argomedo, Javier; Busoni, Lorenzo; Hare, Tyson; Uomoto, Alan; Weinberger, Alycia
2014-07-01
MagAO is the new adaptive optics system with visible-light and infrared science cameras, located on the 6.5-m Magellan "Clay" telescope at Las Campanas Observatory, Chile. The instrument locks on natural guide stars (NGS) from 0th to 16th R-band magnitude, measures turbulence with a modulating pyramid wavefront sensor binnable from 28×28 to 7×7 subapertures, and uses a 585-actuator adaptive secondary mirror (ASM) to provide flat wavefronts to the two science cameras. MagAO is a mutated clone of the similar AO systems at the Large Binocular Telescope (LBT) at Mt. Graham, Arizona. The high-level AO loop controls up to 378 modes and operates at frame rates up to 1000 Hz. The instrument has two science cameras: VisAO operating from 0.5-1 μm and Clio2 operating from 1-5 μm. MagAO was installed in 2012 and successfully completed two commissioning runs in 2012-2013. In April 2014 we had our first science run that was open to the general Magellan community. Observers from Arizona, Carnegie, Australia, Harvard, MIT, Michigan, and Chile took observations in collaboration with the MagAO instrument team. Here we describe the MagAO instrument, describe our on-sky performance, and report our status as of summer 2014.
Noisy Ocular Recognition Based on Three Convolutional Neural Networks.
Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung
2017-12-17
In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the Noisy Iris Challenge Evaluation-Part II (NICE.II) training dataset (selected from the University of Beira Iris (UBIRIS).v2 database), the Mobile Iris Challenge Evaluation (MICHE) database, and the Institute of Automation of the Chinese Academy of Sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.
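A minimal sketch of score-level fusion across three CNNs, one per region (iris plus two periocular crops); the backbone architecture, input handling, and averaging rule below are placeholders rather than the networks used in the paper:

```python
# Minimal three-branch CNN score fusion (placeholder architecture; the
# paper's exact networks and fusion rule are not reproduced here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def fused_scores(iris, peri_left, peri_right, nets):
    """Average softmax scores from the iris and two periocular branches."""
    probs = [F.softmax(net(x), dim=1)
             for net, x in zip(nets, (iris, peri_left, peri_right))]
    return torch.stack(probs).mean(dim=0)

nets = [SmallCNN(n_classes=100) for _ in range(3)]  # one CNN per region
```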
Development of a 3-D visible limiter imaging system for the HSX stellarator
NASA Astrophysics Data System (ADS)
Buelo, C.; Stephey, L.; Anderson, F. S. B.; Eisert, D.; Anderson, D. T.
2017-12-01
A visible camera diagnostic has been developed to study the Helically Symmetric eXperiment (HSX) limiter plasma interaction. A straight line view from the camera location to the limiter was not possible due to the complex 3D stellarator geometry of HSX, so it was necessary to insert a mirror/lens system into the plasma edge. A custom support structure for this optical system tailored to the HSX geometry was designed and installed. This system holds the optics tube assembly at the required angle for the desired view to both minimize system stress and facilitate robust and repeatable camera positioning. The camera system has been absolutely calibrated and, using Hα and C-III filters, can provide hydrogen and carbon photon fluxes, which can be converted into particle fluxes through an S/XB coefficient. The resulting measurements have been used to obtain the characteristic penetration length of hydrogen and C-III species. The hydrogen λiz value shows reasonable agreement with the value predicted by a 1D penetration length calculation.
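For context, the photon-to-particle flux conversion mentioned above is typically applied in the schematic form below, where S/XB is the number of ionization events per emitted photon; the exact geometry factors depend on the viewing setup and are not given in the abstract:

```latex
% Schematic S/XB conversion: Gamma is the particle influx, I the measured
% line-of-sight brightness (photons m^-2 s^-1 sr^-1); geometry factors
% depend on the particular viewing arrangement.
\Gamma_{\mathrm{particle}} = 4\pi \left(\frac{S}{XB}\right) I_{\mathrm{photon}}
```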
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, S.Y.
1998-12-22
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
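Once the cameras and projector are calibrated, the line-intersection step in both records reduces to standard two-view triangulation; the sketch below is the textbook direct linear transform, not the patented matching algorithm:

```python
# Generic two-view triangulation via the direct linear transform (DLT).
# This is a textbook sketch, not the patented matching algorithm.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from its pixel coordinates x1, x2 in two
    calibrated cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to Euclidean coordinates
```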
Spatial Frequency Domain Imaging: Applications in Preclinical Models of Alzheimer's Disease
NASA Astrophysics Data System (ADS)
Lin, Alexander Justin
A clinical challenge in Alzheimer's disease (AD) is diagnosing and treating patients earlier, before symptoms of cognitive dysfunction occur. A good screening test would be sensitive to the AD brain pathology, safe, and cost-effective. Diffuse optical imaging, which measures how non-ionizing light is absorbed and scattered in tissue, may fulfill these three parameters. We imaged the brains of transgenic AD mouse models in vivo with a quantitative, camera-based, diffuse optical imaging technology called spatial frequency domain imaging (SFDI) to characterize near-infrared (650-970 nm) optical biomarkers of AD. Compared to age-matched control mice, we found a decrease in light absorption --- due to lower oxygenated and total hemoglobin concentrations in the brain --- correlating to decreased blood vessel volume and density in histology. Light scattering also increased in AD mice, correlating to brain structural changes caused by neuron loss and activation of inflammatory cells. Furthermore, inhaled gas challenges revealed brain vascular function was diminished. To investigate how AD affects the small changes in blood perfusion caused by increased brain activity, we built a new SFDI system from a commercial light-emitting diode microprojector and off-the-shelf optical components and cameras to measure optical properties in the visible range (460-632 nm). Our measurements showed a reduced amplitude and duration of blood vessel dilation to increased brain activity in the AD mice. Altogether, this work increased our understanding of AD pathogenesis, explored optical biomarkers of AD, and improved technology access to other research labs. These results and technologies can further be used to facilitate longitudinal drug therapy trials in mice and provide a roadmap to diffuse optical spectroscopy studies in humans.
Soleymani, Teo; Cohen, David E; Folan, Lorcan M; Okereke, Uchenna R; Elbuluk, Nada; Soter, Nicholas A
2017-11-01
Background: While most of the attention regarding skin pigmentation has focused on the effects of ultraviolet radiation, the cutaneous effects of visible light (400 to 700 nm) are rarely reported. The purpose of this study was to investigate the cutaneous pigmentary response to pure visible light irradiation, examine the difference in response to different sources of visible light irradiation, and determine a minimal pigmentary dose of visible light irradiation in melanocompetent subjects with Fitzpatrick skin types III-VI. The study was designed as a single-arm, non-blinded, split-side dual intervention study in which subjects underwent visible light irradiation using LED and halogen incandescent light sources delivered at a fluence rate of 0.14 W/cm² with incremental dose progression from 20 J/cm² to 320 J/cm². Pigmentation was assessed by clinical examination, cross-polarized digital photography, and analytic colorimetry. Immediate, dose-responsive pigment darkening was seen with LED light exposure in 80% of subjects, beginning at 60 J/cm². No pigmentary changes were seen with halogen incandescent light exposure at any dose in any subject. This study is the first to report a distinct difference in cutaneous pigmentary response to different sources of visible light, and the first to demonstrate cutaneous pigment darkening from visible LED light exposure. Our findings raise the concern that our increasing daily exposure to artificial light may have clandestine effects on skin biology.
J Drugs Dermatol. 2017;16(11):1105-1110.
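For reference, the exposure times implied by the doses reported above follow directly from the stated fluence rate, assuming continuous delivery (an assumption; the protocol timing is not stated in the abstract):

```latex
% Exposure time t implied by dose E at fluence rate \Phi, assuming
% continuous delivery (the study's actual timing protocol is not stated).
t = \frac{E}{\Phi}, \qquad
t_{\max} = \frac{320~\mathrm{J/cm^2}}{0.14~\mathrm{W/cm^2}}
         \approx 2.3\times10^{3}~\mathrm{s} \approx 38~\mathrm{min}
```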
Mastcam Special Filters Help Locate Variations Ahead
2017-11-01
This pair of images from the Mast Camera (Mastcam) on NASA's Curiosity rover illustrates how special filters are used to scout terrain ahead for variations in the local bedrock. The upper panorama is in the Mastcam's usual full color, for comparison. The lower panorama of the same scene, in false color, combines three exposures taken through different "science filters," each selecting for a narrow band of wavelengths. Filters and image processing steps were selected to make stronger signatures of hematite, an iron-oxide mineral, evident as purple. Hematite is of interest in this area of Mars -- partway up "Vera Rubin Ridge" on lower Mount Sharp -- as holding clues about ancient environmental conditions under which that mineral originated. In this pair of panoramas, the strongest indications of hematite appear related to areas where the bedrock is broken up. With information from this Mastcam reconnaissance, the rover team selected destinations in the scene for close-up investigations to gain understanding about the apparent patchiness in hematite spectral features. The Mastcam's left-eye camera took the component images of both panoramas on Sept. 12, 2017, during the 1,814th Martian day, or sol, of Curiosity's work on Mars. The view spans from south-southeast on the left to south-southwest on the right. The foreground across the bottom of the scene is about 50 feet (about 15 meters) wide. Figure 1 includes scale bars of 1 meter (3.3 feet) in the middle distance and 5 meters (16 feet) at upper right. Curiosity's Mastcam combines two cameras: the right eye with a telephoto lens and the left eye with a wider-angle lens. Each camera has a filter wheel that can be rotated in front of the lens for a choice of eight different filters. One filter for each camera is clear to all visible light, for regular full-color photos, and another is specifically for viewing the Sun. Some of the other filters were selected to admit wavelengths of light that are useful for identifying iron minerals. Each of the filters used for the lower panorama shown here admits light from a narrow band of wavelengths, extending to only about 5 to 10 nanometers longer or shorter than the filter's central wavelength. The three observations combined into this product used filters centered at three near-infrared wavelengths: 751 nanometers, 867 nanometers and 1,012 nanometers. Hematite distinctively absorbs some frequencies of infrared light more than others. Usual color photographs from digital cameras -- such as the upper panorama here from Mastcam -- combine information from red, green and blue filtering. The filters are in a microscopic grid in a "Bayer" filter array situated directly over the detector behind the lens, with wider bands of wavelengths. The colors of the upper panorama, as with most featured images from Mastcam, have been tuned with a color adjustment similar to white balancing for approximating how the rocks and sand would appear under daytime lighting conditions on Earth. https://photojournal.jpl.nasa.gov/catalog/PIA22065
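The false-color product described above amounts to assigning each narrow-band exposure to a display channel; the sketch below shows the idea with an illustrative channel mapping and contrast stretch, not the Mastcam team's actual processing pipeline:

```python
# Schematic false-color composite from three narrow-band exposures
# (illustrative channel mapping; not the Mastcam processing pipeline).
import numpy as np

def false_color(im_751, im_867, im_1012):
    """Stretch each band to [0, 1] and stack into an RGB display image."""
    def stretch(im):
        im = im.astype(float)
        lo, hi = np.percentile(im, (1, 99))   # clip outliers before scaling
        return np.clip((im - lo) / (hi - lo), 0.0, 1.0)
    # Shorter wavelength -> blue, longer -> red, by display convention only.
    return np.dstack([stretch(im_1012), stretch(im_867), stretch(im_751)])
```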
2015-08-10
Bursts of pink and red, dark lanes of mottled cosmic dust, and a bright scattering of stars — this NASA/ESA Hubble Space Telescope image shows part of a messy barred spiral galaxy known as NGC 428. It lies approximately 48 million light-years away from Earth in the constellation of Cetus (The Sea Monster). Although a spiral shape is still just about visible in this close-up shot, overall NGC 428’s spiral structure appears to be quite distorted and warped, thought to be a result of a collision between two galaxies. There also appears to be a substantial amount of star formation occurring within NGC 428 — another telltale sign of a merger. When galaxies collide their clouds of gas can merge, creating intense shocks and hot pockets of gas and often triggering new waves of star formation. NGC 428 was discovered by William Herschel in December 1786. More recently a type Ia supernova designated SN2013ct was discovered within the galaxy by Stuart Parker of the BOSS (Backyard Observatory Supernova Search) project in Australia and New Zealand, although it is unfortunately not visible in this image. This image was captured by Hubble’s Advanced Camera for Surveys (ACS) and Wide Field and Planetary Camera 2 (WFPC2). A version of this image was entered into the Hubble’s Hidden Treasures Image Processing competition by contestants Nick Rose and the Flickr user penninecloud. Links: Nick Rose’s image on Flickr Penninecloud’s image on Flickr
2004-03-19
Bands and spots in Saturn's atmosphere, including a dark band south of the equator with a scalloped border, are visible in this image from the Cassini-Huygens spacecraft. The narrow angle camera took the image in blue light on Feb. 29, 2004. The distance to Saturn was 59.9 million kilometers (37.2 million miles). The image scale is 359 kilometers (223 miles) per pixel. Three of Saturn's moons are seen in the image: Enceladus (499 kilometers, or 310 miles across) at left; Mimas (398 kilometers, or 247 miles across) left of Saturn's south pole; and Rhea (1,528 kilometers, or 949 miles across) at lower right. The imaging team enhanced the brightness of the moons to aid visibility. The BL1 broadband spectral filter (centered at 451 nanometers) allows Cassini to "see" light in a part of the spectrum visible as the color blue to human eyes. Scientists can combine images made with this filter with those taken with red and green filters to create full-color composites. Scientists can also assess cloud heights by combining images from the blue filter with images taken in other spectral regions. For example, the bright clouds that form the equatorial zone are the highest in altitude and have pressures at their tops of about one quarter of Earth's atmospheric pressure at sea level. The cloud tops at middle latitudes are lower in altitude and have higher pressures of about half that found at sea level. Analysis of Saturn images like this one will be extremely useful to researchers assessing cloud altitudes during the Cassini-Huygens mission. http://photojournal.jpl.nasa.gov/catalog/PIA05383
VISTA Reveals the Secret of the Unicorn
NASA Astrophysics Data System (ADS)
2010-10-01
A new infrared image from ESO's VISTA survey telescope reveals an extraordinary landscape of glowing tendrils of gas, dark clouds and young stars within the constellation of Monoceros (the Unicorn). This star-forming region, known as Monoceros R2, is embedded within a huge dark cloud. The region is almost completely obscured by interstellar dust when viewed in visible light, but is spectacular in the infrared. An active stellar nursery lies hidden inside a massive dark cloud rich in molecules and dust in the constellation of Monoceros. Although it appears close in the sky to the more familiar Orion Nebula it is actually almost twice as far from Earth, at a distance of about 2700 light-years. In visible light a grouping of massive hot stars creates a beautiful collection of reflection nebulae where the bluish starlight is scattered from parts of the dark, foggy outer layers of the molecular cloud. However, most of the new-born massive stars remain hidden as the thick interstellar dust strongly absorbs their ultraviolet and visible light. In this gorgeous infrared image taken from ESO's Paranal Observatory in northern Chile, the Visible and Infrared Survey Telescope for Astronomy (VISTA [1], eso0949) penetrates the dark curtain of cosmic dust and reveals in astonishing detail the folds, loops and filaments sculpted from the dusty interstellar matter by intense particle winds and the radiation emitted by hot young stars. "When I first saw this image I just said 'Wow!' I was amazed to see all the dust streamers so clearly around the Monoceros R2 cluster, as well as the jets from highly embedded young stellar objects. There is such a great wealth of exciting detail revealed in these VISTA images," says Jim Emerson, of Queen Mary, University of London and leader of the VISTA consortium. With its huge field of view, large mirror and sensitive camera, VISTA is ideal for obtaining deep, high quality infrared images of large areas of the sky, such as the Monoceros R2 region. The width of VISTA's field of view is equivalent to about 80 light-years at this distance. Since the dust is largely transparent at infrared wavelengths, many young stars that cannot be seen in visible-light images become apparent. The most massive of these stars are less than ten million years old. The new image was created from exposures taken in three different parts of the near-infrared spectrum. In molecular clouds like Monoceros R2, the low temperatures and relatively high densities allow molecules to form, such as hydrogen, which under certain conditions emit strongly in the near infrared. Many of the pink and red structures that appear in the VISTA image are probably the glows from molecular hydrogen in outflows from young stars. Monoceros R2 has a dense core, no more than two light-years in extent, which is packed with very massive young stars, as well as a cluster of bright infrared sources, which are typically new-born massive stars still surrounded by dusty discs. This region lies at the centre of the image, where a much higher concentration of stars is visible on close inspection and where the prominent reddish features probably indicate emission from molecular hydrogen. The rightmost of the bright clouds in the centre of the picture is NGC 2170, the brightest reflection nebula in this region. In visible light, the nebulae appear as bright, light blue islands in a dark ocean, while in the infrared frenetic factories are revealed in their interiors where hundreds of massive stars are coming into existence. 
NGC 2170 is faintly visible through a small telescope and was discovered by William Herschel from England in 1784. Stars form in a process that typically lasts a few million years and which takes place inside large clouds of interstellar gas and dust, hundreds of light-years across. Because the interstellar dust is opaque to visible light, infrared and radio observations are crucial to understanding the earliest stages of stellar evolution. By mapping the southern sky systematically, VISTA will gather some 300 gigabytes per night, providing a huge amount of information on those regions that will be studied in greater detail by the Very Large Telescope (VLT), the Atacama Large Millimeter/submillimeter Array (ALMA) and, in the future, by the European Extremely Large Telescope (E-ELT). Notes [1] With its 4.1-metre primary mirror, VISTA is the largest survey telescope in the world and is equipped with the largest infrared camera on any telescope, with 67 million pixels. It is dedicated to sky surveys, which began early in 2010. Located on a peak next to Cerro Paranal, the home of the ESO VLT in northern Chile, VISTA shares the same exceptional observing conditions. Due to the remarkable quality of the sky in this area of the Atacama Desert, one of the driest sites on Earth, Cerro Armazones, located only 20 km away from Cerro Paranal, has recently been selected as the site for the future E-ELT. More information ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory, and VISTA, the world's largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope, ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
Safety Evaluation of Red Light Running Camera Intersections in Illinois
DOT National Transportation Integrated Search
2017-04-01
As a part of this research, the safety performance of red light running (RLR) camera systems was evaluated for a sample of 41 intersections and 60 RLR camera approaches located on state routes under IDOT's jurisdiction in the Chicago suburbs. Compr...
Development of electronic cinema projectors
NASA Astrophysics Data System (ADS)
Glenn, William E.
2001-03-01
All of the components for the electronic cinema are now commercially available. Sony has a high definition progressively scanned 24 frame per second electronic cinema camera. This can be recorded digitally on tape, on film, or on hard drives in RAID recorders. Much of the post production processing is now done digitally by scanning film, processing it digitally, and recording it on film for release. Fiber links and satellites can transmit cinema program material to theaters in real time. RAID or tape recorders can play programs for viewing at a much lower cost than storage on film. Two companies now have electronic cinema projectors on the market. Of all of the components, the electronic cinema projector is the most challenging. Achieving the resolution, light output, contrast ratio, and color rendition all at the same time without visible artifacts is a difficult task. Film itself is, of course, a form of light-valve. However, electronically modulated light uses other techniques rather than changes in density to control the light. The optical techniques that have been the basis for many electronic light-valves have been under development for over 100 years. Many of these techniques are based on optical diffraction to modulate the light. This paper will trace the history of these techniques and show how they may be extended to produce electronic cinema projectors in the future.
Rapid-Response Low Infrared Emission Broadband Ultrathin Plasmonic Light Absorber
Tagliabue, Giulia; Eghlidi, Hadi; Poulikakos, Dimos
2014-01-01
Plasmonic nanostructures can significantly advance broadband visible-light absorption, with absorber thicknesses in the sub-wavelength regime, much thinner than conventional broadband coatings. Such absorbers have inherently very small heat capacity, hence a very rapid response time, and high light power-to-temperature sensitivity. Additionally, their surface emissivity can be spectrally tuned to suppress infrared thermal radiation. These capabilities make plasmonic absorbers promising candidates for fast light-to-heat applications, such as radiation sensors. Here we investigate the light-to-heat conversion properties of a metal-insulator-metal broadband plasmonic absorber, fabricated as a free-standing membrane. Using a fast IR camera, we show that the transient response of the absorber has a characteristic time below 13 ms, nearly one order of magnitude lower than a similar membrane coated with a commercial black spray. Concurrently, despite the small thickness, due to the large absorption capability, the achieved absorbed light power-to-temperature sensitivity is maintained at the level of a standard black spray. Finally, we show that while black spray has emissivity similar to a black body, the plasmonic absorber features a very low infrared emissivity of about 0.16, demonstrating its capability as a selective coating for applications with operating temperatures up to 400°C, above which the nano-structure starts to deform. PMID:25418040
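If the membrane is treated as a first-order (lumped-capacitance) thermal system, which is an assumption rather than a claim from the paper, the quoted characteristic time has the usual interpretation:

```latex
% Lumped-capacitance (first-order) thermal response: C is the heat capacity,
% G the thermal conductance to the surroundings; small C gives small tau.
\Delta T(t) = \Delta T_{\infty}\left(1 - e^{-t/\tau}\right),
\qquad \tau = \frac{C}{G}
```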
Pinhole Cameras: For Science, Art, and Fun!
ERIC Educational Resources Information Center
Button, Clare
2007-01-01
A pinhole camera is a camera without a lens. A tiny hole replaces the lens, and light is allowed to come in for a short amount of time by means of a hand-operated shutter. The pinhole allows only a very narrow beam of light to enter, which reduces confusion due to scattered light on the film. This results in an image that is focused, reversed, and…
Advances in Measurement of Skin Friction in Airflow
NASA Technical Reports Server (NTRS)
Brown, James L.; Naughton, Jonathan W.
2006-01-01
The surface interferometric skin-friction (SISF) measurement system is an instrument for determining the distribution of surface shear stress (skin friction) on a wind-tunnel model. The SISF system utilizes the established oil-film interference method, along with advanced image-data-processing techniques and mathematical models that express the relationship between interferograms and skin friction, to determine the distribution of skin friction over an observed region of the surface of a model during a single wind-tunnel test. In the oil-film interference method, a wind-tunnel model is coated with a thin film of oil of known viscosity and is illuminated with quasi-monochromatic, collimated light, typically from a mercury lamp. The light reflected from the outer surface of the oil film interferes with the light reflected from the oil-covered surface of the model. In the present version of the oil-film interference method, a camera captures an image of the illuminated model and the image in the camera is modulated by the interference pattern. The interference pattern depends on the oil-thickness distribution on the observed surface, and this distribution can be extracted through analysis of the image acquired by the camera. The oil-film technique is augmented by a tracer technique for observing the streamline pattern. To make the streamlines visible, small dots of fluorescent chalk/oil mixture are placed on the model just before a test. During the test, the chalk particles are embedded in the oil flow and produce chalk streaks that mark the streamlines. The instantaneous rate of thinning of the oil film at a given position on the surface of the model can be expressed as a function of the instantaneous thickness, the skin-friction distribution on the surface, and the streamline pattern on the surface; the functional relationship is expressed by a mathematical model that is nonlinear in the oil-film thickness and is known simply as the thin-oil-film equation. From the image data acquired as described, the time-dependent oil-thickness distribution and streamline pattern are extracted, and by inversion of the thin-oil-film equation it is then possible to determine the skin-friction distribution. In addition to a quasi-monochromatic light source, the SISF system includes a beam splitter and two video cameras equipped with filters for observing the same area on a model in different wavelength ranges, plus a frame grabber and a computer for digitizing the video images and processing the image data. One video camera acquires the interference pattern in a narrow wavelength range of the quasi-monochromatic source. The other video camera acquires the streamline image of fluorescence from the chalk in a nearby but wider wavelength range. The interference-pattern and fluorescence images are digitized, and the resulting data are processed by an algorithm that inverts the thin-oil-film equation to find the skin-friction distribution.
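For reference, the thin-oil-film equation referred to above is commonly written, when pressure-gradient and gravity terms are neglected, as:

```latex
% Thin-oil-film equation, shear-driven form (pressure-gradient and gravity
% terms neglected): h is the oil thickness, mu the oil viscosity, and
% tau_x, tau_z the surface components of the skin friction.
\frac{\partial h}{\partial t}
  + \frac{\partial}{\partial x}\left(\frac{\tau_x h^{2}}{2\mu}\right)
  + \frac{\partial}{\partial z}\left(\frac{\tau_z h^{2}}{2\mu}\right) = 0
```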
Lunar UV-visible-IR mapping interferometric spectrometer
NASA Technical Reports Server (NTRS)
Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.
1992-01-01
An ultraviolet-visible-infrared mapping digital array scanned interferometer for lunar compositional surveys was developed. The research has defined a no-moving-parts, low-weight and low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides a new, important ultraviolet spectral mapping capability, a high-spatial-resolution line scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, and spectral results in support of the instrument design are provided.
STARING INTO THE WINDS OF DESTRUCTION: HST/NICMOS IMAGES OF THE PLANETARY NEBULA NGC 7027
NASA Technical Reports Server (NTRS)
2002-01-01
The Hubble Space Telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) has captured a glimpse of a brief stage in the burnout of NGC 7027, a medium-mass star like our sun. The infrared image (on the left) shows a young planetary nebula in a state of rapid transition. This image alone reveals important new information. When astronomers combine this photo with an earlier image taken in visible light, they have a more complete picture of the final stages of star life. NGC 7027 is going through spectacular death throes as it evolves into what astronomers call a 'planetary nebula.' The term planetary nebula came about not because of any real association with planets, but because in early telescopes these objects resembled the disks of planets. A star can become a planetary nebula after it depletes its nuclear fuel - hydrogen and helium - and begins puffing away layers of material. The material settles into a wind of gas and dust blowing away from the dying star. This NICMOS image captures the young planetary nebula in the middle of a very short evolutionary phase, lasting perhaps less than 1,000 years. During this phase, intense ultraviolet radiation from the central star lights up a region of gas surrounding it. (This gas is glowing brightly because it has been made very hot by the star's intense ultraviolet radiation.) Encircling this hot gas is a cloud of dust and cool molecular hydrogen gas that can only be seen by an infrared camera. The molecular gas is being destroyed by ultraviolet light from the central star. THE INFRARED VIEW -- The composite color image of NGC 7027 (on the left) is among the first data of a planetary nebula taken with NICMOS. This picture is actually composed of three separate images taken at different wavelengths. The red color represents cool molecular hydrogen gas, the most abundant gas in the universe. The image reveals the central star, which is difficult to see in images taken with visible light. Surrounding it is an elongated region of gas and dust cast off by the star. This gas (appearing as white) has a temperature of several tens of thousands of degrees Fahrenheit. The object has two 'cones' of cool molecular hydrogen gas (the red material) glowing in the infrared. The gas has been energized by ultraviolet light from the star - a process known as fluorescence. Most of the material shed by the star remains outside of the bright regions. It is invisible in this image because the layers of material in and near the bright regions are still shielding it from the central star's intense radiation. NGC 7027 is one of the smallest objects of its kind to be imaged by the Hubble telescope. However, the region seen here is approximately 14,000 times the average distance between Earth and the sun. THE INFRARED AND VISIBLE LIGHT VIEW -- This visible and infrared light picture of NGC 7027 (on the right) provides a more complete view of how this planetary nebula is being shaped, revealing steps in its evolution. This image is composed of three exposures, one from the Wide Field and Planetary Camera 2 (WFPC2) and two from NICMOS. The blue represents the WFPC2 image; the green and red, NICMOS exposures. The white is emission from the hot gas surrounding the central star; the red and pink represent emission from cool molecular hydrogen gas. In effect, the colors represent the three layers in the material ejected by the dying star. 
Each layer depicts a change in temperature, beginning with a hot, bright central region, continuing with a thin boundary zone where molecular hydrogen gas is glowing and being destroyed, and ending with a cool, blue outer region of molecular gas and dust. NICMOS has allowed astronomers to clearly see the transition layer from hot, glowing atomic gas to cold molecular gas. The origin of the newly seen filamentary structures is not yet understood. The transition region is clearly seen as the pink- and red-colored cool molecular hydrogen gas. An understanding of the atomic and chemical processes taking place in this transition region is of importance to other areas of astronomy as well, including star formation regions. WFPC2 is best used to study the hot, glowing gas, which is the bright, oval-shaped region surrounding the central star. With WFPC2 we also see material beyond this core with light from the central star that is reflecting off dust in the cold gas surrounding the nebula. Combining exposures from the two cameras allows astronomers to clearly see the way the nebula is being shaped by winds and radiation. This information will help astronomers understand the complexities of stellar evolution. NGC 7027 is located about 3,000 light-years from the sun in the direction of the constellation Cygnus the Swan. Credits: William B. Latter (SIRTF Science Center/Caltech) and NASA Other team investigators are: J. L. Hora (Smithsonian Astrophysical Observatory), J. H. Bieging (Steward Observatory), D. M. Kelly (University of Wyoming), A. Dayal (JPL/Caltech), A.G.G.M. Tielens (University of Groningen), and S. Trammell (University of North Carolina at Charlotte).
Visible-Light-Driven BiOI-Based Janus Micromotor in Pure Water.
Dong, Renfeng; Hu, Yan; Wu, Yefei; Gao, Wei; Ren, Biye; Wang, Qinglong; Cai, Yuepeng
2017-02-08
Light-driven synthetic micro-/nanomotors have attracted considerable attention due to their potential applications and unique performances such as remote motion control and adjustable velocity. Utilizing harmless and renewable visible light to supply energy for micro-/nanomotors in water represents a great challenge. In view of the outstanding photocatalytic performance of bismuth oxyiodide (BiOI), visible-light-driven BiOI-based Janus micromotors have been developed, which can be activated by a broad spectrum of light, including blue and green light. Such BiOI-based Janus micromotors can be propelled by photocatalytic reactions in pure water under environmentally friendly visible light without the addition of any other chemical fuels. The remote control of photocatalytic propulsion by modulating the power of visible light is characterized by velocity and mean-square displacement analysis of optical video recordings. In addition, the self-electrophoresis mechanism has been confirmed for such visible-light-driven BiOI-based Janus micromotors by demonstrating the effects of various coated layers (e.g., Al2O3, Pt, and Au) on the velocity of motors. The successful demonstration of visible-light-driven Janus micromotors holds great promise for future biomedical and environmental applications.
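The mean-square-displacement analysis referred to above is a standard computation over tracked particle positions; a generic sketch, not the authors' analysis code:

```python
# Generic mean-square displacement (MSD) from a tracked 2-D trajectory
# (standard analysis; not the authors' code).
import numpy as np

def msd(xy, max_lag):
    """MSD(lag) for an (N, 2) array of positions at uniform time steps."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = xy[lag:] - xy[:-lag]
        out[lag - 1] = (d ** 2).sum(axis=1).mean()
    return out

# For a self-propelled particle at short lag times, MSD(t) is often fitted
# as 4*D*t + (v*t)**2 to estimate the diffusion coefficient D and speed v.
```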
HUBBLE SPIES A REALLY COOL STAR
NASA Technical Reports Server (NTRS)
2002-01-01
This is a Hubble Space Telescope picture of one of the least massive and coolest stars ever seen (upper right). It is a diminutive companion to the K dwarf star called GL 105A (also known as HD 16160) seen at lower left. The binary pair is located 27 light-years away in the constellation Cetus. Based on the Hubble observation, astronomers calculate that the companion, called GL 105C, is 25,000 times fainter than GL 105A in visible light. If the dim companion were at the distance of our Sun, it would be only four times brighter than the full moon. The Hubble observations confirm the detection of GL 105C last year by David Golimowski and his collaborators at Palomar Observatory in California. Although GL 105C was identified before, the Hubble view allows a more precise measurement of the separation between the binary components. Future Hubble observations of the binary orbit will allow the masses of both stars to be determined accurately. The Palomar group estimates that the companion's mass is 8-9 percent of the Sun's mass, which places it near the theoretical lower limit for stable hydrogen burning. Objects below this limit, called brown dwarfs, still 'shine' -- not by thermonuclear energy, but by the energy released through gravitational contraction. Two pictures, taken with Hubble's Wide Field Planetary Camera 2 (in PC mode) through different filters (in visible and near-infrared light), show that GL 105C is redder, hence cooler, than GL 105A. The surface temperature of GL 105C is not precisely known, but may be as low as 2,600 degrees Kelvin (4,200 degrees Fahrenheit). This image was taken in near-infrared light, on January 5, 1995. GL 105C is located 3.4 arc seconds to the west-northwest of the larger GL 105A. (One arc second equals 1/3600 of a degree.) The bright spikes are caused by diffraction of light within the telescope's optical system, and the brighter white bar is an artifact of the CCD camera, which bleeds along a CCD column when a relatively bright object is in the field of view. These observations are part of a Guaranteed Time Observing Program for which William Fastie (the Johns Hopkins University, Baltimore, MD) and Dan Schroeder (Beloit College, Beloit, WI) were co-principal investigators. Credit: D. Golimowski (Johns Hopkins University), and NASA Image files in GIF and JPEG format may be accessed on Internet via anonymous ftp from oposite.stsci.edu in /pubinfo:
Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.
2014-10-01
A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, and allows for multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though the plenoptic camera is designed to process incoherent images, we found that it shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results will be demonstrated, and an improved version of this modified plenoptic camera will be discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide systems in adaptive optics to make intelligent analysis and corrections.
Kim, Yeo Jin; Kim, Hyoung-June; Kim, Hye Lim; Kim, Hyo Jeong; Kim, Hyun Soo; Lee, Tae Ryong; Shin, Dong Wook; Seo, Young Rok
2017-02-01
The phototherapeutic effects of visible red light on skin have been extensively investigated, but the underlying biological mechanisms remain poorly understood. We aimed to elucidate the protective mechanism of visible red light in terms of DNA repair of UV-induced oxidative damage in normal human dermal fibroblasts. The protective effect of visible red light on UV-induced DNA damage was identified by several assays in both two-dimensional and three-dimensional cell culture systems. With regard to the protective mechanism of visible red light, our data showed alterations in base excision repair mediated by growth arrest and DNA damage inducible alpha (GADD45A). We also observed an enhancement of the physical interaction of GADD45A and apurinic/apyrimidinic endonuclease 1 (APE1) by visible red light. Moreover, UV-induced DNA damage was diminished by visible red light in an APE1-dependent manner. On the basis of the decrease in GADD45A-APE1 interaction in the activating transcription factor-2 (ATF2)-knockdown system, we suggest a role for ATF2 modulation in GADD45A-mediated DNA repair upon visible red light exposure. Thus, the enhancement of GADD45A-mediated base excision repair modulated by ATF2 might be a potential protective mechanism of visible red light.
NASA Technical Reports Server (NTRS)
2005-01-01
[Figures: Eta Carinae Star-forming Region; Simulated Infrared View of Comet Tempel 1 (artist's concept)] This false-color image taken by NASA's Spitzer Space Telescope shows the 'South Pillar' region of the star-forming region called the Carina Nebula. Like cracking open a watermelon and finding its seeds, the infrared telescope 'busted open' this murky cloud to reveal star embryos (yellow or white) tucked inside finger-like pillars of thick dust (pink). Hot gases are green and foreground stars are blue. Not all of the newfound star embryos can be easily spotted. Though the nebula's most famous and massive star, Eta Carinae, is too bright to be observed by infrared telescopes, the downward-streaming rays hint at its presence above the picture frame. Ultraviolet radiation and stellar winds from Eta Carinae and its siblings have shredded the cloud to pieces, leaving a mess of tendrils and pillars. This shredding process triggered the birth of the new stars uncovered by Spitzer. The inset visible-light picture (figure 2) of the Carina Nebula shows quite a different view. Dust pillars are fewer and appear dark because the dust is soaking up visible light. Spitzer's infrared detectors cut through this dust, allowing it to see the heat from warm, embedded star embryos, as well as deeper, more buried pillars. The visible-light picture is from the National Optical Astronomy Observatory. Eta Carinae is a behemoth of a star, with more than 100 times the mass of our Sun. It is so massive that it can barely hold itself together. Over the years, it has brightened and faded as material has shot away from its surface. Some astronomers think Eta Carinae might die in a supernova blast within our lifetime. Eta Carinae's home, the Carina Nebula, is located in the southern portion of our Milky Way galaxy, 10,000 light-years from Earth. This colossal cloud of gas and dust stretches across 200 light-years of space. Though it is dominated by Eta Carinae, it also houses the star's slightly less massive siblings, in addition to the younger generations of stars. This image was taken by the infrared array camera on Spitzer. It is a three-color composite of invisible light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange), and 8.0 microns (red). The movie begins with a visible-light picture of the southern region of our Milky Way galaxy then slowly zooms into the area imaged by Spitzer.
Clouds Sailing Overhead on Mars, Enhanced
2017-08-09
Wispy clouds float across the Martian sky in this accelerated sequence of enhanced images from NASA's Curiosity Mars rover. The rover's Navigation Camera (Navcam) took these eight images over a span of four minutes early in the morning of the mission's 1,758th Martian day, or sol (July 17, 2017), aiming nearly straight overhead. They have been processed by first making a "flat field" adjustment for known differences in sensitivity among pixels and correcting for camera artifacts due to light reflecting within the camera, and then generating an "average" of all the frames and subtracting that average from each frame. This subtraction results in emphasizing any changes due to movement or lighting. The clouds are also visible, though fainter, in a raw image sequence from these same observations. On the same Martian morning, Curiosity also observed clouds near the southern horizon. The clouds resemble Earth's cirrus clouds, which are ice crystals at high altitudes. These Martian clouds are likely composed of crystals of water ice that condense onto dust grains in the cold Martian atmosphere. Cirrus wisps appear as ice crystals fall and evaporate in patterns known as "fall streaks" or "mare's tails." Such patterns have been seen before at high latitudes on Mars, for instance by the Phoenix Mars Lander in 2008, and seasonally nearer the equator, for instance by the Opportunity rover. However, Curiosity has not previously observed such clouds so clearly visible from the rover's study area about five degrees south of the equator. The Hubble Space Telescope and spacecraft orbiting Mars have observed a band of clouds to appear near the Martian equator around the time of the Martian year when the planet is farthest from the Sun. With a more elliptical orbit than Earth's, Mars experiences more annual variation than Earth in its distance from the Sun. The most distant point in an orbit around the Sun is called the aphelion. The near-equatorial Martian cloud pattern observed at that time of year is called the "aphelion cloud belt." These new images from Curiosity were taken about two months before aphelion, but the morning clouds observed may be an early stage of the aphelion cloud belt. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21841
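The average-and-subtract step described above is straightforward to express; a schematic version, with the flat-field and internal-reflection corrections omitted:

```python
# Schematic mean-frame subtraction to emphasize moving clouds
# (flat-field and internal-reflection corrections omitted).
import numpy as np

def emphasize_motion(frames):
    """frames: (N, H, W) array of co-registered exposures; returns the
    mean-subtracted stack, in which static background cancels out."""
    stack = frames.astype(float)
    mean_frame = stack.mean(axis=0)     # estimate of the static background
    return stack - mean_frame           # residuals highlight change
```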
Reasoning About Visibility in Mirrors: A Comparison Between a Human Observer and a Camera.
Bertamini, Marco; Soranzo, Alessandro
2018-01-01
Human observers make errors when predicting what is visible in a mirror. This is true for perception with real mirrors as well as for reasoning about mirrors shown in diagrams. We created an illustration of a room, a top-down view, with a mirror on a wall and objects (nails) on the opposite wall. The task was to select which nails were visible in the mirror from a given position (viewpoint). To study the importance of the social nature of the viewpoint, we divided the sample (N = 108) into two groups. One group (n = 54) was tested with a scene in which there was the image of a person. The other group (n = 54) was tested with the same scene but with a camera replacing the person. Participants were instructed to think about what would be captured by a camera on a tripod. This manipulation tests the effect of social perspective-taking in reasoning about mirrors. As predicted, performance on the task shows an overestimation of what can be seen in a mirror and a bias to underestimate the role of the different viewpoints, that is, a tendency to treat the mirror as if it captures information independently of viewpoint. In terms of the comparison between person and camera, there were more errors for the camera, suggesting an advantage for evaluating a human viewpoint as opposed to an artificial viewpoint. We suggest that social mechanisms may be involved in perspective-taking in reasoning rather than in automatic attention allocation.
Highly Transparent, Visible-Light Photodetector Based on Oxide Semiconductors and Quantum Dots.
Shin, Seung Won; Lee, Kwang-Ho; Park, Jin-Seong; Kang, Seong Jun
2015-09-09
Highly transparent phototransistors that can detect visible light have been fabricated by combining indium-gallium-zinc oxide (IGZO) and quantum dots (QDs). A wide-band-gap IGZO film was used as a transparent semiconducting channel, while small-band-gap QDs were adopted to absorb and convert visible light to an electrical signal. Typical IGZO thin-film transistors (TFTs) did not show a photocurrent with illumination of visible light. However, IGZO TFTs decorated with QDs showed enhanced photocurrent upon exposure to visible light. The device showed a responsivity of 1.35×10⁴ A/W and an external quantum efficiency of 2.59×10⁴ under illumination by a 635 nm laser. The origin of the increased photocurrent in the visible light was the small band gap of the QDs combined with the transparent IGZO films. Therefore, transparent phototransistors based on IGZO and QDs were fabricated and characterized in detail. The result is relevant for the development of highly transparent photodetectors that can detect visible light.
NASA Technical Reports Server (NTRS)
2000-01-01
MISR images of tropical northern Australia acquired on June 1, 2000 (Terra orbit 2413) during the long dry season. Left: color composite of vertical (nadir) camera blue, green, and red band data. Right: multi-angle composite of red band data only from the cameras viewing 60 degrees aft, 60 degrees forward, and nadir. Color and contrast have been enhanced to accentuate subtle details. In the left image, color variations indicate how different parts of the scene reflect light differently at blue, green, and red wavelengths; in the right image color variations show how these same scene elements reflect light differently at different angles of view. Water appears in blue shades in the right image, for example, because glitter makes the water look brighter at the aft camera's view angle. The prominent inland water body is Lake Argyle, the largest human-made lake in Australia, which supplies water for the Ord River Irrigation Area and the town of Kununurra (pop. 6500) just to the north. At the top is the southern edge of Joseph Bonaparte Gulf; the major inlet at the left is Cambridge Gulf, the location of the town of Wyndham (pop. 850), the port for this region. This area is sparsely populated, and is known for its remote, spectacular mountains and gorges. Visible along much of the coastline are intertidal mudflats of mangroves and low shrubs; to the south the terrain is covered by open woodland merging into open grassland in the lower half of the pictures.
MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rathore, Kavita, E-mail: kavira@iitk.ac.in; Munshi, Prabhat, E-mail: pmunshi@iitk.ac.in; Bhattacharjee, Sudeep, E-mail: sudeepb@iitk.ac.in
A new non-invasive diagnostic system is developed for Microwave Induced Plasma (MIP) to reconstruct tomographic images of a 2D emission profile. A compact MIP system has wide application in industry as well as in research, such as thrusters for space propulsion, high current ion beams, and creation of negative ions for heating of fusion plasma. The emission profile depends on two crucial parameters, namely, the electron temperature and density (over the entire spatial extent) of the plasma system. Emission tomography provides basic understanding of plasmas and it is very useful to monitor the internal structure of plasma phenomena without disturbing the actual processes. This paper presents the development of a compact, modular, and versatile Optical Emission Tomography (OET) tool for a cylindrical, magnetically confined MIP system. It has eight slit-hole cameras, each consisting of a complementary metal-oxide-semiconductor linear image sensor for light detection. The optical noise is reduced by using an aspheric lens and interference band-pass filters in each camera. The entire cylindrical plasma can be scanned with an automated sliding ring mechanism arranged in fan-beam data collection geometry. The design of the camera includes a unique possibility to incorporate different filters to get light of a particular wavelength from the plasma. This OET system includes band-pass filters selected for the argon emission lines at 750 nm, 772 nm, and 811 nm and the hydrogen emission Hα (656 nm) and Hβ (486 nm) lines. A convolution back-projection algorithm is used to obtain the tomographic images of plasma emission lines. The paper mainly focuses on (a) the design of the OET system in detail and (b) a study of the emission profile for the 750 nm argon emission line to validate the system design.
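A compact illustration of convolution back-projection is given below for parallel-beam geometry; it is a simplified stand-in, since the OET system collects fan-beam data, which requires rebinning or fan-beam weighting not shown here:

```python
# Simplified parallel-beam convolution back-projection (the actual system
# uses fan-beam geometry; rebinning/weighting is omitted here).
import numpy as np
from scipy.ndimage import rotate

def fbp(sinogram, angles_deg):
    """sinogram: (n_angles, n_det); returns an (n_det, n_det) reconstruction."""
    n_det = sinogram.shape[1]
    freqs = np.fft.fftfreq(n_det)
    ramp = np.abs(freqs)                          # ramp (Ram-Lak) filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, angles_deg):
        smear = np.tile(proj, (n_det, 1))         # back-project one view
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))
```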
Edge Turbulence Imaging in Alcator C-Mod
NASA Astrophysics Data System (ADS)
Zweben, Stewart J.
2001-10-01
This talk will describe measurements and modeling of the 2-D structure of edge turbulence in Alcator C-Mod. The radial vs. poloidal structure was measured using Gas Puff Imaging (GPI) (R. Maqueda et al, RSI 72, 931 (2001); J. Terry et al, J. Nucl. Materials 290-293, 757 (2001)), in which the visible light emitted by an edge neutral gas puff (generally D or He) is viewed along the local magnetic field by a fast-gated video camera. Strong fluctuations are observed in the gas cloud light emission when the camera is gated at ~2 microsec exposure time per frame. The structure of these fluctuations is highly turbulent with a typical radial and poloidal scale of ≈1 cm, and often with local maxima in the scrape-off layer (i.e., "blobs"). Video clips and analyses of these images will be presented along with their variation in different plasma regimes. The local time dependence of edge turbulence is measured using high-speed photodiodes viewing the gas puff emission, a scanning Langmuir probe, and also with a Princeton Scientific Instruments ultra-fast framing camera, which can make 2-D images of the gas puff at up to 200,000 frames/sec. Probe measurements show that the strong turbulence region moves to the separatrix as the density limit is approached, which may be connected to the density limit (B. LaBombard et al., Phys. Plasmas 8, 2107 (2001)). Comparisons of this C-Mod turbulence data will be made with results of simulations from the Drift-Ballooning Mode (DBM) (B.N. Rogers et al, Phys. Rev. Lett. 20, 4396 (1998)) and Non-local Edge Turbulence (NLET) codes.
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry, as well as surface pressure distribution, with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off, and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models with relatively large curvature, and the pressure results compare well with the Schlieren tests, analytical calculations, and numerical simulations.
Skupsch, C; Chaves, H; Brücker, C
2011-08-01
The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single-CCD cameras. The laser provides light pulses with an energy in the range of 25 mJ and a duration of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescence dye in a cuvette; the laser beam's coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination. Different light fiber lengths enable a delay of illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 × 10⁶ frames per second. As a demonstration, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed. The initial phase of the formation of the jet structure is recorded.
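The timing in this abstract follows from simple fiber optics: the interframe delay produced by an extra fiber length L is t = L·n/c. A quick sanity check in Python (the core refractive index of 1.5 is an assumption; the abstract does not give fiber parameters):

```python
C = 299_792_458.0    # speed of light in vacuum, m/s
N_CORE = 1.5         # assumed fiber core refractive index

interframe = 25e-9                       # 25 ns between consecutive exposures
fps = 1.0 / interframe                   # -> 40e6 frames per second
extra_fiber = interframe * C / N_CORE    # ~5 m of extra fiber per frame step
print(f"{fps:.0f} fps, {extra_fiber:.2f} m extra fiber per 25 ns delay")
```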
Efficient resource allocation scheme for visible-light communication system
NASA Astrophysics Data System (ADS)
Kim, Woo-Chan; Bae, Chi-Sung; Cho, Dong-Ho; Shin, Hong-Seok; Jung, D. K.; Oh, Y. J.
2009-01-01
A visible-light communication system utilizing LEDs has many advantages, such as visibility of information, high SNR (signal-to-noise ratio), low installation cost, use of existing illuminators, and high security. Furthermore, the exponentially increasing demand for, and quality of, LEDs has helped the development of visible-light communication. Visibility is the most attractive property of a visible-light communication system, but it is difficult to ensure visibility and transmission efficiency simultaneously during initial access because of the small number of initial access process signals. In this paper, we propose an efficient resource allocation scheme at initial access for ensuring visibility with a high resource utilization rate and a low data transmission failure rate. The performance has been evaluated through numerical analysis and simulation results.
3D shape measurement with thermal pattern projection
NASA Astrophysics Data System (ADS)
Brahm, Anika; Reetz, Edgar; Schindwolf, Simon; Correns, Martin; Kühmstedt, Peter; Notni, Gunther
2016-12-01
Structured light projection techniques are well-established optical methods for contactless and nondestructive three-dimensional (3D) measurements. Most systems operate in the visible wavelength range (VIS) due to commercially available projection and detection technology. For example, the 3D reconstruction can be done with a stereo-vision setup by finding corresponding pixels in both cameras, followed by triangulation. Problems occur if the properties of the object materials disturb the measurements, which are based on diffuse light reflections. For example, some materials are too transparent, translucent, highly absorbent, or reflective in the VIS range and cannot be recorded properly. To overcome these challenges, we present an alternative thermal approach that operates in the infrared (IR) region of the electromagnetic spectrum. For this purpose, we used two cooled mid-wave infrared (MWIR) cameras (3-5 μm) to detect emitted heat patterns, which were introduced by a CO2 laser. We present a thermal 3D system based on a GOBO (GOes Before Optics) wheel projection unit and first 3D analyses for different system parameters and samples. We also show a second, alternative approach based on an incoherent (heat) source to overcome typical disadvantages of high-power laser-based systems, such as industrial health and safety considerations as well as high investment costs. Thus, materials like glass or fiber-reinforced composites can be measured contactless and without the need for additional coatings.
Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang
2016-01-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312
2002-04-01
This picture of the galaxy UGC 10214 was taken by the Advanced Camera for Surveys (ACS), which was installed aboard the Hubble Space Telescope (HST) in March 2002 during HST Servicing Mission 3B (STS-109 mission). Dubbed the "Tadpole," this spiral galaxy is unlike the textbook images of stately galaxies. Its distorted shape was caused by a small interloper, a very blue, compact galaxy visible in the upper left corner of the more massive Tadpole. The Tadpole resides about 420 million light-years away in the constellation Draco. Seen shining through the Tadpole's disk, the tiny intruder is likely a hit-and-run galaxy that is now leaving the scene of the accident. Strong gravitational forces from the interaction created the long tail of debris, consisting of stars and gas, that stretches out more than 280,000 light-years. The galactic carnage and torrent of star birth are playing out against a spectacular backdrop: a "wallpaper pattern" of 6,000 galaxies. These galaxies represent twice the number of those discovered in the legendary Hubble Deep Field, the orbiting observatory's "deepest" view of the heavens, taken in 1995 by the Wide Field and Planetary Camera 2. The ACS picture, however, was taken in one-twelfth of the time it took to observe the original HST Deep Field. In blue light, ACS sees even fainter objects than were seen in the "deep field." The galaxies in the ACS picture, like those in the deep field, stretch back to nearly the beginning of time. Credit: NASA, H. Ford (JHU), G. Illingworth (UCSC/LO), M. Clampin (STScI), G. Hartig (STScI), the ACS Science Team, and ESA.
1996-01-01
used to locate and characterize a magnetic dipole source, and this finding accelerated the development of superconducting tensor gradiometers for ... superconducting magnetic field gradiometer, two-color infrared camera, synthetic aperture radar, and a visible spectrum camera. The combination of these ... prediction for UXO shape and orientation effects on magnetic ...
Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...
PhenoCam Dataset v1.0: Vegetation Phenology from Digital Camera Imagery, 2000-2015
USDA-ARS?s Scientific Manuscript database
This data set provides a time series of vegetation phenological observations for 133 sites across diverse ecosystems of North America and Europe from 2000-2015. The phenology data were derived from conventional visible-wavelength automated digital camera imagery collected through the PhenoCam Networ...
A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.
Shen, Bailey Y; Mukai, Shizuo
2017-01-01
Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.
DOT National Transportation Integrated Search
2006-03-01
This report presents results from an analysis of about 47,000 red light violation records collected from 11 intersections in the City of Sacramento, California, by red light photo enforcement cameras between May 1999 and June 2003. The goal of this...
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases the reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8-14 μm spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640×480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
NASA Astrophysics Data System (ADS)
Shinoj, V. K.; Murukeshan, V. M.; Hong, Jesmond; Baskaran, M.; Aung, Tin
2015-07-01
Noninvasive medical imaging techniques have generated great interest and show high potential in the research and development of ocular imaging and follow-up procedures. It is well known that angle-closure glaucoma is one of the major ocular diseases/conditions that cause blindness. The identification and treatment of this disease are related primarily to angle assessment techniques. In this paper, we illustrate a probe-based imaging approach to obtain images of the angle region of the eye. The proposed probe consists of a micro CCD camera and LED/NIR laser light sources, configured at the distal end to enable imaging of the iridocorneal region inside the eye. With this proposed dual-modal probe, imaging is performed in light (white visible LED on) and dark (NIR laser light source alone) conditions, and the angle region is discernible in both cases. Imaging using NIR sources has particular significance in anterior chamber imaging, since it avoids pupil constriction due to bright light and thereby the artificial alteration of the anterior chamber angle. The proposed methodology and developed scheme are expected to find potential application in glaucoma detection and diagnosis.
Biological applications of an LCoS-based programmable array microscope (PAM)
NASA Astrophysics Data System (ADS)
Hagen, Guy M.; Caarls, Wouter; Thomas, Martin; Hill, Andrew; Lidke, Keith A.; Rieger, Bernd; Fritsch, Cornelia; van Geest, Bert; Jovin, Thomas M.; Arndt-Jovin, Donna J.
2007-02-01
We report on a new-generation, commercial prototype of a programmable array optical sectioning fluorescence microscope (PAM) for rapid, light-efficient 3D imaging of living specimens. The stand-alone module, including light source(s) and detector(s), features an innovative optical design and a ferroelectric liquid-crystal-on-silicon (LCoS) spatial light modulator (SLM) instead of the DMD used in the original PAM design. The LCoS PAM (developed in collaboration with Cairn Research, Ltd.) can be attached to a port of a(ny) unmodified fluorescence microscope. The prototype system currently operated at the Max Planck Institute incorporates a 6-position high-intensity LED illuminator, modulated laser and lamp light sources, and an Andor iXon emCCD camera. The module is mounted on an Olympus IX71 inverted microscope with 60-150X objectives and Prior Scientific x, y, and z high-resolution scanning stages. Recent enhancements include: (i) point- and line-wise spectral resolution and (ii) lifetime imaging (FLIM) in the frequency domain. Multiphoton operation and other nonlinear techniques should be feasible. The capabilities of the PAM are illustrated by several examples demonstrating single-molecule as well as lifetime imaging in live cells, and the unique capability to perform photoconversion with arbitrary patterns and high spatial resolution. Using quantum-dot-coupled ligands, we show real-time binding and subsequent trafficking of individual ligand-growth factor receptor complexes on and in live cells with a temporal resolution and sensitivity exceeding those of conventional CLSM systems. The combined use of a blue laser and parallel LED or visible laser sources permits photoactivation and rapid kinetic analysis of cellular processes probed by photoswitchable visible fluorescent proteins such as DRONPA.
Photometric Assessment of Night Sky Quality over Chaco Culture National Historical Park
NASA Astrophysics Data System (ADS)
Hung, Li-Wei; Duriscoe, Dan M.; White, Jeremy M.; Meadows, Bob; Anderson, Sharolyn J.
2018-06-01
The US National Park Service (NPS) characterizes night sky conditions over Chaco Culture National Historical Park using measurements in the park and satellite data. The park is located near the geographic center of the San Juan Basin of northwestern New Mexico and the adjacent Four Corners states. In the park, we capture a series of night sky images in V-band using our mobile camera system on nine nights from 2001 to 2016 at four sites. We perform absolute photometric calibration and determine the image placement to obtain multiple 45-million-pixel mosaic images of the entire night sky. We also model the regional night sky conditions in and around the park based on 2016 VIIRS satellite data. The average zenith brightness is 21.5 mag/arcsec², and the whole sky is only ~16% brighter than natural conditions. The faintest stars visible to the naked eye have a magnitude of approximately 7.0, reaching the sensitivity limit of human eyes. The main impacts on Chaco's night sky quality are the light domes from Albuquerque, Rio Rancho, Farmington, Bloomfield, Gallup, Santa Fe, Grants, and Crown Point. A few of these light domes exceed the natural brightness of the Milky Way. Additionally, glare sources from oil and gas development sites are visible along the north and east horizons. Overall, the night sky quality at Chaco Culture National Historical Park is very good. The park preserves to a large extent the natural illumination cycles, providing a refuge for crepuscular and nocturnal species. During clear and dark nights, visitors have an opportunity to see the Milky Way from nearly horizon to horizon, complete constellations, and faint astronomical objects and natural sources of light such as the Andromeda Galaxy, zodiacal light, and airglow.
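The "~16% brighter" figure follows directly from the logarithmic magnitude scale: a brightness ratio r corresponds to a magnitude difference Δm = -2.5 log10(r). A quick check in Python (the natural-sky reference of 21.66 mag/arcsec² is an illustrative assumption, not a value quoted in the abstract):

```python
# Pogson relation: surface-brightness ratio from a magnitude difference
natural = 21.66   # assumed natural zenith sky brightness, mag/arcsec^2
observed = 21.5   # measured average zenith brightness from the abstract
ratio = 10 ** (-0.4 * (observed - natural))
print(f"sky is {100 * (ratio - 1):.0f}% brighter than natural")  # ~16%
```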
NASA Technical Reports Server (NTRS)
2004-01-01
In the quest to better understand the birth of stars and the formation of new worlds, astronomers have used NASA's Spitzer Space Telescope to examine the massive stars contained in a cloudy region called Sharpless 140. This cloud is a fascinating microcosm of a star-forming region since it exhibits, within a relatively small area, all of the classic manifestations of stellar birth. Sharpless 140 lies almost 3000 light-years from Earth in the constellation Cepheus. At its heart is a cluster of three deeply embedded young stars, which are each several thousand times brighter than the Sun. Though they are strikingly visible in this image from Spitzer's infrared array camera, they are completely obscured in visible light, buried within the core of the surrounding dust cloud. The extreme youth of at least one of these stars is indicated by the presence of a stream of gas moving at high velocities. Such outflows are signatures of the processes surrounding a star that is still gobbling up material as part of its formation. The bright red bowl, or arc, seen in this image traces the outer surface of the dense dust cloud encasing the young stars. This arc is made up primarily of organic compounds called polycyclic aromatic hydrocarbons, which glow on the surface of the cloud. Ultraviolet light from a nearby bright star outside of the image is "eating away" at these molecules. Eventually, this light will destroy the dust envelope and the masked young stars will emerge. This false-color image was taken on Oct. 11, 2003 and is composed of photographs obtained at four wavelengths: 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8 microns (red).
2004-05-11
http://photojournal.jpl.nasa.gov/catalog/PIA05878
2004-10-07
Four hundred years ago, sky watchers, including the famous astronomer Johannes Kepler, best known as the discoverer of the laws of planetary motion, were startled by the sudden appearance of a new star in the western sky, rivaling the brilliance of the nearby planets. Modern astronomers, using NASA's three orbiting Great Observatories, are unraveling the mysteries of the expanding remains of Kepler's supernova, the last such object seen to explode in our Milky Way galaxy. When the new star appeared on Oct. 9, 1604, observers could use only their eyes to study it. The telescope would not be invented for another four years. A team of modern astronomers has the combined abilities of NASA's Great Observatories, the Spitzer Space Telescope (SST), the Hubble Space Telescope (HST), and the Chandra X-Ray Observatory (CXO), to analyze the remains in infrared radiation, visible light, and X-rays. Visible-light images from Hubble's Advanced Camera for Surveys reveal where the supernova shock wave is slamming into the densest regions of surrounding gas. The astronomers used the SST to probe for material that radiates in infrared light, which shows heated microscopic dust particles that have been swept up by the supernova shock wave. The CXO data show regions of very hot gas. The combined image unveils a bubble-shaped shroud of gas and dust, 14 light-years wide and expanding at 4 million mph. There have been six known supernovas in our Milky Way over the past 1,000 years. Kepler's is the only one for which astronomers do not know what type of star exploded. By combining information from all three Great Observatories, astronomers may find the clues they need. Project management for both the HST and CXO programs is the responsibility of NASA's Marshall Space Flight Center in Huntsville, Alabama.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of conventional digital cameras and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of the incident light in our DMD camera can be flexibly modulated, enabling camera pixels always to receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensities and recover high dynamic range images. Through experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
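A minimal sketch of the per-pixel idea, under stated assumptions: if each pixel's effective integration time is known from the DMD modulation, dividing by it linearizes the readout into a radiance estimate, and an adaptive rule can then retune the per-pixel exposure. The thresholds and update rule below are illustrative guesses, not the authors' algorithm:

```python
import numpy as np

def recover_radiance(img, exposure):
    """Per-pixel radiance estimate: each pixel was integrated for its own
    DMD-controlled exposure time, so dividing by that time linearizes it."""
    return img.astype(float) / exposure

def update_exposure(img, exposure, lo=30, hi=220, full=1.0):
    """Simple adaptive rule (an assumption, not the paper's algorithm):
    halve exposure where pixels saturate, double it where they are too dark."""
    exposure = exposure.copy()
    exposure[img >= hi] *= 0.5
    exposure[img <= lo] = np.minimum(exposure[img <= lo] * 2.0, full)
    return exposure
```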
Arakawa, Takahiro; Sato, Toshiyuki; Iitani, Kenta; Toma, Koji; Mitsubayashi, Kohji
2017-04-18
Various volatile organic compounds can be found in human transpiration, breath, and body odor. In this paper, a novel two-dimensional fluorometric imaging system, known as a "sniffer-cam," for ethanol vapor released from human breath and palm skin was constructed and validated. This imaging system measures ethanol vapor concentrations as intensities of fluorescence through an enzymatic reaction induced by alcohol dehydrogenase (ADH). The imaging system consists of a multiple ultraviolet light-emitting diode (UV-LED) excitation sheet, an ADH enzyme-immobilized mesh substrate, and a highly sensitive CCD camera. The system uses ADH for recognition of ethanol vapor: it measures ethanol vapor via the fluorescence of nicotinamide adenine dinucleotide (NADH), which is produced by the enzymatic reaction on the mesh. This NADH fluorometric imaging system achieved two-dimensional real-time imaging of ethanol vapor distribution (0.5-200 ppm). The system showed rapid and accurate responses and a visible measurement, which could enable real-time analysis of metabolic function in the near future.
First photometric properties of Dome C, Antarctica
NASA Astrophysics Data System (ADS)
Chadid, M.; Vernin, J.; Jeanneaux, F.; Mekarnia, D.; Trinquet, H.
2008-07-01
Here we present the first photometric extinction measurements in the visible range performed at Dome C in Antarctica, using the PAIX photometer (Photometer AntarctIca eXtinction). It is made with off-the-shelf components: an Audine camera at the focus of the Blazhko telescope, a Meade M16 diaphragmed down to 15 cm. For an exposure time of 60 s without filter, a 10th V-magnitude star is measured with a precision of 1/100 mag. A first statistical analysis over 16 nights in August 2007 yields an extinction of 0.5 magnitude per air mass, possibly due to high-altitude cirrus. This rather simple experiment shows that continuous observations can be performed at Dome C, allowing high frequency resolution in pulsation and asteroseismology studies. Light curves of one RR Lyrae star, S Ara, were established; they show the typical behavior of an RR Lyrae star. A more sophisticated photometer, PAIX II, was installed at Dome C during the polar summer of 2008, with an ST10 XME camera, automatic guiding, autofocusing, and Johnson/Bessel UBVRI filter wheels.
NASA Astrophysics Data System (ADS)
Egli, Pascal; Mankoff, Ken; Mettra, François; Lane, Stuart
2017-04-01
This study investigates the application of feature tracking algorithms to the monitoring of glacier uplift. Several publications have confirmed the occurrence of an uplift of the glacier surface in the late morning hours of the mid to late ablation season. This uplift is thought to be caused by high subglacial water pressures at the onset of melt, when overnight-deposited sediment blocks subglacial channels. We use time-lapse images from a camera mounted in front of the glacier tongue of Haut Glacier d'Arolla during August 2016, in combination with a digital elevation model and GPS measurements, to investigate the phenomenon of glacier uplift using the feature tracking toolbox ImGRAFT. The camera position is corrected for all images, and the images are geo-rectified using ground control points visible in every image. Changing lighting conditions due to different sun angles create substantial noise and complicate the image analysis. A small glacier uplift of the order of 5 cm over a time span of 3 hours may be observed on certain days, confirming previous research.
Stephey, L.; Wurden, G. A.; Schmitz, O.; ...
2016-08-08
A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data on limiter and first-wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Together, both systems provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements at several points on the W7-X vacuum vessel to yield wall recycling fluxes. Finally, the resulting photon flux from both the visible camera and the filterscopes can be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, both of which will ultimately be used to infer the complete particle balance and the particle confinement time τ_P.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high-frame-rate visible CCD camera capable of operation at up to 200 frames per second is described. The camera produces a 256 × 256-pixel image by using one quadrant of a 512 × 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 × 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
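To make the reformatting step concrete, here is a toy sketch of stitching four port read-outs back into one quadrant image. The stripe width and the mirrored read-out of alternate ports are assumptions made for illustration; the paper's actual port geometry may differ:

```python
import numpy as np

def reformat_ports(ports):
    """Reassemble four 256x64 port read-outs into one 256x256 image.

    Assumption (for illustration only): each of the four contiguous ports
    digitizes a 64-column stripe and the odd-numbered ports are read out
    mirrored, as is common for multi-port CCDs."""
    stripes = []
    for i, p in enumerate(ports):          # p: (256 rows, 64 cols)
        stripes.append(p[:, ::-1] if i % 2 else p)
    return np.hstack(stripes)              # -> (256, 256)

image = reformat_ports([np.random.rand(256, 64) for _ in range(4)])
print(image.shape)  # (256, 256)
```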
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
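As a rough sketch of the learning component, the PyTorch model below applies an SRCNN-style residual network to a single bicubically upsampled sub-aperture view. Layer sizes and the residual formulation are illustrative assumptions, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class ViewSR(nn.Module):
    """SRCNN-style network for one upsampled sub-aperture view.
    A minimal stand-in for a spatial-SR branch; layer sizes are guesses."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):          # x: (B, 1, H, W) upsampled view
        return x + self.body(x)    # residual learning of high-frequency detail

view = torch.rand(1, 1, 64, 64)
print(ViewSR()(view).shape)        # torch.Size([1, 1, 64, 64])
```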
Short-Wavelength Infrared Views of Messier 81
NASA Technical Reports Server (NTRS)
2003-01-01
The magnificent spiral arms of the nearby galaxy Messier 81 are highlighted in this NASA Spitzer Space Telescope image. Located in the northern constellation of Ursa Major (which also includes the Big Dipper), this galaxy is easily visible through binoculars or a small telescope. M81 is located at a distance of 12 million light-years from Earth. Because of its proximity, M81 provides astronomers with an enticing opportunity to study the anatomy of a spiral galaxy in detail. The unprecedented spatial resolution and sensitivity of Spitzer at infrared wavelengths show a clear separation between the several key constituents of the galaxy: the old stars, the interstellar dust heated by star formation activity, and the embedded sites of massive star formation. The infrared images also permit quantitative measurements of the galaxy's overall dust content, as well as the rate at which new stars are being formed. The infrared image was obtained by Spitzer's infrared array camera. It is a four-color composite of invisible light, showing emissions from wavelengths of 3.6 microns (blue), 4.5 microns (green), 5.8 microns (yellow) and 8.0 microns (red). Winding outward from the bluish-white central bulge of the galaxy, where old stars predominate and there is little dust, the grand spiral arms are dominated by infrared emission from dust. Dust in the galaxy is bathed by ultraviolet and visible light from the surrounding stars. Upon absorbing an ultraviolet or visible-light photon, a dust grain is heated and re-emits the energy at longer infrared wavelengths. The dust particles, composed of silicates (which are chemically similar to beach sand) and polycyclic aromatic hydrocarbons, trace the gas distribution in the galaxy. The well-mixed gas (which is best detected at radio wavelengths) and dust provide a reservoir of raw materials for future star formation. The infrared-bright clumpy knots within the spiral arms denote where massive stars are being born in giant H II (ionized hydrogen) regions. The 8-micron emission traces the regions of active star formation in the galaxy. Studying the locations of these regions with respect to the overall mass distribution and other constituents of the galaxy (e.g., gas) will help identify the conditions and processes needed for star formation. With the Spitzer observations, this information comes to us without complications from absorption by cold dust in the galaxy, which makes interpretation of visible-light features uncertain. The white stars scattered throughout the field of view are foreground stars within our own Milky Way galaxy.
A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.
2003-06-01
The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.
Field trials for determining the visible and infrared transmittance of screening smoke
NASA Astrophysics Data System (ADS)
Sánchez Oliveros, Carmen; Santa-María Sánchez, Guillermo; Rosique Pérez, Carlos
2009-09-01
In order to evaluate the concealment capability of smoke, the Countermeasures Laboratory of the Institute of Technology "Marañosa" (ITM) conducted a set of tests to measure the transmittances of multispectral smoke tins in several bands of the electromagnetic spectrum. The smoke composition, based on red phosphorus, was developed and patented by this laboratory as part of a projectile development. The smoke transmittance was measured by means of thermography as well as spectroradiometry. Black bodies and halogen lamps were used as infrared and visible sources of radiation. The measurements were carried out in June 2008 at the Marañosa field (Spain) with two MWIR cameras, two LWIR cameras, one CCD visible camera, one CVF IR spectroradiometer covering the interval 1.5 to 14 microns, and one silicon-array spectroradiometer for the 0.2 to 1.1 μm range. The transmittance and dimensions of the smoke screen were characterized in the visible band and in the MWIR (3-5 μm) and LWIR (8-12 μm) regions. The screen was about 30 meters wide and 5 meters high. The transmittances were about 0.3 in the IR bands and better than 0.1 in the visible band. The screens proved effective over the time of persistence in all of the tests. The results obtained from the imaging and non-imaging systems were in good agreement. The meteorological conditions during the tests, such as wind speed, are determinant for the use of this kind of optical countermeasure.
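For reference, band-averaged transmittance from imaging data is typically computed as a background-subtracted ratio of the source radiance seen through the screen to the clear-path radiance. The helper below is a minimal sketch under that assumption; the abstract does not spell out ITM's exact reduction procedure:

```python
import numpy as np

def screen_transmittance(radiance_through_smoke, radiance_clear, background):
    """Band-averaged transmittance of a smoke screen from imaging data.

    T = (L_through_smoke - L_background) / (L_clear - L_background)
    The background-subtraction step and variable names are illustrative
    assumptions, not the published test procedure."""
    return (radiance_through_smoke - background) / (radiance_clear - background)

T = screen_transmittance(np.float64(31.0), np.float64(101.0), np.float64(1.0))
print(T)  # 0.3, comparable to the reported IR-band values
```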
External Mask Based Depth and Light Field Camera
2013-12-08
laid out in the previous light field cameras. A good overview of the sampling of the plenoptic function can be found in the survey work by Wetzstein et ... view is shown in Figure 6. 5. Applications: High spatial resolution depth and light fields are a rich source of information about the plenoptic ... http://www.pelicanimaging.com/. [4] E. Adelson and J. Wang. Single lens stereo with a plenoptic camera. Pattern Analysis and Machine Intelligence
Visible light reduces C. elegans longevity.
De Magalhaes Filho, C Daniel; Henriquez, Brian; Seah, Nicole E; Evans, Ronald M; Lapierre, Louis R; Dillin, Andrew
2018-03-02
The transparent nematode Caenorhabditis elegans can sense UV and blue-violet light to alter behavior. Because high-dose UV and blue-violet light are not a common feature outside of the laboratory setting, we asked what role, if any, low-intensity visible light could play in C. elegans physiology and longevity. Here, we show that C. elegans lifespan is inversely correlated with the time worms were exposed to visible light. While circadian control, lite-1 and tax-2 do not contribute to the lifespan reduction, we demonstrate that visible light creates photooxidative stress along with a general unfolded-protein response that decreases the lifespan. Finally, we find that long-lived mutants are more resistant to light stress, as are wild-type worms supplemented pharmacologically with antioxidants. This study reveals that transparent nematodes are sensitive to visible light radiation and highlights the need to standardize methods for controlling the unrecognized biasing effect of light during lifespan studies in laboratory conditions.
Visible light alters yeast metabolic rhythms by inhibiting respiration.
Robertson, James Brian; Davis, Chris R; Johnson, Carl Hirschie
2013-12-24
Exposure of cells to visible light in nature or in fluorescence microscopy is often considered relatively innocuous. However, using the yeast respiratory oscillation (YRO) as a sensitive measurement of metabolism, we find that non-UV visible light has a significant impact on yeast metabolism. Blue/green wavelengths of visible light shorten the period and dampen the amplitude of the YRO, which is an ultradian rhythm of cell metabolism and transcription. The wavelengths of light that have the greatest effect coincide with the peak absorption regions of cytochromes. Moreover, treating yeast with the electron transport inhibitor sodium azide has effects on the YRO similar to those of visible light. Because impairment of respiration by light would change several state variables believed to play vital roles in the YRO (e.g., oxygen tension and ATP levels), we tested oxygen's role in YRO stability and found that externally induced oxygen depletion can reset the phase of the oscillation, demonstrating that respiratory capacity plays a role in the oscillation's period and phase. Light-induced damage to the cytochromes also produces reactive oxygen species that up-regulate the oxidative stress response gene TRX2, which is involved in pathways that enable sustained growth in bright visible light. Therefore, visible light can modulate cellular rhythmicity and metabolism through unexpectedly photosensitive pathways.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Soleymani, Teo; Soter, Nicholas A; Folan, Lorcan M; Elbuluk, Nada; Okereke, Uchenna R; Cohen, David E
2017-04-01
BACKGROUND: While most of the attention regarding skin pigmentation has focused on the effects of ultraviolet radiation, the cutaneous effects of visible light (400 to 700 nm) are rarely reported. In this report, we describe a case of painful erythema and induration that resulted from direct irradiation of UV-naïve skin with visible LED light in a patient with Fitzpatrick type II skin.
METHODS AND RESULTS: A 24-year-old healthy woman with Fitzpatrick type II skin presented to our department to participate in a clinical study. As part of the study, the subject underwent visible light irradiation with an LED and a halogen incandescent visible light source. After 5 minutes of exposure, the patient complained of appreciable pain at the LED-exposed site. Evaluation demonstrated erythema and mild induration. There were no subjective or objective findings at the halogen incandescent irradiated site, which received equivalent fluence (0.55 W/cm²). The study was halted as the subject was unable to tolerate the full duration of visible light irradiation.
CONCLUSION: This case illustrates the importance of recognizing the effects of visible light on skin. While the vast majority of investigational research has focused on ultraviolet light, the effects of visible light have been largely overlooked and must be taken into consideration, in all Fitzpatrick skin types.
J Drugs Dermatol. 2017;16(4):388-392.
Irradiation of skin with visible light induces reactive oxygen species and matrix-degrading enzymes.
Liebel, Frank; Kaur, Simarna; Ruvolo, Eduardo; Kollias, Nikiforos; Southall, Michael D
2012-07-01
Daily skin exposure to solar radiation causes cells to produce reactive oxygen species (ROS), which are a primary factor in skin damage. Although the contribution of the UV component to skin damage has been established, few studies have examined the effects of non-UV solar radiation on skin physiology. UV makes up less than 10% of solar radiation, and thus the purpose of this study was to examine the physiological response of skin to visible light (400-700 nm). Irradiation of human skin equivalents with visible light induced the production of ROS, proinflammatory cytokines, and matrix metalloproteinase (MMP)-1 expression. Commercially available sunscreens were found to have minimal effect on reducing visible light-induced ROS, suggesting that UVA/UVB sunscreens do not protect the skin from visible light-induced responses. Using clinical models to assess the generation of free radicals from oxidative stress, higher levels of free radical activity were found after visible light exposure. Pretreatment with a photostable UVA/UVB sunscreen containing an antioxidant combination significantly reduced the production of ROS, cytokines, and MMP expression in vitro, and decreased oxidative stress in human subjects after visible light irradiation. Taken together, these findings suggest that portions of the solar spectrum aside from UV, particularly visible light, may also contribute to signs of premature photoaging in skin.
2007-01-16
Both luminous and translucent, the C ring sweeps out of the darkness of Saturn's shadow and obscures the planet at lower left. The ring is characterized by broad, isolated bright areas, or "plateaus," surrounded by fainter material. This view looks toward the unlit side of the rings from about 19 degrees above the ringplane. North on Saturn is up. The dark, inner B ring is seen at lower right. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 15, 2006 at a distance of approximately 632,000 kilometers (393,000 miles) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 56 degrees. Image scale is 34 kilometers (21 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA08855
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
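To make the fusion rule concrete, here is a toy two-image sketch in Python using PyWavelets: the low-pass bands are averaged and each high-pass coefficient is chosen by larger 3×3 regional variance. Note the paper uses a lifting wavelet and robust PCA / matrix completion on the low-frequency components; the plain DWT and simple averaging here are simplifying substitutions:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(img_a, img_b, wavelet="haar"):
    """Toy fusion of two registered grayscale images."""
    ca, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    cb, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)

    def pick(x, y):
        # Keep the coefficient whose 3x3 regional variance is larger
        var_x = uniform_filter(x**2, 3) - uniform_filter(x, 3) ** 2
        var_y = uniform_filter(y**2, 3) - uniform_filter(y, 3) ** 2
        return np.where(var_x >= var_y, x, y)

    fused = ((ca + cb) / 2,
             (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused, wavelet)

out = fuse(np.random.rand(64, 64), np.random.rand(64, 64))
print(out.shape)  # (64, 64)
```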
NASA Technical Reports Server (NTRS)
1987-01-01
Remote sensing is the process of acquiring physical information from a distance, obtaining data on Earth features from a satellite or an airplane. Advanced remote sensing instruments detect radiation not visible to the ordinary camera or the human eye in several bands of the spectrum. These data are computer processed to produce multispectral images that can provide enormous amounts of information about Earth objects or phenomena. Since every object on Earth emits or reflects radiation in its own unique signature, remote sensing data can be interpreted to tell the difference between one type of vegetation and another, between densely populated urban areas and lightly populated farmland, between clear and polluted water, or, in archeological applications, between rain forest and hidden man-made structures.
Prototype high resolution multienergy soft x-ray array for NSTX.
Tritz, K; Stutman, D; Delgado-Aparicio, L; Finkenthal, M; Kaita, R; Roquemore, L
2010-10-01
A novel diagnostic design seeks to enhance the capability of multienergy soft x-ray (SXR) detection by using an image intensifier to amplify the signals from a larger set of filtered x-ray profiles. The increased number of profiles and the simplified detection system provide a compact diagnostic device for measuring Te in addition to contributions from density and impurities. A single-energy prototype system has been implemented on NSTX, comprising a filtered x-ray pinhole camera that converts the x-rays to visible light using a CsI:Tl phosphor. SXR profiles have been measured in high-performance plasmas at frame rates of up to 10 kHz, and comparisons to the toroidally displaced tangential multi-energy SXR have been made.
NASA Astrophysics Data System (ADS)
Koudelka, Petr; Hanulak, Patrik; Jaros, Jakub; Papes, Martin; Latal, Jan; Siska, Petr; Vasinek, Vladimir
2015-07-01
This paper discusses the implementation of a light-emitting-diode-based visible light communication system for optical vehicle-to-vehicle (V2V) communications in road safety applications. The widespread use of LEDs as light sources has reached into automotive fields; for example, LEDs are used for taillights, daytime running lights, brake lights, headlights, and traffic signals. Future optical vehicle-to-vehicle (V2V) communications will be based on an optical wireless communication technology that uses an LED transmitter and a camera receiver (OCI; optical communication image sensor). The utilization of optical V2V communication systems in the automotive industry naturally brings many problems. Among them is the necessity of implementing circuits that allow LED modulation within current concepts of electronic LED light control; these circuits are quite complicated, especially in luxury cars. Another problem is the correct design of the modulation circuits, so that vehicle lighting using optical V2V communication still meets the standard requirements on Photometric Quantities and Beam Homogeneity. The authors of this article investigated the optical V2V communication possibilities of a headlight (Jaguar) and a taillight (Skoda) in terms of implementing modulation circuits (M-PSK, M-QAM) in the lamp concepts and of the final fulfilment of the mandatory standards on Photometric Quantities and Beam Homogeneity.
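Since an LED can only emit non-negative intensity, phase modulation in VLC is usually carried on a DC-biased subcarrier. The sketch below generates such an M-PSK drive waveform in Python; the subcarrier frequency, rates, bias, and modulation depth are illustrative assumptions, as the abstract does not give circuit parameters:

```python
import numpy as np

def led_mpsk_waveform(bits, m=4, f_sub=1e5, f_samp=1e6,
                      sym_rate=1e4, dc=0.6, depth=0.3):
    """DC-biased subcarrier M-PSK drive signal for an LED lamp.

    The subcarrier scheme and all rates here are assumptions chosen so
    the drive stays positive (an LED cannot emit 'negative' light)."""
    k = int(np.log2(m))
    groups = np.asarray(bits).reshape(-1, k)
    symbols = groups.dot(1 << np.arange(k)[::-1])   # bit groups -> symbol index
    phases = 2 * np.pi * symbols / m
    sps = int(f_samp / sym_rate)                    # samples per symbol
    t = np.arange(len(symbols) * sps) / f_samp
    phase_per_sample = np.repeat(phases, sps)
    return dc + depth * np.cos(2 * np.pi * f_sub * t + phase_per_sample)

w = led_mpsk_waveform([0, 1, 1, 0, 1, 1, 0, 0])
assert w.min() > 0  # drive level never goes negative
```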
International Space Station Data Collection for Disaster Response
NASA Technical Reports Server (NTRS)
Stefanov, William L.; Evans, Cynthia A.
2015-01-01
Remotely sensed data acquired by orbital sensor systems has emerged as a vital tool to identify the extent of damage resulting from a natural disaster, as well as providing near-real time mapping support to response efforts on the ground and humanitarian aid efforts. The International Space Station (ISS) is a unique terrestrial remote sensing platform for acquiring disaster response imagery. Unlike automated remote-sensing platforms it has a human crew; is equipped with both internal and externally-mounted remote sensing instruments; and has an inclined, low-Earth orbit that provides variable views and lighting (day and night) over 95 percent of the inhabited surface of the Earth. As such, it provides a useful complement to autonomous sensor systems in higher altitude polar orbits. NASA remote sensing assets on the station began collecting International Disaster Charter (IDC) response data in May 2012. The initial NASA ISS sensor systems responding to IDC activations included the ISS Agricultural Camera (ISSAC), mounted in the Window Observational Research Facility (WORF); the Crew Earth Observations (CEO) Facility, where the crew collects imagery using off-the-shelf handheld digital cameras; and the Hyperspectral Imager for the Coastal Ocean (HICO), a visible to near-infrared system mounted externally on the Japan Experiment Module Exposed Facility. The ISSAC completed its primary mission in January 2013. It was replaced by the very high resolution ISS SERVIR Environmental Research and Visualization System (ISERV) Pathfinder, a visible-wavelength digital camera, telescope, and pointing system. Since the start of IDC response in 2012 there have been 108 IDC activations; NASA sensor systems have collected data for thirty-two of these events. Of the successful data collections, eight involved two or more ISS sensor systems responding to the same event. Data has also been collected by International Partners in response to natural disasters, most notably JAXA and Roscosmos/Energia through the Urugan program.
Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision
NASA Astrophysics Data System (ADS)
Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.
1995-06-01
Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation into an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an "ultra-compact" 220 GHz imaging system, will be discussed.
A new apparatus of infrared videopupillography for monitoring pupil size
NASA Astrophysics Data System (ADS)
Ko, M.-L.; Huang, T.-W.; Chen, Y.-Y.; Sone, B.-S.; Huang, Y.-C.; Jeng, W.-D.; Chen, Y.-T.; Hsieh, Y.-F.; Tao, K.-H.; Li, S.-T.; Ou-Yang, M.; Chiou, J.-C.
2013-09-01
Glaucoma is generally diagnosed or tracked via intraocular pressure (IOP), because IOP is one of the physiological parameters associated with glaucoma. However, IOP measurement is neither easy nor consistent under different measurement conditions. An infrared videopupillography is an apparatus for monitoring pupil size in an attempt to bypass direct IOP measurement. This paper proposes an infrared videopupillography for monitoring pupil size under different light stimuli in a dark room. The portable infrared videopupillography contains a camera, a beam splitter, visible-light LEDs for stimulating the eyes, and infrared LEDs for illuminating the eyes. It is lighter and smaller than existing products, can be adjusted for the different locations of different eyes, and can be mounted on any eyeglass frame. An analysis program evaluates the pupil diameter by image correlation. In our experiments, the pupil diameter curves were not smooth but jagged, caused by LED light spots, stray eyelashes, and blinking. In future work, we will improve the pupil size analysis program and seek an approach to eliminate the LED light spots. We hope the infrared videopupillography proposed in this paper can serve as a measurement platform to explore the relations between different diseases and the pupil response.
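For orientation, a common way to extract pupil diameter from a single IR eye frame is thresholding plus a circle fit, as in the OpenCV sketch below. This is an illustrative pipeline, not the authors' correlation-based method; the threshold value is a guess, and specular LED spots and eyelashes would need masking in practice, as the abstract notes:

```python
import cv2
import numpy as np

def pupil_diameter(gray):
    """Estimate pupil diameter (pixels) from one IR eye image.

    Under IR illumination the pupil is typically the darkest blob, so
    threshold, take the largest contour, and fit an enclosing circle."""
    blur = cv2.GaussianBlur(gray, (7, 7), 0)
    _, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (_, _), radius = cv2.minEnclosingCircle(largest)
    return 2.0 * radius
```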
Fusion of thermal- and visible-band video for abandoned object detection
NASA Astrophysics Data System (ADS)
Beyan, Cigdem; Yigit, Ahmet; Temizel, Alptekin
2011-07-01
Timely detection of packages that are left unattended in public spaces is a security concern, and rapid detection is important for the prevention of potential threats. Because constant surveillance of such places is challenging and labor intensive, automated abandoned-object-detection systems that aid operators have come into wide use. In many studies, stationary objects, such as people sitting on a bench, are also flagged as suspicious, because abandoned items are defined as items newly added to the scene that remain stationary for a predefined time. Therefore, any stationary object raises an alarm, causing a high number of false alarms. These false alarms could be prevented by classifying suspicious items as living or nonliving objects. In this study, a system for abandoned object detection that aids operators surveilling indoor environments, such as airports and railway or metro stations, is proposed. By analyzing information from a thermal- and a visible-band camera, people and the objects left behind can be detected and discriminated as living or nonliving, reducing the false-alarm rate. Experiments demonstrate that using data obtained from a thermal camera in addition to a visible-band camera also increases the true detection rate of abandoned objects.
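The thermal cue for the living/nonliving split can be illustrated with a trivial rule: a stationary region whose apparent temperature sits in the human body-temperature band is likely a person. The band, the radiometric calibration, and the median statistic below are assumptions; the paper's classifier fuses thermal and visible cues rather than using a fixed threshold:

```python
import numpy as np

def classify_stationary(thermal_roi, body_band=(303.0, 311.0)):
    """Label a stationary region 'living' if its median apparent
    temperature (kelvin) falls in an assumed body-temperature band."""
    t = float(np.median(thermal_roi))
    return "living" if body_band[0] <= t <= body_band[1] else "nonliving"

print(classify_stationary(np.full((20, 20), 307.0)))  # living
print(classify_stationary(np.full((20, 20), 296.0)))  # nonliving
```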