Sample records for visible imaging system

  1. Multi-channel medical imaging system

    DOEpatents

    Frangioni, John V

    2013-12-31

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in the subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  2. Multi-channel medical imaging system

    DOEpatents

    Frangioni, John V.

    2016-05-03

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  3. Device for wavelength-selective imaging

    DOEpatents

    Frangioni, John V.

    2010-09-14

    An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.

  4. Medical imaging systems

    DOEpatents

    Frangioni, John V. [Wayland, MA]

    2012-07-24

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remains in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may also employ dyes or other fluorescent substances associated with antibodies, antibody fragments, or ligands that accumulate within a region of diagnostic significance. In one embodiment, the system provides an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide that is used to capture images. In another embodiment, the system is configured for use in open surgical procedures by providing an operating area that is closed to ambient light. More broadly, the systems described herein may be used in imaging applications where a visible light image may be usefully supplemented by an image formed from fluorescent emissions from a fluorescent substance that marks areas of functional interest.

  5. Automated Visibility & Cloud Cover Measurements with a Solid State Imaging System

    DTIC Science & Technology

    1989-03-01

    GL-TR-89-0061; SIO Ref. 89-7; MPL-U-26/89. Automated Visibility & Cloud Cover Measurements With A Solid State Imaging System. Personal author(s): Richard W. Johnson; W. S... The scanned report abstract discusses ground-based imaging systems, their control algorithms, their initial deployment, and the preliminary application of the system.

  6. An Automated Self-Learning Quantification System to Identify Visible Areas in Capsule Endoscopy Images.

    PubMed

    Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-08-01

    Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide a training image; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. We compared the rate of detection of visible areas between a supervised learning program, which used training images labeled by a physician, and our novel automated self-learning program, which used unlabeled training images without physician intervention. The rate of detection of visible areas was equivalent for the two programs, and the visible areas automatically identified by the self-learning program correlated with the areas identified by an experienced physician. We thus developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.

  7. A feasibility study of an integrated NIR/gamma/visible imaging system for endoscopic sentinel lymph node mapping.

    PubMed

    Kang, Han Gyu; Lee, Ho-Young; Kim, Kyeong Min; Song, Seong-Hyun; Hong, Gun Chul; Hong, Seong Jong

    2017-01-01

    The aim of this study is to integrate NIR, gamma, and visible imaging tools into a single endoscopic system to overcome the limitation of NIR using gamma imaging and to demonstrate the feasibility of endoscopic NIR/gamma/visible fusion imaging for sentinel lymph node (SLN) mapping with a small animal. The endoscopic NIR/gamma/visible imaging system consists of a tungsten pinhole collimator, a plastic focusing lens, a BGO crystal (11 × 11 × 2 mm 3 ), a fiber-optic taper (front = 11 × 11 mm 2 , end = 4 × 4 mm 2 ), a 122-cm long endoscopic fiber bundle, an NIR emission filter, a relay lens, and a CCD camera. A custom-made Derenzo-like phantom filled with a mixture of 99m Tc and indocyanine green (ICG) was used to assess the spatial resolution of the NIR and gamma images. The ICG fluorophore was excited using a light-emitting diode (LED) with an excitation filter (723-758 nm), and the emitted fluorescence photons were detected with an emission filter (780-820 nm) for a duration of 100 ms. Subsequently, the 99m Tc distribution in the phantom was imaged for 3 min. The feasibility of in vivo SLN mapping with a mouse was investigated by injecting a mixture of 99m Tc-antimony sulfur colloid (12 MBq) and ICG (0.1 mL) into the right paw of the mouse (C57/B6) subcutaneously. After one hour, NIR, gamma, and visible images were acquired sequentially. Subsequently, the dissected SLN was imaged in the same way as the in vivo SLN mapping. The NIR, gamma, and visible images of the Derenzo-like phantom can be obtained with the proposed endoscopic imaging system. The NIR/gamma/visible fusion image of the SLN showed a good correlation among the NIR, gamma, and visible images both for the in vivo and ex vivo imaging. We demonstrated the feasibility of the integrated NIR/gamma/visible imaging system using a single endoscopic fiber bundle. In the future, we plan to investigate miniaturization of the endoscope head and simultaneous NIR/gamma/visible imaging with dichroic mirrors and three CCD cameras. © 2016 American Association of Physicists in Medicine.

  8. Systems and Methods for Automated Water Detection Using Visible Sensors

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L. (Inventor); Matthies, Larry H. (Inventor); Bellutta, Paolo (Inventor)

    2016-01-01

    Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.

  9. Optimal design of an earth observation optical system with dual spectral and high resolution

    NASA Astrophysics Data System (ADS)

    Yan, Pei-pei; Jiang, Kai; Liu, Kai; Duan, Jing; Shan, Qiusha

    2017-02-01

    With the increasing demand for high-resolution remote sensing images from military and civilian users, countries around the world see great promise in ever higher resolution imagery, and an integrated visible/infrared optical system has important value for earth observation. Because a visible-only system cannot identify camouflage or perform reconnaissance at night, a visible camera should be combined with an infrared camera. An earth observation optical system with dual spectral bands and high resolution is designed. The paper focuses on the integrated design of the visible and infrared optics, which makes the system lighter and smaller and achieves one satellite with two uses. The working waveband of the system covers the visible and the middle infrared (3-5 μm). Clear imaging in both wavebands is achieved with a dispersive RC system. The focal length of the visible system is 3056 mm with F/# of 10.91, and the focal length of the middle-infrared system is 1120 mm with F/# of 4. In order to suppress middle-infrared thermal radiation and stray light, a second imaging stage is used and the narcissus phenomenon is analyzed. The system is characterized by a simple structure, and the special requirements on Modulation Transfer Function (MTF), spot size, energy concentration, distortion, etc. are all satisfied.

  10. Clinical comparative study with a large-area amorphous silicon flat-panel detector: image quality and visibility of anatomic structures on chest radiography.

    PubMed

    Fink, Christian; Hallscheidt, Peter J; Noeldge, Gerd; Kampschulte, Annette; Radeleff, Boris; Hosch, Waldemar P; Kauffmann, Günter W; Hansmann, Jochen

    2002-02-01

    The objective of this study was to compare clinical chest radiographs of a large-area, flat-panel digital radiography system and a conventional film-screen radiography system. The comparison was based on an observer preference study of image quality and visibility of anatomic structures. Routine follow-up chest radiographs were obtained from 100 consecutive oncology patients using a large-area, amorphous silicon flat-panel detector digital radiography system (dose equivalent to a 400-speed film system). Hard-copy images were compared with previous examinations of the same individuals taken on a conventional film-screen system (200-speed). Patients were excluded if changes in the chest anatomy were detected or if the time interval between the examinations exceeded 1 year. Observer preference was evaluated for the image quality and the visibility of 15 anatomic structures using a five-point scale. Dose measurements with a chest phantom showed a dose reduction of approximately 50% with the digital radiography system compared with the film-screen radiography system. The image quality and the visibility of all but one anatomic structure of the images obtained with the digital flat-panel detector system were rated significantly superior (p < or = 0.0003) to those obtained with the conventional film-screen radiography system. The image quality and visibility of anatomic structures on the images obtained by the flat-panel detector system were perceived as equal or superior to the images from conventional film-screen chest radiography. This was true even though the radiation dose was reduced approximately 50% with the digital flat-panel detector system.

  11. Visible-to-visible four-photon ultrahigh resolution microscopic imaging with 730-nm diode laser excited nanocrystals.

    PubMed

    Wang, Baoju; Zhan, Qiuqiang; Zhao, Yuxiang; Wu, Ruitao; Liu, Jing; He, Sailing

    2016-01-25

    Further development of multiphoton microscopic imaging is confronted with a number of limitations, including high cost, high complexity, and relatively low spatial resolution due to the long excitation wavelength. To overcome these problems, for the first time, we propose visible-to-visible four-photon ultrahigh resolution microscopic imaging by using a common cost-effective 730-nm laser diode to excite the prepared Nd(3+)-sensitized upconversion nanoparticles (Nd(3+)-UCNPs). An ordinary multiphoton scanning microscope system was built using a visible CW diode laser, and a lateral imaging resolution as high as 161 nm was achieved via the four-photon upconversion process. The demonstrated large saturation excitation power of Nd(3+)-UCNPs makes four-photon imaging more practical in applications. A sample with fine structure was imaged to demonstrate the advantages of visible-to-visible four-photon ultrahigh resolution microscopic imaging with 730-nm diode laser excited nanocrystals. Combining the uniqueness of UCNPs, the proposed visible-to-visible four-photon imaging would be highly promising and attractive in the field of multiphoton imaging.

  12. Ultrahigh resolution retinal imaging by visible light OCT with longitudinal achromatization

    PubMed Central

    Chong, Shau Poh; Zhang, Tingwei; Kho, Aaron; Bernucci, Marcel T.; Dubra, Alfredo; Srinivasan, Vivek J.

    2018-01-01

    Chromatic aberrations are an important design consideration in high resolution, high bandwidth, refractive imaging systems that use visible light. Here, we present a fiber-based spectral/Fourier domain, visible light OCT ophthalmoscope corrected for the average longitudinal chromatic aberration (LCA) of the human eye. Analysis of complex speckles from in vivo retinal images showed that achromatization resulted in a speckle autocorrelation function that was ~20% narrower in the axial direction, but unchanged in the transverse direction. In images from the improved, achromatized system, the separation between Bruch’s membrane (BM), the retinal pigment epithelium (RPE), and the outer segment tips clearly emerged across the entire 6.5 mm field-of-view, enabling segmentation and morphometry of BM and the RPE in a human subject. Finally, cross-sectional images depicted distinct inner retinal layers with high resolution. Thus, with chromatic aberration compensation, visible light OCT can achieve volume resolutions and retinal image quality that matches or exceeds ultrahigh resolution near-infrared OCT systems with no monochromatic aberration compensation. PMID:29675296

  13. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    Video image fusion uses technical means to make video obtained by different image sensors complement each other, yielding video that is rich in information and well suited to the human visual system. Infrared cameras penetrate harsh environments such as smoke, fog, and low light, but capture image detail poorly and do not match the human visual system. Visible-light imaging alone yields detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. Fusing infrared and visible video involves fusion algorithms of high complexity and computational cost that occupy substantial memory and demand high clock rates; most existing implementations are in software (e.g., C or C++), with few on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: registration parameters are obtained in software with MATLAB, and gray-level weighted average fusion is implemented on an FPGA hardware platform. The resulting fused image effectively increases the amount of information acquired from the scene.
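    The gray-level weighted average fusion named in this abstract can be sketched in a few lines (a minimal NumPy illustration of the arithmetic, not the paper's FPGA implementation; the function name and the `alpha` weight are illustrative):

```python
import numpy as np

def fuse_weighted(visible, infrared, alpha=0.5):
    """Pixel-wise gray-level weighted average of two registered
    single-channel images; alpha weights the visible image."""
    v = visible.astype(np.float32)
    ir = infrared.astype(np.float32)
    fused = alpha * v + (1.0 - alpha) * ir
    # Clamp back to the 8-bit display range.
    return np.clip(fused, 0, 255).astype(np.uint8)

# Two registered 2x2 test frames.
vis = np.full((2, 2), 100, dtype=np.uint8)
ir = np.full((2, 2), 200, dtype=np.uint8)
print(fuse_weighted(vis, ir))  # every pixel is 150
```

    On an FPGA the same per-pixel multiply-accumulate maps naturally onto fixed-point DSP slices, which is why this fusion rule suits a hardware platform.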

  14. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics for images obtained in the visible spectrum using smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of the iris color and pigmentation. Are the images obtained from smartphone's camera of sufficient quality even for the dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  15. Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.

    1995-06-01

    Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg-cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an 'ultra-compact' 220 GHz imaging system will be discussed.

  16. In vivo high-resolution cortical imaging with extended-focus optical coherence microscopy in the visible-NIR wavelength range

    NASA Astrophysics Data System (ADS)

    Marchand, Paul J.; Szlag, Daniel; Bouwens, Arno; Lasser, Theo

    2018-03-01

    Visible light optical coherence tomography has attracted great interest in recent years for spectroscopic and high-resolution retinal and cerebral imaging. Here, we present an extended-focus optical coherence microscopy system operating from the visible to the near-infrared wavelength range for high axial and lateral resolution imaging of cortical structures in vivo. The system exploits an ultrabroad illumination spectrum centered in the visible wavelength range (λc = 650 nm, Δλ ˜ 250 nm) offering a submicron axial resolution (˜0.85 μm in water) and an extended-focus configuration providing a high lateral resolution of ˜1.4 μm maintained over ˜150 μm in depth in water. The system's axial and lateral resolution are first characterized using phantoms, and its imaging performance is then demonstrated by imaging the vasculature, myelinated axons, and neuronal cells in the first layers of the somatosensory cortex of mice in vivo.

  17. Automatic visibility retrieval from thermal camera images

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan

    2017-10-01

    This study presents an automatic visibility retrieval of a FLIR A320 Stationary Thermal Imager installed on a measurement tower on the mountain Lagern located in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected from thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal differences to the surroundings, are used to derive the maximum visibility distance that is detectable in the image. To allow a stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24000 thermal images of the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (Vaisala FD12 working in the NIR spectra), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from European Center for Medium Weather Forecast (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrieval from the systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and defined target size.
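    The target-region idea in this abstract, deriving the maximum detectable distance from edges found in predefined regions, can be sketched as follows (an illustrative reconstruction, not the authors' code; the region coordinates, distances, and gradient threshold are assumed):

```python
import numpy as np

def visibility_from_edges(image, targets, edge_thresh=10.0):
    """targets: iterable of (distance_km, (row_slice, col_slice)).
    Returns the distance of the farthest predefined target region
    (e.g. a mountain silhouette) that still shows gradient edges
    above edge_thresh, i.e. is still detectable in the image."""
    visible_dist = 0.0
    for dist_km, region in targets:
        patch = image[region].astype(np.float32)
        gy, gx = np.gradient(patch)          # per-axis intensity gradients
        if np.hypot(gx, gy).max() >= edge_thresh:
            visible_dist = max(visible_dist, dist_km)
    return visible_dist

# Synthetic frame: a sharp silhouette edge in the near region,
# a washed-out (uniform) far region, as on a hazy day.
frame = np.zeros((20, 40), dtype=np.float32)
frame[10:, :20] = 100.0                      # near ridge silhouette
targets = [(5.0, (np.s_[5:15], np.s_[0:20])),    # near ridge: edge present
           (30.0, (np.s_[5:15], np.s_[20:40]))]  # far ridge: washed out
print(visibility_from_edges(frame, targets))  # 5.0
```

    A production version would add the noise removal and image alignment steps the abstract mentions before the edge test.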

  18. Dual-energy digital mammography for calcification imaging: scatter and nonuniformity corrections.

    PubMed

    Kappadath, S Cheenu; Shaw, Chris C

    2005-11-01

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 microm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 microm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 microm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than approximately 250 microm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.
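    The combination of low- and high-energy images described above can be illustrated with the standard weighted log-subtraction used in dual-energy imaging (a simplified sketch under an idealized monoenergetic model; the attenuation coefficients and the weight `w` are hypothetical, and the paper's actual nonlinear mapping function may differ):

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) at the two energies.
MU_TISSUE_LO, MU_TISSUE_HI = 0.80, 0.40   # breast tissue
MU_CALC_LO, MU_CALC_HI = 2.00, 0.60       # calcification

def de_image(low, high, w):
    """Weighted log subtraction: with w = MU_TISSUE_HI / MU_TISSUE_LO,
    tissue-thickness variations cancel and calcifications remain."""
    return np.log(high) - w * np.log(low)

# Phantom: tissue thickness varies 4-6 cm; one pixel adds 0.05 cm of calc.
tissue = np.array([[4.0, 5.0], [6.0, 5.0]])
calc = np.array([[0.0, 0.0], [0.0, 0.05]])
low = np.exp(-(MU_TISSUE_LO * tissue + MU_CALC_LO * calc))
high = np.exp(-(MU_TISSUE_HI * tissue + MU_CALC_HI * calc))

w = MU_TISSUE_HI / MU_TISSUE_LO
de = de_image(low, high, w)
# Tissue background cancels; only the calcification pixel deviates.
```

    The scatter and nonuniformity corrections the paper studies matter precisely because the cancellation above assumes the recorded intensities follow this exponential model exactly.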

  19. Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kappadath, S. Cheenu; Shaw, Chris C.

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than ~250 μm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.

  20. Visualization of Penile Suspensory Ligamentous System Based on Visible Human Data Sets

    PubMed Central

    Chen, Xianzhuo; Wu, Yi; Tao, Ling; Yan, Yan; Pang, Jun; Zhang, Shaoxiang; Li, Shirong

    2017-01-01

    Background The aim of this study was to use a three-dimensional (3D) visualization technology to illustrate and describe the anatomical features of the penile suspensory ligamentous system based on the Visible Human data sets and to explore the suspensory mechanism of the penis for the further improvement of the penis-lengthening surgery. Material/Methods Cross-sectional images retrieved from the first Chinese Visible Human (CVH-1), third Chinese Visible Human (CVH-3), and Visible Human Male (VHM) data sets were used to segment the suspensory ligamentous system and its adjacent structures. The magnetic resonance imaging (MRI) images of this system were studied and compared with those from the Visible Human data sets. The 3D models reconstructed from the Visible Human data sets were used to provide morphological features of the penile suspensory ligamentous system and its related structures. Results The fundiform ligament was a superficial, loose, fibro-fatty tissue which originated from Scarpa’s fascia superiorly and continued to the scrotal septum inferiorly. The suspensory ligament and arcuate pubic ligament were dense fibrous connective tissues which started from the pubic symphysis and terminated by attaching to the tunica albuginea of the corpora cavernosa. Furthermore, the arcuate pubic ligament attached to the inferior rami of the pubis laterally. Conclusions The 3D model based on Visible Human data sets can be used to clarify the anatomical features of the suspensory ligamentous system, thereby contributing to the improvement of penis-lengthening surgery. PMID:28530218

  21. Visualization of Penile Suspensory Ligamentous System Based on Visible Human Data Sets.

    PubMed

    Chen, Xianzhuo; Wu, Yi; Tao, Ling; Yan, Yan; Pang, Jun; Zhang, Shaoxiang; Li, Shirong

    2017-05-22

    BACKGROUND The aim of this study was to use a three-dimensional (3D) visualization technology to illustrate and describe the anatomical features of the penile suspensory ligamentous system based on the Visible Human data sets and to explore the suspensory mechanism of the penis for the further improvement of the penis-lengthening surgery. MATERIAL AND METHODS Cross-sectional images retrieved from the first Chinese Visible Human (CVH-1), third Chinese Visible Human (CVH-3), and Visible Human Male (VHM) data sets were used to segment the suspensory ligamentous system and its adjacent structures. The magnetic resonance imaging (MRI) images of this system were studied and compared with those from the Visible Human data sets. The 3D models reconstructed from the Visible Human data sets were used to provide morphological features of the penile suspensory ligamentous system and its related structures. RESULTS The fundiform ligament was a superficial, loose, fibro-fatty tissue which originated from Scarpa's fascia superiorly and continued to the scrotal septum inferiorly. The suspensory ligament and arcuate pubic ligament were dense fibrous connective tissues which started from the pubic symphysis and terminated by attaching to the tunica albuginea of the corpora cavernosa. Furthermore, the arcuate pubic ligament attached to the inferior rami of the pubis laterally. CONCLUSIONS The 3D model based on Visible Human data sets can be used to clarify the anatomical features of the suspensory ligamentous system, thereby contributing to the improvement of penis-lengthening surgery.

  2. Binary-space-partitioned images for resolving image-based visibility.

    PubMed

    Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J

    2004-01-01

    We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.
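    The visibility-sorting idea behind the BSPI can be illustrated with an ordinary BSP tree: partition lines split space, and a viewpoint-dependent traversal emits primitives back-to-front. A minimal sketch (the tree, the line representation, and the item labels are illustrative, not the paper's BSPI encoding):

    ```python
    def side(line, point):
        # Signed side of `point` relative to the partition line (a, b, c): ax + by + c = 0.
        a, b, c = line
        return a * point[0] + b * point[1] + c

    class Node:
        def __init__(self, line, item, front=None, back=None):
            self.line, self.item = line, item
            self.front, self.back = front, back

    def back_to_front(node, eye, out):
        """Painter's-algorithm traversal: visit the subtree farther from the eye
        first, then the node itself, then the nearer subtree."""
        if node is None:
            return out
        if side(node.line, eye) >= 0:      # eye on front side -> back subtree first
            back_to_front(node.back, eye, out)
            out.append(node.item)
            back_to_front(node.front, eye, out)
        else:                              # eye on back side -> front subtree first
            back_to_front(node.front, eye, out)
            out.append(node.item)
            back_to_front(node.back, eye, out)
        return out

    # Two partition lines: x = 0 at the root, y = 0 in each child.
    tree = Node((1, 0, 0), "root",
                front=Node((0, 1, 0), "front-child"),
                back=Node((0, 1, 0), "back-child"))
    order = back_to_front(tree, eye=(5, 5), out=[])
    ```

    Because the traversal order depends only on which side of each partition line the eye falls, the same tree yields a correct drawing order for any novel viewpoint.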

  3. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  4. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.

  5. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (Visible/Infrared Sensor Trades, Analyses, and Simulations), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems, including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors, which can be used for either imaging or point source detection.

  6. Phase Curves of Nix and Hydra from the New Horizons Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Verbiscer, Anne J.; Porter, Simon B.; Buratti, Bonnie J.; Weaver, Harold A.; Spencer, John R.; Showalter, Mark R.; Buie, Marc W.; Hofgartner, Jason D.; Hicks, Michael D.; Ennico-Smith, Kimberly; Olkin, Catherine B.; Stern, S. Alan; Young, Leslie A.; Cheng, Andrew; The New Horizons Team

    2018-01-01

    NASA’s New Horizons spacecraft’s voyage through the Pluto system centered on 2015 July 14 provided images of Pluto’s small satellites Nix and Hydra at viewing angles unattainable from Earth. Here, we present solar phase curves of the two largest of Pluto’s small moons, Nix and Hydra, observed by the New Horizons LOng Range Reconnaissance Imager and Multi-spectral Visible Imaging Camera, which reveal the scattering properties of their icy surfaces in visible light. Construction of these solar phase curves enables comparisons between the photometric properties of Pluto’s small moons and those of other icy satellites in the outer solar system. Nix and Hydra have higher visible albedos than those of other resonant Kuiper Belt objects and irregular satellites of the giant planets, but not as high as small satellites of Saturn interior to Titan. Both Nix and Hydra appear to scatter visible light preferentially in the forward direction, unlike most icy satellites in the outer solar system, which are typically backscattering.

  7. Superpixel segmentation and pigment identification of colored relics based on visible spectral image.

    PubMed

    Li, Junfeng; Wan, Xiaoxia

    2018-01-15

    To enrich the contents of digital archives and to guide the copying and restoration of colored relics, non-invasive methods for extraction of painting boundaries and identification of pigment composition are proposed in this study based on the visible spectral images of colored relics. The superpixel concept is applied for the first time to the oversegmentation of visible spectral images and implemented on the visible spectral images of colored relics to extract their painting boundaries. Since different pigments are characterized by their own spectra and the same kind of pigment has a similar geometric profile in spectrum, an automatic identification method is established by comparing the proximity between the geometric profile of the unknown spectrum from each superpixel and the pre-known spectra from a deliberately prepared database. The methods are validated using the visible spectral images of the ancient wall paintings in the Mogao Grottoes. The visible spectral images are captured by a multispectral imaging system consisting of two broadband filters and an RGB camera with high spatial resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
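    The profile-proximity matching described above is commonly realized with a spectral-angle measure, which ignores overall brightness scaling and compares only spectral shape. A minimal sketch (the pigment names and reference spectra are made up for illustration; the paper's actual proximity measure may differ):

    ```python
    import numpy as np

    def spectral_angle(s, r):
        # Angle between two spectra: invariant to brightness scaling,
        # sensitive only to the geometric profile of the curve.
        s, r = np.asarray(s, float), np.asarray(r, float)
        cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def identify(superpixel_spectrum, database):
        # Return the pigment whose reference spectrum is closest in angle
        # to the mean spectrum of the superpixel.
        return min(database, key=lambda name: spectral_angle(superpixel_spectrum,
                                                             database[name]))

    # Hypothetical 4-band reference spectra.
    db = {"azurite": [0.1, 0.2, 0.6, 0.7], "cinnabar": [0.7, 0.6, 0.2, 0.1]}
    measured = [0.2, 0.4, 1.2, 1.4]   # same profile as azurite, twice as bright
    best = identify(measured, db)
    ```

    Because the angle is scale-invariant, a superpixel that is simply better lit than the reference still matches the correct pigment.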

  8. A RONI Based Visible Watermarking Approach for Medical Image Authentication.

    PubMed

    Thanki, Rohit; Borra, Surekha; Dwivedi, Vedvyas; Borisagar, Komal

    2017-08-09

    Nowadays medical data in terms of image files are often exchanged between different hospitals for use in telemedicine and diagnosis. Visible watermarking, being extensively used for Intellectual Property identification of such medical images, leads to serious issues if it fails to identify proper regions for watermark insertion. In this paper, Region of Non-Interest (RONI) based visible watermarking for medical image authentication is proposed. In this technique, the RONI of the cover medical image is first identified using a Human Visual System (HVS) model. Later, the watermark logo is visibly inserted into the RONI of the cover medical image to get the watermarked medical image. Finally, the watermarked medical image is compared with the original medical image for measurement of the imperceptibility and authenticity of the proposed scheme. The experimental results showed that the proposed scheme reduces the computational complexity and improves the PSNR when compared to many existing schemes.
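    The visible-insertion step of such a scheme amounts to alpha-blending the logo into the chosen region. A minimal sketch of that step alone, assuming the RONI has already been located (the HVS-based RONI detection itself is not reproduced here, and the coordinates and alpha are illustrative):

    ```python
    import numpy as np

    def embed_visible(cover, logo, top, left, alpha=0.3):
        """Alpha-blend a watermark logo into a rectangular region (the RONI)
        of a grayscale cover image. alpha controls watermark strength."""
        out = cover.astype(float).copy()
        h, w = logo.shape
        region = out[top:top + h, left:left + w]
        out[top:top + h, left:left + w] = (1 - alpha) * region + alpha * logo
        return out

    cover = np.full((8, 8), 100.0)   # hypothetical uniform cover image
    logo = np.full((4, 4), 200.0)    # hypothetical bright logo patch
    wm = embed_visible(cover, logo, top=0, left=0, alpha=0.25)
    ```

    Pixels outside the blended rectangle are untouched, which is exactly why RONI placement preserves the diagnostically relevant region of the image.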

  9. Cancer Cases from ACRIN Digital Mammographic Imaging Screening Trial: Radiologist Analysis with Use of a Logistic Regression Model

    PubMed Central

    Pisano, Etta D.; Acharyya, Suddhasatta; Cole, Elodia B.; Marques, Helga S.; Yaffe, Martin J.; Blevins, Meredith; Conant, Emily F.; Hendrick, R. Edward; Baum, Janet K.; Fajardo, Laurie L.; Jong, Roberta A.; Koomen, Marcia A.; Kuzmiak, Cherie M.; Lee, Yeonhee; Pavic, Dag; Yoon, Sora C.; Padungchaichote, Wittaya; Gatsonis, Constantine

    2009-01-01

    Purpose: To determine which factors contributed to the Digital Mammographic Imaging Screening Trial (DMIST) cancer detection results. Materials and Methods: This project was HIPAA compliant and institutional review board approved. Seven radiologist readers reviewed the film hard-copy (screen-film) and digital mammograms in DMIST cancer cases and assessed the factors that contributed to lesion visibility on both types of images. Two multinomial logistic regression models were used to analyze the combined and condensed visibility ratings assigned by the readers to the paired digital and screen-film images. Results: Readers most frequently attributed differences in DMIST cancer visibility to variations in image contrast—not differences in positioning or compression—between digital and screen-film mammography. The odds of a cancer being more visible on a digital mammogram—rather than being equally visible on digital and screen-film mammograms—were significantly greater for women with dense breasts than for women with nondense breasts, even with the data adjusted for patient age, lesion type, and mammography system (odds ratio, 2.28; P < .0001). The odds of a cancer being more visible at digital mammography—rather than being equally visible at digital and screen-film mammography—were significantly greater for lesions imaged with the General Electric digital mammography system than for lesions imaged with the Fischer (P = .0070) and Fuji (P = .0070) devices. Conclusion: The significantly better diagnostic accuracy of digital mammography, as compared with screen-film mammography, in women with dense breasts demonstrated in the DMIST was most likely attributable to differences in image contrast, which were most likely due to the inherent system performance improvements that are available with digital mammography. The authors conclude that the DMIST results were attributable primarily to differences in the display and acquisition characteristics of the mammography devices rather than to reader variability. PMID:19703878
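    The odds ratios reported above follow the standard logistic-regression identity: the exponential of a fitted coefficient is the multiplicative change in the odds per unit (or per category) change in the predictor. A minimal illustration, working backward from the reported ratio of 2.28 for dense versus nondense breasts:

    ```python
    import math

    def odds_ratio(beta):
        # In logistic regression, log-odds are linear in the predictors,
        # so exp(coefficient) is the odds ratio for a one-unit change.
        return math.exp(beta)

    # Hypothetical coefficient chosen to reproduce the reported OR of 2.28.
    beta_dense = math.log(2.28)
    ratio = odds_ratio(beta_dense)
    ```

    A coefficient of zero would correspond to an odds ratio of 1, i.e., no association between breast density and relative lesion visibility.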

  10. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
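    The adaptive-selection idea can be sketched with a toy fuzzy rule base: membership functions grade the inputs, and the rules vote for one camera or the other. This is only an illustration of the mechanism; the inputs, membership shapes, and rules of the paper's actual FIS are not reproduced here:

    ```python
    def triangular(x, a, b, c):
        # Triangular membership function, a common building block of a
        # fuzzy inference system; peaks at b, zero outside [a, c].
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def select_camera(visible_brightness, thermal_contrast):
        """Toy rule base (inputs normalized to [0, 1]): prefer the FIR image
        when the scene is dark, prefer the visible image when the thermal
        contrast between pedestrian and background is small."""
        dark = triangular(visible_brightness, 0.0, 0.0, 0.5)
        low_thermal_contrast = triangular(thermal_contrast, 0.0, 0.0, 0.5)
        fir_score = dark * (1.0 - low_thermal_contrast)
        vis_score = (1.0 - dark) * low_thermal_contrast
        return "FIR" if fir_score >= vis_score else "visible"
    ```

    At night with good thermal contrast the FIR candidate wins; on a bright, hot day where body and background temperatures converge, the visible candidate wins, and only the selected candidate would then be passed to the CNN for verification.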

  11. Multi-spectral imaging with infrared sensitive organic light emitting diode

    PubMed Central

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-01-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589

  12. Multi-spectral imaging with infrared sensitive organic light emitting diode

    NASA Astrophysics Data System (ADS)

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-08-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.

  13. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
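    The final matching step described above, measuring the distance between a probe's feature vector and the enrolled samples, can be sketched as follows (the tiny feature vectors stand in for real CNN features, and the identity names and threshold are illustrative):

    ```python
    import math

    def euclidean(u, v):
        # Euclidean distance between two equal-length feature vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    def verify(probe, enrolled, threshold):
        """Return the enrolled identity whose feature vector is nearest to the
        probe, provided the distance falls under the decision threshold;
        otherwise reject (return None)."""
        name, dist = min(((n, euclidean(probe, f)) for n, f in enrolled.items()),
                         key=lambda t: t[1])
        return name if dist <= threshold else None

    enrolled = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.8, 0.3]}
    probe = [0.85, 0.15, 0.05]
    ```

    The threshold trades false accepts against false rejects: tightening it turns near-matches into rejections, as the second assertion below shows.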

  14. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the lacking detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to each classifier for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially for the circumstance of small training samples, the recognition rate of the proposed method can reach 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
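    The LBP operator at the core of this method thresholds each pixel's 3x3 neighborhood against its center to produce an 8-bit code; histograms of these codes over image partitions form the feature. A minimal sketch of the per-pixel code (the bit ordering is one common convention, not necessarily the one used in the paper):

    ```python
    def lbp_code(patch):
        """8-bit local binary pattern code of a 3x3 patch: each neighbor is
        thresholded against the center pixel, reading clockwise from the
        top-left corner."""
        center = patch[1][1]
        order = [(0, 0), (0, 1), (0, 2), (1, 2),
                 (2, 2), (2, 1), (2, 0), (1, 0)]
        code = 0
        for bit, (r, c) in enumerate(order):
            if patch[r][c] >= center:
                code |= 1 << bit
        return code

    # Bright top row, dark elsewhere: only the first three neighbors fire.
    patch = [[9, 9, 9],
             [1, 5, 1],
             [1, 1, 1]]
    code = lbp_code(patch)
    ```

    Because the comparison is against the local center value, the code is invariant to monotonic illumination changes, which is what makes LBP histograms robust face descriptors.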

  15. Optical design and system calibration for three-band spectral imaging system with interchangeable filters

    USDA-ARS?s Scientific Manuscript database

    The design and calibration of a three-band image acquisition system was reported. The prototype system developed in this research was a three-band spectral imaging system that acquired two visible (510 and 568 nm) images and a near-infrared (NIR) (800 nm) image simultaneously. The system was proto...

  16. [Design and analysis of a novel light visible spectrum imaging spectrograph optical system].

    PubMed

    Shen, Man-de; Li, Fei; Zhou, Li-bing; Li, Cheng; Ren, Huan-huan; Jiang, Qing-xiu

    2015-02-01

    A novel visible spectrum imaging spectrograph optical system was proposed based on the negative dispersion and arbitrary phase modulation characteristics of diffractive optical elements and the aberration correction characteristics of freeform optical elements. The cemented doublet lens was substituted by a hybrid refractive/diffractive lens based on the negative dispersion of the diffractive optical element. Two freeform optical elements were used to correct certain aberrations. An example and a detailed design process were presented. With uniform design parameters, compared with the traditional system, the novel visible spectrum imaging spectrograph optical system's weight was reduced by 22.9%, the total length was reduced by 26.6%, the maximal diameter was reduced by 30.6%, and the modulation transfer function (MTF) at the 1.0 field-of-view was improved by 0.35, with the field-of-view maximally enlarged. The maximal distortion was reduced by 1.6%, the maximal longitudinal aberration was reduced by 56.4%, and the lateral color aberration was reduced by 59.3%. These data show that the performance of the novel system was substantially improved, offering a new approach to modern visible spectrum imaging spectrograph optical system design.

  17. Multispectral imaging with vertical silicon nanowires

    PubMed Central

    Park, Hyunsung; Crozier, Kenneth B.

    2013-01-01

    Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156

  18. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    NASA Astrophysics Data System (ADS)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  19. A Real-Time Ultraviolet Radiation Imaging System Using an Organic Photoconductive Image Sensor

    PubMed Central

    Okino, Toru; Yamahira, Seiji; Yamada, Shota; Hirose, Yutaka; Odagawa, Akihiro; Kato, Yoshihisa; Tanaka, Tsuyoshi

    2018-01-01

    We have developed a real-time ultraviolet (UV) imaging system that can visualize both invisible UV light and a visible (VIS) background scene in an outdoor environment. As a UV/VIS image sensor, an organic photoconductive film (OPF) imager is employed. The OPF has an intrinsically higher sensitivity in the UV wavelength region than conventional consumer Complementary Metal Oxide Semiconductor (CMOS) image sensors (CIS) or Charge Coupled Devices (CCD). As particular examples, imaging of a hydrogen flame and of corona discharge is demonstrated. UV images overlapped on background scenes are simply made by on-board background subtraction. The system is capable of imaging UV signals four orders of magnitude weaker than the VIS background. It is applicable not only to future hydrogen supply stations but also to other UV/VIS monitoring systems requiring UV sensitivity in strong visible radiation environments, such as power supply substations. PMID:29361742

  20. The research on a novel type of the solar-blind UV head-mounted displays

    NASA Astrophysics Data System (ADS)

    Zhao, Shun-long

    2011-08-01

    Ultraviolet detection technology is playing an increasingly important role in civil applications, especially corona discharge detection. The UV imaging detector is now one of the most important instruments for detecting power equipment flaws, and modern head-mounted displays (HMDs) have found applications in military, industrial, medical, entertainment, 3D visualization, education, and training settings. We applied the head-mounted display to UV image detection and present a novel type of HMD: the solar-blind UV head-mounted display, whose structure is given. With the solar-blind UV HMD, a real-time, isometric, visible image of the corona discharge is correctly displayed upon the background scene where it exists. The user sees the visible image of the corona discharge on the real scene rather than on a small screen, and can therefore easily find and repair the power equipment flaws. Compared with the traditional UV imaging detector, introducing the HMD simplifies the structure of the whole system: the original visible spectrum optical system is replaced by the eye, and optical image fusion replaces the digital image fusion system required in a traditional UV imaging detector. This makes the whole system cheaper than the traditional UV imaging detector. Another advantage of the solar-blind UV HMD is that the user's hands remain free, so the user can act on the corona discharge while observing it. The solar-blind UV head-mounted display thus reveals corona discharge to the user more effectively and will play an important role in corona detection in the future.

  1. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
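    Score-level fusion, as used in the final step above, typically combines per-class match scores from each modality with a weighted sum before taking the arg-max. A minimal sketch (the class labels, scores, and weight are illustrative, and real systems normalize scores to a common range first):

    ```python
    def fuse_scores(scores_a, scores_b, w=0.5):
        """Weighted-sum score-level fusion: combine per-class match scores
        from two modalities and return the winning class with the fused
        score table. Only classes scored by both modalities are fused."""
        classes = scores_a.keys() & scores_b.keys()
        fused = {c: w * scores_a[c] + (1 - w) * scores_b[c] for c in classes}
        return max(fused, key=fused.get), fused

    # Hypothetical normalized match scores from an IR and a visible classifier.
    ir = {"id1": 0.9, "id2": 0.4}
    vis = {"id1": 0.3, "id2": 0.8}
    best, fused = fuse_scores(ir, vis, w=0.7)
    ```

    The weight w encodes how much the system trusts each modality; here the IR classifier dominates, so its preferred class wins despite the visible classifier disagreeing.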

  2. Study of optical design of three-dimensional digital ophthalmoscopes.

    PubMed

    Fang, Yi-Chin; Yen, Chih-Ta; Chu, Chin-Hsien

    2015-10-01

    This study primarily involves using optical zoom structures to design a three-dimensional (3D) human-eye optical sensory system with infrared and visible light. According to experimental data on two-dimensional (2D) and 3D images, human-eye recognition of 3D images is substantially higher (approximately 13.182%) than that of 2D images. Thus, 3D images are more effective than 2D images when they are used at work or in high-recognition devices. In the optical system design, infrared and visible light wavebands were incorporated as light sources to perform simulations. The results can be used to facilitate the design of optical systems suitable for 3D digital ophthalmoscopes.

  3. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783

  4. SU-E-J-42: Evaluation of Fiducial Markers for Ultrasound and X-Ray Images Used for Motion Tracking in Pancreas SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, SK; Armour, E; Su, L

    Purpose: Ultrasound tracking of target motion relies on the visibility of vascular and/or anatomical landmarks. However, this is challenging when the target is located far from vascular structures or in organs that lack ultrasound landmark structure, as in the case of pancreas cancer. The purpose of this study is to evaluate the visibility, artifacts, and distortions of fusion coils and solid gold markers in ultrasound, CT, CBCT, and kV images to identify markers suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment. Methods: Two fusion coils (1mm × 5mm and 1mm × 10mm) and a solid gold marker (0.8mm × 10mm) were embedded in a tissue-like ultrasound phantom. The phantom (5cm × 12cm × 20cm) was prepared using water, gelatin, and psyllium-hydrophilic-mucilloid fiber. Psyllium-hydrophilic mucilloid acts as a scattering medium to produce echo texture that simulates the sonographic appearance of human tissue in ultrasound images while maintaining an electron density close to that of water in CT images. Ultrasound images were acquired using a 3D-ultrasound system with markers embedded at 5, 10, and 15mm depth from the phantom surface. CT images were acquired using a Philips Big Bore CT, while CBCT and kV images were acquired with the XVI system (Elekta). Visual analysis was performed to compare the visibility of the markers, and visibility scores (1 to 3) were assigned. Results: All markers embedded at various depths are clearly visible (score of 3) in ultrasound images. Good visibility of all markers is observed in CT, CBCT, and kV images. The degree of artifact produced by the markers in CT and CBCT images is indistinguishable. No distortion is observed in images from any modality. Conclusion: All markers are visible in images across all modalities in this homogeneous tissue-like phantom. Human subject data are necessary to confirm the marker type suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment.

  5. Regional Sediment Management Experiment Using the Visible/Infrared Imager/Radiometer Suite and the Landsat Data Continuity Mission Sensor

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.

    2007-01-01

    The central aim of this RPC (Rapid Prototyping Capability) experiment is to demonstrate the use of the VIIRS (Visible/Infrared Imager/Radiometer Suite) and LDCM (Landsat Data Continuity Mission) sensors as key inputs to the RSM (Regional Sediment Management) GIS (geographic information system) DSS (Decision Support System). The project supports the Coastal Management National Application.

  6. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-07-21

    With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the gender of an observed human can be easily recognized by human perception, it remains a difficult task for computer vision systems. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems, based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
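
    The wHOG idea described in this abstract can be sketched as a quality-weighted combination of per-region HOG descriptors. The following is a minimal illustration, not the authors' implementation: the region layout, the weights, and the L2 normalization are assumptions made for the example.

```python
import numpy as np

def weighted_hog(hog_blocks, quality_weights):
    """Combine per-region HOG descriptors with per-region quality weights.

    hog_blocks      : (n_regions, n_bins) array of HOG histograms, one per body region
    quality_weights : (n_regions,) array in [0, 1]; low weight suppresses
                      background-dominated regions

    Returns a single L2-normalized feature vector (the 'wHOG' idea).
    """
    hog_blocks = np.asarray(hog_blocks, dtype=float)
    w = np.asarray(quality_weights, dtype=float).reshape(-1, 1)
    feat = (hog_blocks * w).ravel()          # down-weight low-quality regions
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat

# Hypothetical example: 3 body regions x 4 orientation bins
blocks = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
feat = weighted_hog(blocks, [1.0, 0.5, 0.0])  # third region treated as pure background
```

    A classifier would then be trained on `feat` instead of the raw concatenated HOG, so that background-dominated regions contribute little to the decision.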

  7. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the gender of an observed human can be easily recognized by human perception, it remains a difficult task for computer vision systems. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems, based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images. PMID:27455264

  8. Odyssey/White Rock

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These Mars Odyssey images show the 'White Rock' feature on Mars in both infrared (left) and visible (right) wavelengths. The images were acquired simultaneously on March 11, 2002. The box shows where the visible image is located in the infrared image. 'White Rock' is the unofficial name for this unusual landform that was first observed during the Mariner 9 mission in the early 1970s. The variations in brightness in the infrared image are due to differences in surface temperature, where dark is cool and bright is warm. The dramatic differences between the infrared and visible views of White Rock are the result of solar heating. The relatively bright surfaces observed at visible wavelengths reflect more solar energy than the darker surfaces, allowing them to stay cooler, and thus they appear dark in the infrared image. The new thermal emission imaging system data will help to address the long-standing question of whether the White Rock deposit was produced in an ancient crater lake or by dry processes of volcanic or wind deposition. The infrared image has a resolution of 100 meters (328 feet) per pixel and is 32 kilometers (20 miles) wide. The visible image has a resolution of 18 meters per pixel and is approximately 18 kilometers (11 miles) wide. The images are centered at 8.2 degrees south latitude and 24.9 degrees east longitude.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  9. Broadband image sensor array based on graphene-CMOS integration

    NASA Astrophysics Data System (ADS)

    Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank

    2017-06-01

    Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty to combine semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.

  10. A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes

    NASA Astrophysics Data System (ADS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-05-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data set exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  11. A Comparison of Visual Statistics for the Image Enhancement of FORESITE Aerial Images with Those of Major Image Classes

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-01-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data set exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  12. The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.

    PubMed

    Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A

    2010-08-01

    The iris is regarded as one of the most useful traits for biometric recognition, and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore suitable only for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available for researchers concerned with visible wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.

  13. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Image-intensified fluoroscopic x-ray system. 892... fluoroscopic x-ray system. (a) Identification. An image-intensified fluoroscopic x-ray system is a device intended to visualize anatomical structures by converting a pattern of x-radiation into a visible image...

  14. 21 CFR 892.1650 - Image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Image-intensified fluoroscopic x-ray system. 892... fluoroscopic x-ray system. (a) Identification. An image-intensified fluoroscopic x-ray system is a device intended to visualize anatomical structures by converting a pattern of x-radiation into a visible image...

  15. Coastal Research Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Coastal Research Imaging Spectrometer (CRIS) is an airborne remote-sensing system designed specifically for research on the physical, chemical, and biological characteristics of coastal waters. The CRIS includes a visible-light hyperspectral imaging subsystem for measuring the color of water, which contains information on the biota, sediment, and nutrient contents of the water. The CRIS also includes an infrared imaging subsystem, which provides information on the temperature of the water. The combination of measurements enables investigation of biological effects of both natural and artificial flows of water from land into the ocean, including diffuse and point-source flows that may contain biological and/or chemical pollutants. Temperature is an important element of such measurements because temperature contrasts can often be used to distinguish among flows from different sources: for example, a sewage outflow could manifest itself in spectral images as a local high-temperature anomaly. Both the visible and infrared subsystems scan in "pushbroom" mode: that is, an aircraft carrying the system moves along a ground track, the system is aimed downward, and image data are acquired in across-track linear arrays of pixels. Both subsystems operate at a frame rate of 30 Hz. The infrared and visible-light optics are adjusted so that both subsystems are aimed at the same moving swath, which has an across-track angular width of 15°. Data from the infrared and visible imaging subsystems are stored in the same file along with aircraft-position data acquired by a Global Positioning System receiver. The combination of the three sets of data is used to construct infrared and hyperspectral maps of the scanned areas.

  16. Medical imaging systems

    DOEpatents

    Frangioni, John V

    2013-06-25

    A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparascopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.

  17. Active imaging with the aids of polarization retrieve in turbid media system

    NASA Astrophysics Data System (ADS)

    Tao, Qiangqiang; Sun, Yongxuan; Shen, Fei; Xu, Qiang; Gao, Jun; Guo, Zhongyi

    2016-01-01

    We propose a novel active imaging method based on polarization retrieval (PR) in a turbid media system. In our simulations, a Monte Carlo (MC) algorithm has been used to investigate the scattering process between the incident photons and the scattering particles, and a visually concordant object, with different polarization characteristics in different regions, has been selected as the original target placed in the turbid medium. Under linearly and circularly polarized illumination, the simulation results demonstrate that the corresponding polarization properties can provide additional information for imaging, and that the contrast of the polarization image is greatly enhanced compared to the simple intensity image in the turbid medium. Moreover, the polarization image adjusted by the PR method further enhances visibility and contrast. In addition, with the PR imaging method, visibility improves as particle size increases within the Mie regime, because of the increased forward scattering. In general, under the same circumstances, circular polarization images offer better contrast and visibility than linear ones. The results indicate that the PR imaging method is most applicable to scattering media with relatively large particles, such as aerosols, heavy fog, cumulus, and seawater, as well as to biological tissues and blood media.
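
    The extra contrast that a polarization channel provides can be illustrated with a degree-of-linear-polarization (DoLP) image computed from Stokes parameters. This is a generic sketch of the principle, not the authors' Monte Carlo code; the scene values are invented for the example.

```python
import numpy as np

def dolp(I, Q, U, eps=1e-12):
    """Degree of linear polarization per pixel from Stokes images I, Q, U."""
    return np.sqrt(Q**2 + U**2) / (I + eps)

# Hypothetical scene: depolarizing background, polarization-preserving target.
# In intensity (I) alone the target is invisible; in DoLP it stands out.
I = np.ones((4, 4))
Q = np.zeros((4, 4))
U = np.zeros((4, 4))
Q[1:3, 1:3] = 0.6                          # target region retains linear polarization
p = dolp(I, Q, U)
contrast = p[1:3, 1:3].mean() - p[0, 0]    # target-background DoLP contrast
```

    Here the intensity image is uniform (zero contrast), while the DoLP image separates target from background, which is the effect the abstract reports for polarization imaging in turbid media.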

  18. Analysis of simulated image sequences from sensors for restricted-visibility operations

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar

    1991-01-01

    A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.

  19. Imaging of Stellar Surfaces with the Navy Precision Optical Interferometer

    NASA Astrophysics Data System (ADS)

    Jorgensen, A.; Schmitt, H. R.; van Belle, G. T.; Hutter, Clark; Mozurkewich, D.; Armstrong, J. T.; Baines, E. K.; Restaino, S. R.

    The Navy Precision Optical Interferometer (NPOI) has a unique layout which is particularly well suited for high-resolution interferometric imaging. By combining the NPOI layout with a new data acquisition and fringe tracking system, we are progressing toward an imaging capability which will exceed that of any other interferometer in operation. The project, funded by the National Science Foundation, combines several existing advances and infrastructure at NPOI with modest enhancements. For optimal imaging there are several requirements that should be fulfilled. The observatory should be capable of measuring visibilities on a wide range of baseline lengths and orientations, providing complete UV coverage in a short period of time. It should measure visibility amplitudes with good SNR on all baselines, as critical imaging information is often contained in low-amplitude visibilities. It should measure the visibility phase on all baselines. The technologies which can achieve this are the NPOI Y-shaped array, with (nearly) equal spacing between telescopes, and an ability for rapid reconfiguration. Placing 6 telescopes in a row makes it possible to measure visibilities into the 4th lobe of the visibility function. By arranging the available telescopes carefully we will be able to switch, every few days, between 3 different 6-station chains which provide symmetric coverage in the UV (Fourier) plane without moving any telescopes, only by moving beam relay mirrors. The 6-station chains are important to achieve the highest imaging resolution, and switching rapidly between station chains provides uniform coverage. Coherent integration techniques can be used to obtain good SNR on very small visibilities. Coherently integrated visibilities can be used for imaging with standard radio imaging packages such as AIPS. The commissioning of one additional station, the use of new data acquisition hardware, and new fringe tracking algorithms are the enhancements which make this project possible.
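
    The coherent integration mentioned above can be sketched in a few lines: if the fringe tracker supplies a per-sample estimate of the atmospheric phase, removing it before averaging lets weak complex visibilities add in phase. This is a toy illustration of the principle, not NPOI's pipeline; the data values are invented.

```python
import numpy as np

def coherent_average(vis, tracker_phase):
    """Coherently average complex fringe visibilities.

    vis           : complex visibility samples
    tracker_phase : fringe-tracker estimate of the atmospheric phase (rad)
                    for each sample

    Removing the tracked phase before averaging makes the samples add in
    phase, so SNR grows with sample count even for very small amplitudes.
    """
    vis = np.asarray(vis, dtype=complex)
    phase = np.asarray(tracker_phase, dtype=float)
    return np.mean(vis * np.exp(-1j * phase))

# Toy data: a weak 0.1 visibility scrambled by atmospheric phase
phases = np.array([0.5, 1.0, -0.3, 2.1])
vis = 0.1 * np.exp(1j * phases)
v_coh = abs(coherent_average(vis, phases))  # recovers the 0.1 amplitude
v_naive = abs(np.mean(vis))                 # naive average is attenuated
```

    With real data the tracker phase is only an estimate, so the recovered amplitude is biased slightly low; calibrating that loss is part of why coherently integrated visibilities can then feed standard radio imaging packages such as AIPS.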

  20. Imaging System Performance and Visibility as Affected by the Physical Environment

    DTIC Science & Technology

    2013-09-30

    devoted to the topic of light propagation and imaging across the air-sea interface and within the surface boundary layer of natural water bodies. ... Zaneveld and Pegau (2003) was used to estimate the horizontal visibility of a black target, y: y = 4.8 / α, (2) where α is the ... attenuation coefficient at 532 nm, was necessary for predictions of horizontal visibility of a black target. Equations (2) and (3) were applied to IOP data
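
    The visibility relation quoted in this snippet, y = 4.8 / α, is simple enough to state as a function. The example attenuation value below is illustrative, not taken from the report.

```python
def horizontal_visibility(alpha_532nm):
    """Horizontal visibility of a black target (Zaneveld and Pegau, 2003):
    y = 4.8 / alpha.

    alpha_532nm : beam attenuation coefficient at 532 nm, in 1/m
    Returns the visibility range y in meters.
    """
    if alpha_532nm <= 0:
        raise ValueError("attenuation coefficient must be positive")
    return 4.8 / alpha_532nm

y = horizontal_visibility(0.24)  # illustrative coastal value: about 20 m
```

    The inverse dependence on α is the key point: doubling the attenuation coefficient halves the range at which a black target remains visible.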

  1. 21 CFR 892.1660 - Non-image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Non-image-intensified fluoroscopic x-ray system... fluoroscopic x-ray system. (a) Identification. A non-image-intensified fluoroscopic x-ray system is a device... of x-radiation into a visible image. This generic type of device may include signal analysis and...

  2. 21 CFR 892.1660 - Non-image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Non-image-intensified fluoroscopic x-ray system... fluoroscopic x-ray system. (a) Identification. A non-image-intensified fluoroscopic x-ray system is a device... of x-radiation into a visible image. This generic type of device may include signal analysis and...

  3. Visible digital watermarking system using perceptual models

    NASA Astrophysics Data System (ADS)

    Cheng, Qiang; Huang, Thomas S.

    2001-03-01

    This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image, for the purposes of immediate claim of copyright, instantaneous recognition of the owner or creator, or deterrence to piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models so that the watermark is visually uniform. The resulting watermarked image is visually pleasing and unobtrusive. The location, size and strength of the watermark vary randomly with the underlying image. The randomization makes automatic removal of the watermark difficult even when the algorithm is publicly known, provided the key to the random sequence generator is kept secret. The experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.
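
    The core idea of a keyed, translucent overlay can be sketched as follows. This is a toy version only: the paper's DCT/DWT perceptual modulation is not reproduced, and the alpha value, image sizes, and key are assumptions for the example.

```python
import numpy as np

def visible_watermark(image, mark, key, alpha=0.3):
    """Overlay a watermark translucently at a key-dependent random location.

    A toy version of the scheme in the abstract: placement is derived from a
    secret key, so automatic removal is hard without the key. Grayscale
    float images in [0, 1] are assumed.
    """
    rng = np.random.default_rng(key)      # the secret key seeds the generator
    H, W = image.shape
    h, w = mark.shape
    r = int(rng.integers(0, H - h + 1))   # random top-left corner
    c = int(rng.integers(0, W - w + 1))
    out = image.copy()
    region = out[r:r + h, c:c + w]
    out[r:r + h, c:c + w] = (1 - alpha) * region + alpha * mark
    return out

img = np.full((32, 32), 0.5)   # flat gray primary image
mark = np.ones((8, 8))         # white square watermark
wm = visible_watermark(img, mark, key=42)
```

    In the actual system the strength (here a fixed `alpha`) would also vary with local image content according to the perceptual models, keeping the mark uniformly visible yet unobtrusive.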

  4. Design of a Remote Infrared Images and Other Data Acquisition Station for outdoor applications

    NASA Astrophysics Data System (ADS)

    Béland, M.-A.; Djupkep, F. B. D.; Bendada, A.; Maldague, X.; Ferrarini, G.; Bison, P.; Grinzato, E.

    2013-05-01

    The Infrared Images and Other Data Acquisition Station enables a user, who is located inside a laboratory, to acquire visible and infrared images and distances in an outdoor environment with the help of an Internet connection. This station can acquire data using an infrared camera, a visible camera, and a rangefinder. The system can be used through a web page or through Python functions.

  5. In-vessel visible inspection system on KSTAR

    NASA Astrophysics Data System (ADS)

    Chung, Jinil; Seo, D. C.

    2008-08-01

    To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korea Superconducting Tokamak Advanced Research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operating condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operating results of the visible inspection system, with images of the KSTAR Ohmic discharges during the first plasma campaign.

  6. GOES Imager Instrument - NOAA Satellite Information System (NOAASIS);

    Science.gov Websites

    Instrument Characteristics (GOES I-M). Channel number: 1 (Visible), 2 (Shortwave), 3 (Moisture), 4 (IR 1), 5 (IR 2). Infrared: 30 minutes typical. System absolute accuracy, IR channels: less than or equal to 1 K. Visible ...

  7. TU-E-217BCD-06: Cone Beam Breast CT with a High Resolution Flat Panel Detector-Improvement of Calcification Visibility.

    PubMed

    Shen, Y; Zhong, Y; Lai, C; Wang, T; Shaw, C

    2012-06-01

    To investigate the advantage of a high resolution flat panel detector for improving the visibility of microcalcifications (MCs) in cone beam breast CT. Methods: A paraffin cylinder was used to simulate a 100% adipose breast. Calcium carbonate grains, ranging from 125-140 μm to 224-250 μm in size, were used to simulate the MCs. Groups of 25 same-size MCs were embedded at the phantom center. The phantom was scanned with a bench-top CBCT system at various exposure levels. A 75 μm pitch flat panel detector (Dexela 2923, Perkin Elmer) with a 500 μm thick CsI scintillator plate was used as the high resolution detector. A 194 μm pitch detector (Paxscan 4030CB, Varian Medical Systems) was used for reference. 300 projection images were acquired over 360° and reconstructed. The images were reviewed by 6 readers. The MC visibility was quantified as the fraction of visible MCs and averaged for comparison. The visibility was plotted as a function of the estimated dose level for various MC sizes and detectors. The MTFs and DQEs were measured and compared. Results: For imaging small (200 μm and smaller) MCs, the visibility achieved with the 75 μm pitch detector was found to be significantly higher than that achieved with the 194 μm pitch detector. For imaging larger MCs, there was little advantage in using the 75 μm pitch detector. Using the 75 μm pitch detector, MCs as small as 180 μm could be imaged to achieve a visibility of 78% with an isocenter tissue dose of ~20 mGy, versus 62% achieved with the 194 μm pitch detector at the same dose level. Conclusion: A fine-pitch flat panel detector extends imaging capability to higher spatial frequencies and thus helps improve visibility when imaging small MCs. This work was supported in part by grants CA104759, CA13852 and CA124585 from NIH-NCI, a grant EB00117 from NIH-NIBIB, and a subcontract from NIST-ATP. © 2012 American Association of Physicists in Medicine.

  8. SU-E-J-59: Feasibility of Markerless Tumor Tracking by Sequential Dual-Energy Fluoroscopy On a Clinical Tumor Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhont, J; Poels, K; Verellen, D

    2015-06-15

    Purpose: To evaluate the feasibility of markerless tumor tracking through the implementation of a novel dual-energy imaging approach into the clinical dynamic tracking (DT) workflow of the Vero SBRT system. Methods: Two sequential 20 s (11 Hz) fluoroscopy sequences were acquired at the start of one fraction for 7 patients treated for primary and metastatic lung cancer with DT on the Vero system. Sequences were acquired using 2 on-board kV imaging systems located at ±45° from the MV beam axis, at respectively 60 kVp (3.2 mAs) and 120 kVp (2.0 mAs). Offline, a normalized cross-correlation algorithm was applied to match the high (HE) and low energy (LE) images. Per breathing phase (inhale, exhale, maximum inhale and maximum exhale), the 5 best-matching HE and LE couples were extracted for DE subtraction. A contrast analysis according to gross tumor volume was conducted based on the contrast-to-noise ratio (CNR). Improved tumor visibility was quantified using an improvement ratio. Results: Using the implanted fiducial as a benchmark, HE-LE sequence matching was effective for 13 out of 14 imaging angles. Overlying bony anatomy was removed on all DE images. With the exception of two imaging angles, the DE images showed no significantly improved tumor visibility compared to HE images, with an improvement ratio averaged over all patients of 1.46 ± 1.64. Qualitatively, it was observed that for those imaging angles that showed no significantly improved CNR, the tumor tissue could not be reliably visualized on either HE or DE images due to a total or partial overlap with other soft tissue. Conclusion: Dual-energy subtraction imaging by sequential orthogonal fluoroscopy was shown to be feasible by implementing an additional LE fluoroscopy sequence. However, for most imaging angles, DE images did not provide improved tumor visibility over single-energy images. Optimizing imaging angles is likely to improve tumor visibility and the efficacy of dual-energy imaging. This work was in part sponsored by corporate funding from BrainLAB AG (Feldkirchen, Germany).
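
    The contrast-to-noise ratio used in this contrast analysis has a standard definition that can be stated directly. The ROI values below are invented for illustration; they are not the study's data.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background)."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std()

# Hypothetical pixel values from a tumor ROI and a nearby background ROI
sig = np.array([120.0, 118.0, 122.0, 120.0])
bg  = np.array([100.0, 102.0,  98.0, 100.0])
c = cnr(sig, bg)  # ~14.1 for these values
```

    An improvement ratio such as the one reported here would then be the CNR of the dual-energy image divided by the CNR of the single-energy image over the same ROIs.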

  9. Design of Dual-Road Transportable Portal Monitoring System for Visible Light and Gamma-Ray Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karnowski, Thomas Paul; Cunningham, Mark F; Goddard Jr, James Samuel

    2010-01-01

    The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used, with a third alignment camera for motion compensation, and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.

  10. Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.

    2010-04-01

    The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used, with a third "alignment" camera for motion compensation, and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
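
    The per-vehicle signature harvesting described here reduces, at its core, to summing the synchronized gamma-ray frames over the interval in which the machine-vision tracker reports the vehicle in view. A minimal sketch under that assumption (the frame shapes and presence mask are invented for the example):

```python
import numpy as np

def vehicle_signature(frames, in_fov):
    """Integrate gamma-ray images over the frames in which one vehicle is visible.

    frames : (n_frames, H, W) array of per-frame gamma-ray count images
    in_fov : boolean sequence of length n_frames, True while the tracked
             vehicle (as reported by the visible-light machine-vision
             system) is in the field of view

    Returns the summed count image attributed to that vehicle.
    """
    frames = np.asarray(frames)
    mask = np.asarray(in_fov, dtype=bool)
    return frames[mask].sum(axis=0)

# Toy example: 5 synchronized frames, vehicle present in frames 1-3
frames = np.ones((5, 2, 2))
sig = vehicle_signature(frames, [False, True, True, True, False])
```

    Because each vehicle gets its own integration window, counts from a neighboring vehicle in a different lane are not mixed into the signature, which is the source-confusion advantage over non-imaging monitors.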

  11. Towards combined optical coherence tomography and hyper-spectral imaging for gastrointestinal endoscopy

    NASA Astrophysics Data System (ADS)

    Attendu, Xavier; Crunelle, Camille; de Sivry-Houle, Martin Poinsinet; Maubois, Billie; Urbain, Joanie; Turrell, Chloe; Strupler, Mathias; Godbout, Nicolas; Boudoux, Caroline

    2018-04-01

Previous works have demonstrated the feasibility of combining optical coherence tomography (OCT) and hyper-spectral imaging (HSI) through a single double-clad fiber (DCF). In this proceeding we present the continued development of a system combining both modalities and capable of rapid imaging. We discuss the development of a rapidly scanning, dual-band, polygonal swept-source system that combines NIR (1260-1340 nm) and visible (450-800 nm) wavelengths. The NIR band is used for OCT imaging while the visible light allows HSI. Scanning rates up to 24 kHz are reported. Furthermore, we present and discuss the fiber system used for light transport, delivery, and collection, and the custom signal acquisition software. Key points include the use of a double-clad fiber coupler as well as important alignments and back-reflection management. Simultaneous and co-registered imaging with both modalities is presented in a bench-top system.

  12. Radiometric sensitivity comparisons of multispectral imaging systems

    NASA Technical Reports Server (NTRS)

    Lu, Nadine C.; Slater, Philip N.

    1989-01-01

Multispectral imaging systems provide much of the basic data used by the land and ocean civilian remote-sensing community. There are numerous multispectral imaging systems which have been and are being developed. A common way to compare the radiometric performance of these systems is to examine their noise-equivalent change in reflectance, NE Delta-rho. The NE Delta-rho of a system is the reflectance difference that is equal to the noise in the recorded signal. A comparison is made of the noise-equivalent change in reflectance of seven different multispectral imaging systems (AVHRR, AVIRIS, ETM, HIRIS, MODIS-N, SPOT-1 HRV, and TM) for a set of three atmospheric conditions (continental aerosol with 23-km visibility, continental aerosol with 5-km visibility, and a Rayleigh atmosphere), five values of ground reflectance (0.01, 0.10, 0.25, 0.50, and 1.00), a nadir viewing angle, and a solar zenith angle of 45 deg.
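The NE Delta-rho figure of merit can be illustrated numerically. Assuming a sensor whose at-sensor radiance is linear in ground reflectance, NE Delta-rho = NE Delta-L / (dL/d rho), with the slope estimated from the radiances at two reflectance levels. The numbers below are hypothetical, not taken from the compared systems:

```python
def ne_delta_rho(radiance_lo, radiance_hi, rho_lo, rho_hi, noise_equiv_radiance):
    """Noise-equivalent change in reflectance for a linear sensor model.

    The at-sensor radiance L is assumed linear in ground reflectance rho,
    so dL/drho is estimated from two (rho, L) pairs, and the reflectance
    difference equal to the sensor noise is NE-delta-L / (dL/drho).
    """
    dL_drho = (radiance_hi - radiance_lo) / (rho_hi - rho_lo)
    return noise_equiv_radiance / dL_drho

# Hypothetical radiances (W m^-2 sr^-1 um^-1) at rho = 0.10 and rho = 0.50,
# with a noise-equivalent radiance of 0.2 in the same units:
print(ne_delta_rho(30.0, 110.0, 0.10, 0.50, 0.2))  # -> 0.001
```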

  13. 640x512 pixel InGaAs FPAs for short-wave infrared and visible light imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiumei; Yang, Bo; Huang, Songlei; Wei, Yang; Li, Xue; Zhu, Xianliang; Li, Tao; Chen, Yu; Gong, Haimei

    2017-08-01

The spectral irradiance of moonlight and airglow lies mainly in the wavelength region from the visible to the short-wave infrared (SWIR) band. Imaging over the visible-to-SWIR range is of great significance for applications such as civil safety, night vision, and agricultural sorting. In this paper, 640×512 visible-SWIR InGaAs focal plane arrays (FPAs) were studied for night vision and SWIR imaging. A special epitaxial wafer structure with an etch-stop layer was designed and developed. Planar-type 640×512 InGaAs detector arrays were fabricated. The photosensitive arrays were bonded to the readout circuit through indium bumps using a flip-chip process. The InP substrate was then removed by mechanical thinning and chemical wet etching, so that visible light can reach the InGaAs absorption layer and be detected. As a result, the detection spectrum of the InGaAs FPAs has been extended into the visible, covering 0.5 μm to 1.7 μm. The quantum efficiency is approximately 15% at 0.5 μm, 30% at 0.7 μm, 50% at 0.8 μm, and 90% at 1.55 μm. The average peak detectivity is higher than 2×10^12 cm·Hz^(1/2)/W at room temperature with an integration time of 10 ms. The visible-SWIR InGaAs FPAs were applied to an imaging system for SWIR and visible light imaging.

  14. First THEMIS Infrared and Visible Images of Mars

    NASA Technical Reports Server (NTRS)

    2001-01-01

This picture shows both a visible and a thermal infrared image taken by the thermal emission imaging system on NASA's 2001 Mars Odyssey spacecraft on November 2, 2001. The images were taken as part of the ongoing calibration and testing of the camera system as the spacecraft orbited Mars on its 13th revolution of the planet.

    The visible wavelength image, shown on the right in black and white, was obtained using one of the instrument's five visible filters. The spacecraft was approximately 22,000 kilometers (about 13,600 miles) above Mars looking down toward the south pole when this image was acquired. It is late spring in the martian southern hemisphere.

    The thermal infrared image, center, shows the temperature of the surface in color. The circular feature seen in blue is the extremely cold martian south polar carbon dioxide ice cap. The instrument has measured a temperature of minus 120 degrees Celsius (minus 184 degrees Fahrenheit) on the south polar ice cap. The polar cap is more than 900 kilometers (540 miles) in diameter at this time.

    The visible image shows additional details along the edge of the ice cap, as well as atmospheric hazes near the cap. The view of the surface appears hazy due to dust that still remains in the martian atmosphere from the massive martian dust storms that have occurred over the past several months.

The infrared image covers a length of over 6,500 kilometers (3,900 miles), spanning the planet from limb to limb, with a resolution of approximately 5.5 kilometers per picture element, or pixel, (3.4 miles per pixel) at the point directly beneath the spacecraft. The visible image has a resolution of approximately 1 kilometer per pixel (0.6 miles per pixel) and covers an area roughly the size of the states of Arizona and New Mexico combined.

An annotated image is available at the same resolution in TIFF format on the original site.

NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The thermal-emission imaging system was developed at Arizona State University, Tempe, with Raytheon Santa Barbara Remote Sensing, Santa Barbara, Calif. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  15. The design of visible system for improving the measurement accuracy of imaging points

    NASA Astrophysics Data System (ADS)

    Shan, Qiu-sha; Li, Gang; Zeng, Luan; Liu, Kai; Yan, Pei-pei; Duan, Jing; Jiang, Kai

    2018-02-01

Binocular stereoscopic measurement technology is widely applied in robot vision and 3D measurement, and measurement precision is a critical factor: in 3D coordinate measurement in particular, high accuracy places stringent requirements on the distortion of the optical system. To improve the measurement accuracy of imaging points and reduce their distortion, the optical system must satisfy an extra-low distortion requirement of less than 0.1%. A transmissive visible optical lens with a telecentric beam path in image space was therefore designed, adopting the imaging model of binocular stereo vision to image a drone at finite distance. The optical system uses a complex double-Gauss structure, with the pupil stop placed at the focal plane of the rear group so that the exit pupil lies at infinity, realizing the telecentric beam path in image space. The main optical parameters are as follows: the spectral range is the visible waveband, the effective focal length is f' = 30 mm, the relative aperture is 1/3, and the field of view is 21°. The final design results show that the RMS spot size of the optical lens at the maximum field of view is 2.3 μm, less than one pixel (3.45 μm); the distortion is less than 0.1%, so the system's extra-low distortion avoids subsequent image distortion correction; and the modulation transfer function of the optical lens is 0.58 at 145 lp/mm, so the imaging quality is close to the diffraction limit. The system has a simple structure and satisfies the required optical specifications. Finally, measurement of a drone at finite distance was achieved based on the binocular stereo vision imaging model.

  16. Visible, Very Near IR and Short Wave IR Hyperspectral Drone Imaging System for Agriculture and Natural Water Applications

    NASA Astrophysics Data System (ADS)

    Saari, H.; Akujärvi, A.; Holmlund, C.; Ojanen, H.; Kaivosoja, J.; Nissinen, A.; Niemeläinen, O.

    2017-10-01

The accurate determination of the quality parameters of crops requires a spectral range from 400 nm to 2500 nm (Kawamura et al., 2010, Thenkabail et al., 2002). Presently, the hyperspectral imaging systems that cover this wavelength range consist of several separate hyperspectral imagers, and the system weight is from 5 to 15 kg. In addition, the cost of Short Wave Infrared (SWIR) cameras is high (~50 k€). VTT has previously developed compact hyperspectral imagers for drones and CubeSats for the Visible and Very Near Infrared (VNIR) spectral ranges (Saari et al., 2013, Mannila et al., 2013, Näsilä et al., 2016). Recently VTT has started to develop a hyperspectral imaging system that will enable simultaneous imaging in the Visible, VNIR, and SWIR spectral bands. The system can be operated from a drone, on a camera stand, or attached to a tractor. The targeted main applications of the DroneKnowledge hyperspectral system are grass, peas, and cereals. In this paper the characteristics of the built system are briefly described. The system was used for spectral measurements of wheat, several grass species, and pea plants fixed to the camera mount in test fields in Southern Finland and in the greenhouse. The wheat, grass, and pea field measurements were also carried out using the system mounted on the tractor. The work is part of the Finnish nationally funded DroneKnowledge - Towards knowledge based export of small UAS remote sensing technology project.

  17. The Adaptive Optics Lucky Imager: Diffraction limited imaging at visible wavelengths with large ground-based telescopes

    NASA Astrophysics Data System (ADS)

    Crass, Jonathan; Mackay, Craig; King, David; Rebolo-López, Rafael; Labadie, Lucas; Puga, Marta; Oscoz, Alejandro; González Escalera, Victor; Pérez Garrido, Antonio; López, Roberto; Pérez-Prieto, Jorge; Rodríguez-Ramos, Luis; Velasco, Sergio; Villó, Isidro

    2015-01-01

One of the continuing challenges facing astronomers today is the need to obtain ever higher resolution images of the sky. Whether studying nearby crowded fields or distant objects, with increased resolution comes the ability to probe systems in more detail and advance our understanding of the Universe. Obtaining these high-resolution images at visible wavelengths, however, has previously been limited to the Hubble Space Telescope (HST), due to atmospheric effects limiting the spatial resolution of ground-based telescopes to a fraction of their potential. With HST now having a finite lifespan, it is prudent to investigate other techniques capable of providing this kind of observation from the ground. Maintaining this capability is one of the goals of the Adaptive Optics Lucky Imager (AOLI). Achieving the highest resolutions requires the largest telescope apertures; however, this comes at the cost of increased atmospheric distortion. To overcome these atmospheric effects, two main techniques are employed today: adaptive optics (AO) and lucky imaging. Individually, these techniques are unable to provide diffraction-limited imaging in the visible on large ground-based telescopes; AO currently only works at infrared wavelengths, while lucky imaging loses effectiveness on telescopes greater than 2.5 metres in diameter. The limitations of both techniques can be overcome by combining them to provide diffraction-limited imaging at visible wavelengths from the ground. The Adaptive Optics Lucky Imager is being developed as a European collaboration and combines AO and lucky imaging in a dedicated instrument for the first time. Initially for use on the 4.2 metre William Herschel Telescope, AOLI uses a low-order adaptive optics system to reduce the effects of atmospheric turbulence before imaging with a lucky-imaging-based science detector. The AO system employs a novel type of wavefront sensor, the non-linear Curvature Wavefront Sensor (nlCWFS), which provides significant sky coverage using natural guide stars alone. Here we present an overview of the instrument design, results from the first on-sky and laboratory testing, and ongoing development work on the instrument and its adaptive optics system.

  18. Development of ultraviolet- and visible-light one-shot spectral domain optical coherence tomography and in situ measurements of human skin

    NASA Astrophysics Data System (ADS)

    Hirayama, Heijiro; Nakamura, Sohichiro

    2015-07-01

We have developed ultraviolet (UV)- and visible-light one-shot spectral domain (SD) optical coherence tomography (OCT) that enables in situ imaging of human skin with an arbitrary wavelength in the UV-visible-light region (370-800 nm). We alleviated the computational burden for each color OCT image by physically dispersing the irradiating light with a color filter. The system consists of SD-OCT with multicylindrical lenses; thus, mechanical scanning of the mirror or stage is unnecessary to obtain an OCT image. Therefore, only a few tens of milliseconds are needed to obtain single-image data. We acquired OCT images of one subject's skin in vivo and of a skin excision ex vivo for red (R, 650±20 nm), green (G, 550±20 nm), blue (B, 450±20 nm), and UV (397±5 nm) light. In the visible-light spectrum, R light penetrated the skin and was reflected at a greater depth than G or B light. On the skin excision, we demonstrated that UV light reached the dermal layer. We anticipate that basic knowledge about the spectral properties of human skin in the depth direction can be acquired with this system.

  19. Development of ultraviolet- and visible-light one-shot spectral domain optical coherence tomography and in situ measurements of human skin.

    PubMed

    Hirayama, Heijiro; Nakamura, Sohichiro

    2015-07-01

We have developed ultraviolet (UV)- and visible-light one-shot spectral domain (SD) optical coherence tomography (OCT) that enables in situ imaging of human skin with an arbitrary wavelength in the UV-visible-light region (370-800 nm). We alleviated the computational burden for each color OCT image by physically dispersing the irradiating light with a color filter. The system consists of SD-OCT with multicylindrical lenses; thus, mechanical scanning of the mirror or stage is unnecessary to obtain an OCT image. Therefore, only a few tens of milliseconds are needed to obtain single-image data. We acquired OCT images of one subject's skin in vivo and of a skin excision ex vivo for red (R, 650 ± 20 nm), green (G, 550 ± 20 nm), blue (B, 450 ± 20 nm), and UV (397 ± 5 nm) light. In the visible-light spectrum, R light penetrated the skin and was reflected at a greater depth than G or B light. On the skin excision, we demonstrated that UV light reached the dermal layer. We anticipate that basic knowledge about the spectral properties of human skin in the depth direction can be acquired with this system.

  20. International Space Station from Space Shuttle Endeavour

    NASA Technical Reports Server (NTRS)

    2007-01-01

    The crew of the Space Shuttle Endeavour took this spectacular image of the International Space Station during the STS118 mission, August 8-21, 2007. The image was acquired by an astronaut through one of the crew cabin windows, looking back over the length of the Shuttle. This oblique (looking at an angle from vertical, rather than straight down towards the Earth) image was acquired almost one hour after late inspection activities had begun. The sensor head of the Orbiter Boom Sensor System is visible at image top left. The entire Space Station is visible at image bottom center, set against the backdrop of the Ionian Sea approximately 330 kilometers below it. Other visible features of the southeastern Mediterranean region include the toe and heel of Italy's 'boot' at image lower left, and the western coastlines of Albania and Greece, which extend across image center. Farther towards the horizon, the Aegean and Black Seas are also visible. Featured astronaut photograph STS118-E-9469 was acquired by the STS-118 crew on August 19, 2007, with a Kodak 760C digital camera using a 28 mm lens, and is provided by the ISS Crew Earth Observations experiment and Image Science and Analysis Laboratory at Johnson Space Center.

  1. Debris Disk Dust Characterization through Spectral Types: Deep Visible-Light Imaging of Nine Systems

    NASA Astrophysics Data System (ADS)

    Choquet, Elodie

    2017-08-01

We propose STIS coronagraphy of 9 debris disks recently seen in the near-infrared from our re-analysis of archival NICMOS data. STIS coronagraphy will provide complementary visible-light images that will let us characterize the disk colors needed to place constraints on dust grain sizes, albedos, and anisotropy of scattering of these disks. With 3 times finer angular resolution and much better sensitivity, our STIS images will dramatically surpass the NICMOS discovery images, will more clearly reveal local disk structures and cleared inner regions, and will test for large-scale asymmetries in the dust distributions possibly triggered by associated planets in these systems. The exquisite sensitivity to visible-light scattering by submicron particles uniquely offered by STIS coronagraphy will let us detect and spatially characterize the diffuse halo of dust blown out of these systems by the host star's radiation pressure. Our sample includes disks around 3 low-mass stars, 3 solar-type stars, and 3 massive A stars; together with our STIS+NICMOS imaging of 6 additional disks around F and G stars, our sample covers the full range of spectral types and will let us perform a comparative study of dust distribution properties as a function of stellar mass and luminosity. Our sample makes up more than 1/3 of all debris disks imaged in scattered light to date, and will offer the first homogeneous characterization of the visible-light to near-IR properties of debris disk systems over a large range of spectral types. Our program will let us analyze how the dynamical balance is affected by initial conditions and star properties, and how it may be perturbed by gas drag or planet perturbations.

  2. Image fusion

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.

  3. Target discrimination of man-made objects using passive polarimetric signatures acquired in the visible and infrared spectral bands

    NASA Astrophysics Data System (ADS)

    Lavigne, Daniel A.; Breton, Mélanie; Fournier, Georges; Charette, Jean-François; Pichette, Mario; Rivet, Vincent; Bernier, Anne-Pier

    2011-10-01

Surveillance operations and search and rescue missions in both the civilian and military communities regularly exploit electro-optic imaging systems to detect targets of interest. By incorporating the polarization of light as supplementary information in such electro-optic imaging systems, it is possible to increase their target discrimination capabilities, since man-made objects are known to depolarize light in a different manner than natural backgrounds. Because electromagnetic radiation emitted and reflected from a smooth surface observed near a grazing angle becomes partially polarized in the visible and infrared wavelength bands, additional information about the shape, roughness, shading, and surface temperature of difficult targets can be extracted by effectively processing such reflected/emitted polarized signatures. This paper presents a set of polarimetric image processing algorithms devised to extract meaningful information from a broad range of man-made objects. Passive polarimetric signatures are acquired in the visible, shortwave infrared, midwave infrared, and longwave infrared bands using a fully automated imaging system developed at DRDC Valcartier. A fusion algorithm is used to enable the discrimination of objects lying in shadowed areas. Performance metrics, derived from the computed Stokes parameters, characterize the degree of polarization of man-made objects. Field experiments conducted during winter and summer demonstrate: 1) the utility of the imaging system for collecting polarized signatures of different objects in the visible and infrared spectral bands, and 2) the enhanced performance of target discrimination and fusion algorithms exploiting the polarized signatures of man-made objects against cluttered backgrounds.
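The Stokes-derived metrics mentioned above can be sketched from four intensity images taken behind a rotating polarizer; the degree of linear polarization (DoLP) follows directly. A minimal sketch, assuming the standard 0°/45°/90°/135° analyzer scheme rather than the DRDC system's actual pipeline:

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from intensity images acquired behind a polarizer at 0, 45, 90, 135 deg."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical component
    s2 = i45 - i135                      # +45 deg vs -45 deg component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp

# Fully polarized horizontal light: i0=1, i90=0, i45=i135=0.5 -> DoLP = 1
_, _, _, d = stokes_dolp(np.array([1.0]), np.array([0.5]),
                         np.array([0.0]), np.array([0.5]))
print(float(d[0]))  # -> 1.0
```

Man-made surfaces tend toward high DoLP near grazing angles, which is what the performance metrics in the paper exploit to separate them from natural clutter.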

  4. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.

    PubMed

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-03-23

Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination changes and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  5. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    PubMed Central

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-01-01

Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination changes and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works. PMID:29570690

  6. DM/LCWFC based adaptive optics system for large aperture telescopes imaging from visible to infrared waveband.

    PubMed

    Sun, Fei; Cao, Zhaoliang; Wang, Yukun; Zhang, Caihua; Zhang, Xingyun; Liu, Yong; Mu, Quanquan; Xuan, Li

    2016-11-28

Almost all deformable mirror (DM) based adaptive optics systems (AOSs) used on large aperture telescopes work in the infrared waveband due to the limited number of actuators available. To extend the imaging waveband to the visible, we propose a combined DM and liquid crystal wavefront corrector (DM/LCWFC) AOS. The LCWFC corrects the high-frequency aberrations corresponding to the visible waveband, and the aberrations of the infrared are corrected by the DM. The calculated results show that, for a 10 m telescope, a DM/LCWFC AOS containing a 1538-actuator DM and a 404 × 404 pixel LCWFC is equivalent to a DM-based AOS with 4057 actuators. This indicates that the DM/LCWFC AOS can work from the visible to the infrared on larger aperture telescopes. Simulations and a laboratory experiment were performed for a 2 m telescope. The experimental results show that, after correction, near diffraction-limited USAF target images are obtained in the 0.7-0.9 μm, 0.9-1.5 μm, and 1.5-1.7 μm wavebands respectively. Therefore, the DM/LCWFC AOS may be used to extend the imaging waveband of larger aperture telescopes to the visible. It is well suited to the observation of space objects and to scientific research in astronomy.

  7. Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D.

    PubMed

    Lasnier, C J; Allen, S L; Ellis, R E; Fenstermacher, M E; McLean, A G; Meyer, W H; Morris, K; Seppala, L G; Crabtree, K; Van Zeeland, M A

    2014-11-01

    An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.

  8. Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D

    DOE PAGES

    Lasnier, Charles J.; Allen, Steve L.; Ellis, Ronald E.; ...

    2014-08-26

An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.

  9. [Techniques for pixel response nonuniformity correction of CCD in interferential imaging spectrometer].

    PubMed

    Yao, Tao; Yin, Shi-Min; Xiangli, Bin; Lü, Qun-Bo

    2010-06-01

Based on an in-depth analysis of the relative radiometric calibration theorem and the acquired calibration data for pixel response nonuniformity correction of the CCD (charge-coupled device) in a spaceborne visible interferential imaging spectrometer, a CCD pixel response nonuniformity correction method suited to visible and infrared interferential imaging spectrometer systems was developed, effectively resolving the engineering problem of nonuniformity correction in detector arrays for interferential imaging spectrometers. The quantitative impact of CCD nonuniformity on interferogram correction and recovered-spectrum accuracy is also given. Furthermore, an improved method is proposed in which calibration and nonuniformity correction are performed after the instrument is fully assembled. This method saves time and manpower; it corrects nonuniformity caused by sources in the spectrometer system other than the CCD itself, allows recalibration data to be acquired when the working environment changes, and more effectively improves the nonuniformity calibration accuracy of interferential imaging spectrometers.

  10. Modular wide spectrum lighting system for diagnosis, conservation, and restoration

    NASA Astrophysics Data System (ADS)

    Miccoli, Matteo; Melis, Marcello

    2013-05-01

In the framework of imaging, lighting systems have always played a key role due to the primary importance of both the uniformity of the illumination and the richness of the emitted spectra. Multispectral imaging, i.e. imaging systems working inside and outside the visible wavelength range, is even more demanding and requires further attention to a number of parameters characterizing the lighting system. A critical issue for lighting systems, even in visible light, is the shape of the emitted spectrum and (in the visible range only) the Color Rendering Index. The color we perceive from a surface is our eyes' interpretation of the linear spectral combination of the illuminant spectrum and the surface spectral reflectance. If there is a lack of energy in a portion of the visible spectrum, that portion will appear black to our eyes (and to any instrument) regardless of the actual reflectance of the surface. In other words, a lack in the exciting energy hides part of the spectral reflectance of the observed subject. Furthermore, the wider the investigated spectrum, the fewer the light sources able to cover such a range. In this paper we show how we solved both the problem of the non-uniformity of the light beam, independently of the incident angle, and that of selecting a light source with a sufficiently rich and continuous emitted spectrum.
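The linear combination described above, in which the observed spectrum is the pointwise product of illuminant and surface reflectance, can be illustrated directly; a gap in the illuminant hides the corresponding portion of the reflectance (the four-band values are hypothetical):

```python
import numpy as np

# Observed spectrum = pointwise product of illuminant and surface reflectance.
illuminant = np.array([1.0, 1.0, 0.0, 1.0])  # no energy in the third band
reflectance = np.array([0.2, 0.8, 0.9, 0.4])
observed = illuminant * reflectance
print(observed)  # [0.2 0.8 0.  0.4] -- the 0.9 reflectance is hidden by the gap
```

However high the surface reflectance in the dark band, the observed signal there is zero, which is why a continuous emitted spectrum matters for multispectral work.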

  11. Archeological treasures protection based on early forest wildfire multi-band imaging detection system

    NASA Astrophysics Data System (ADS)

    Gouverneur, B.; Verstockt, S.; Pauwels, E.; Han, J.; de Zeeuw, P. M.; Vermeiren, J.

    2012-10-01

Various visible and infrared cameras have been tested for the early detection of wildfires to protect archeological treasures. This analysis was made possible by the EU Firesense project (FP7-244088). Although visible cameras are low cost and give good results for smoke detection during daytime, they fall short under bad visibility conditions. In order to improve the fire detection probability and reduce false alarms, several infrared bands were tested, ranging from the NIR to the LWIR. The SWIR and LWIR bands are helpful for locating the fire through smoke when there is a direct line of sight. Emphasis is also placed on physical and electro-optical system modeling for forest fire detection at short and longer ranges. Fusion of three bands (visible, SWIR, LWIR) is discussed at the pixel level for image enhancement and for fire detection.

  12. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation

    PubMed Central

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-01-01

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. On an airborne photoelectric platform, this method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system. The method involves segmenting regions in an IR image by saliency and identifying the target and background regions, then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich scene detail. The method also gives fusion results superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the requirements of IR-visible image fusion systems. PMID:28505137

  13. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.

    PubMed

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-05-15

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. On an airborne photoelectric platform, this method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system. The method involves segmenting regions in an IR image by saliency and identifying the target and background regions, then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich scene detail. The method also gives fusion results superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the requirements of IR-visible image fusion systems.
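
    The region-steered low-frequency fusion rule can be sketched as follows. This is a simplified stand-in for the paper's DTCWT-domain method, with made-up coefficient values and a basic keep-IR-in-target / average-in-background rule:

```python
import numpy as np

def fuse_lowfreq(ir_low, vis_low, target_mask):
    """Region-based low-frequency fusion: inside the (IR-salient)
    target region, keep the IR coefficients; in the background,
    average the two sources to preserve visible scene detail."""
    return np.where(target_mask, ir_low, 0.5 * (ir_low + vis_low))

# Toy 4x4 low-frequency bands and a segmented target region.
ir_low = np.full((4, 4), 10.0)
vis_low = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # "target" found by saliency segmentation

fused = fuse_lowfreq(ir_low, vis_low, mask)
assert fused[1, 1] == 10.0   # target region: IR dominates
assert fused[0, 0] == 6.0    # background: sources averaged
```

    In the actual method, the same segmentation result additionally drives per-region weights and adaptive phases for the high-frequency subbands.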

  14. Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion

    NASA Astrophysics Data System (ADS)

    Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei

    2018-06-01

    Infrared and visible light image fusion has been a hot topic in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies must register the images before fusion because two separate cameras are used, and the performance of registration techniques still leaves room for improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, coaxial light incident through a single lens is projected onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the process of signal acquisition and fusion. A simulation experiment, covering the entire process of the optical system, signal acquisition, and signal fusion, is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.

  15. The ExtraSolar Planetary Imaging Coronagraph

    NASA Astrophysics Data System (ADS)

    Clampin, M.; Lyon, R.

    2010-10-01

    The Extrasolar Planetary Imaging Coronagraph (EPIC) is a 1.65-m telescope employing a visible nulling coronagraph (VNC) to deliver high-contrast images of extrasolar system architectures. EPIC will survey the architectures of exosolar systems and investigate the physical nature of planets in these solar systems. The VNC features an inner working angle of ≤2λ/D and offers an ideal balance between performance and feasibility of implementation without sacrificing science return. The VNC does not demand unrealistic thermal stability from its telescope optics, achieving its primary mirror surface figure requires no new technology, and the pointing stability is within the state of the art. The EPIC mission will be launched into a drift-away orbit with a five-year mission lifetime.

  16. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    NASA Astrophysics Data System (ADS)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration, and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Since infrared cages and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate during the test due to the low luminous density. Moreover, some special instruments, such as satellite-borne infrared sensors, are sensitive to visible light, so supplemental lighting cannot be used during the test. To improve fine monitoring of the spacecraft and the presentation of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified CCD (ICCD) camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection technology is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment, and a molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate in a vacuum thermal environment of 1.33×10⁻³ Pa and 100 K shroud temperature in the space environment simulator, with its working temperature maintained at 5 °C during the two-day test. The night vision imaging system achieved a video resolving power of 60 lp/mm.

  17. Visualization of human inner ear anatomy with high-resolution MR imaging at 7T: initial clinical assessment.

    PubMed

    van der Jagt, M A; Brink, W M; Versluis, M J; Steens, S C A; Briaire, J J; Webb, A G; Frijns, J H M; Verbist, B M

    2015-02-01

    In many centers, MR imaging of the inner ear and auditory pathway performed on 1.5T or 3T systems is part of the preoperative work-up of cochlear implants. We investigated the applicability of clinical inner ear MR imaging at 7T and compared the visibility of inner ear structures and nerves within the internal auditory canal with images acquired at 3T. Thirteen patients with sensorineural hearing loss eligible for cochlear implantation underwent examinations on 3T and 7T scanners. Two experienced head and neck radiologists evaluated the 52 inner ear datasets. Twenty-four anatomic structures of the inner ear and 1 overall score for image quality were assessed by using a 4-point grading scale for the degree of visibility. The visibility of 11 of the 24 anatomic structures was rated higher on the 7T images. There was no significant difference in the visibility of 13 anatomic structures and the overall quality rating. A higher incidence of artifacts was observed in the 7T images. The gain in SNR at 7T yielded a more detailed visualization of many anatomic structures, especially delicate ones, despite the challenges accompanying MR imaging at a high magnetic field. © 2015 by American Journal of Neuroradiology.

  18. A portable near-infrared fluorescence image overlay device for surgical navigation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    McWade, Melanie A.

    2016-03-01

    A rise in the use of near-infrared (NIR) fluorescent dyes and intrinsic fluorescent markers for surgical guidance and tissue diagnosis has triggered the development of NIR fluorescence imaging systems. Because NIR wavelengths are invisible to the naked eye, instrumentation must allow surgeons to visualize areas of high fluorescence. Current NIR fluorescence imaging systems have limited ease-of-use because they display fluorescent information on remote monitors, requiring surgeons to divert attention away from the patient to identify the location of tissue fluorescence. Furthermore, some systems lack simultaneous visible light imaging, which provides valuable spatial context to fluorescence images. We have developed a novel, portable NIR fluorescence imaging approach for intraoperative surgical guidance that provides information for surgical navigation within the clinician's line of sight. The system utilizes a NIR CMOS detector to collect excited NIR fluorescence from the surgical field. Tissues with NIR fluorescence are overlaid with visible light to provide information on tissue margins directly on the surgical field. In vitro studies have shown that this versatile imaging system can be applied both to extrinsic NIR contrast agents such as indocyanine green and to weaker sources of biological fluorescence such as parathyroid gland tissue. This non-invasive, portable NIR fluorescence imaging system overlays an image directly on tissue, potentially allowing surgical decisions to be made more quickly and with greater ease-of-use than current NIR fluorescence imaging systems.

  19. Tunable optical coherence tomography in the infrared range using visible photons

    NASA Astrophysics Data System (ADS)

    Paterova, Anna V.; Yang, Hongzhi; An, Chengwu; Kalashnikov, Dmitry A.; Krivitsky, Leonid A.

    2018-04-01

    Optical coherence tomography (OCT) is an appealing technique for bio-imaging, medicine, and material analysis. For many applications, OCT in mid- and far-infrared (IR) leads to significantly more accurate results. Reported mid-IR OCT systems require light sources and photodetectors which operate in mid-IR range. These devices are expensive and need cryogenic cooling. Here, we report a proof-of-concept demonstration of a wavelength tunable IR OCT technique with detection of only visible range photons. Our method is based on the nonlinear interference of frequency correlated photon pairs. The nonlinear crystal, introduced in the Michelson-type interferometer, generates photon pairs with one photon in the visible and another in the IR range. The intensity of detected visible photons depends on the phase and loss of IR photons, which interact with the sample under study. This enables us to characterize sample properties and perform imaging in the IR range by detecting visible photons. The technique possesses broad wavelength tunability and yields a fair axial and lateral resolution, which can be tailored to the specific application. The work contributes to the development of versatile 3D imaging and material characterization systems working in a broad range of IR wavelengths, which do not require the use of IR-range light sources and photodetectors.

  20. Baby Picture of our Solar System

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [Figures removed for brevity; a poster version, visible-light image, and animation are available at the original site.]

    A rare, infrared view of a developing star and its flaring jets taken by NASA's Spitzer Space Telescope shows us what our own solar system might have looked like billions of years ago. In visible light, this star and its surrounding regions are completely hidden in darkness.

    Stars form out of spinning clouds, or envelopes, of gas and dust. As the envelopes flatten and collapse, jets of gas stream outward and a swirling disk of planet-forming material takes shape around the forming star. Eventually, the envelope and jets disappear, leaving a newborn star with a suite of planets. This process takes millions of years.

    The Spitzer image shows a developing sun-like star, called L1157, that is only thousands of years old (for comparison, our solar system is around 4.5 billion years old). Why is the young system only visible in infrared light? The answer has to do with the fact that stars are born in the darkest and dustiest corners of space, where little visible light can escape. But the heat, or infrared light, of an object can be detected through the dust.

    In Spitzer's infrared view of L1157, the star itself is hidden but its envelope is visible in silhouette as a thick black bar. While Spitzer can peer through this region's dust, it cannot penetrate the envelope itself. Hence, the envelope appears black. The thickest part of the envelope can be seen as the black line crossing the giant jets. This L1157 portrait provides the first clear look at a stellar envelope that has begun to flatten.

    The color white shows the hottest parts of the jets, with temperatures around 100 degrees Celsius (212 degrees Fahrenheit). Most of the material in the jets, seen in orange, is roughly zero degrees on the Celsius and Fahrenheit scales.

    The reddish haze all around the picture is dust. The white dots are other stars, mostly in the background.

    L1157 is located 800 light-years away in the constellation Cepheus.

    This image was taken by Spitzer's infrared array camera. Infrared light of 8 microns is colored red; 4.5-micron infrared light is green; and 3.6-micron infrared light is blue.

    The visible-light picture is from the Palomar Observatory-Space Telescope Science Institute Digitized Sky Survey. Blue visible light is blue; red visible light is green, and near-infrared light is red.

    The artist's animation begins by showing a dark and dusty corner of space where little visible light can escape. The animation then transitions to the infrared view taken by NASA's Spitzer Space Telescope, revealing the embryonic star and its dramatic jets.

  1. SU-C-209-02: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Clinical Patient Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhou, S; Cai, W; Hurwitz, M

    Purpose: We develop a method to generate time-varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). Methods: Motion models are derived by selecting one 4DCBCT phase as a reference image and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by optimizing the weights of the PCA eigenvectors iteratively through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. Results: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations, and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm, respectively. Conclusion: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration. For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be selected. If no other appropriate structures are visible, the images should include the diaphragm. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
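
    The PCA motion-model machinery described in the Methods can be sketched with synthetic data. The DVFs, mode count, and weight fitting below are illustrative assumptions; in the actual method the weights are optimized against measured cone-beam projections, not fitted to a known DVF:

```python
import numpy as np

# Toy stand-in for registration output: 8 respiratory phases, each a
# flattened displacement vector field (DVF) relative to the reference.
rng = np.random.default_rng(0)
breathing = np.sin(np.linspace(0, 2 * np.pi, 8))
basis_motion = rng.normal(size=300)          # dominant breathing mode
dvfs = np.outer(breathing, basis_motion)     # shape (phases, voxels*3)

# PCA via SVD on mean-centred DVFs.
mean_dvf = dvfs.mean(axis=0)
_, s, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
eigenvectors = vt[:2]                        # reduced motion model

# A "measured" deformation is reconstructed by least-squares fitting
# of the eigenvector weights.
target = mean_dvf + 0.7 * basis_motion
weights, *_ = np.linalg.lstsq(eigenvectors.T, target - mean_dvf,
                              rcond=None)
reconstruction = mean_dvf + weights @ eigenvectors
assert np.allclose(reconstruction, target, atol=1e-6)
```

    The point of the reduced basis is that any breathing state is described by a handful of weights, which is what makes iterative optimization against 2D projections tractable.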

  2. The VAMPIRES instrument: imaging the innermost regions of protoplanetary discs with polarimetric interferometry

    NASA Astrophysics Data System (ADS)

    Norris, Barnaby; Schworer, Guillaume; Tuthill, Peter; Jovanovic, Nemanja; Guyon, Olivier; Stewart, Paul; Martinache, Frantz

    2015-03-01

    Direct imaging of protoplanetary discs promises to provide key insight into the complex sequence of processes by which planets are formed. However, imaging the innermost region of such discs (a zone critical to planet formation) is challenging for traditional observational techniques (such as near-IR imaging and coronagraphy) due to the relatively long wavelengths involved and the area occulted by the coronagraphic mask. Here, we introduce a new instrument - Visible Aperture-Masking Polarimetric Interferometer for Resolving Exoplanetary Signatures (VAMPIRES) - which combines non-redundant aperture-masking interferometry with differential polarimetry to directly image this previously inaccessible innermost region. By using the polarization of light scattered by dust in the disc to provide precise differential calibration of interferometric visibilities and closure phases, VAMPIRES allows direct imaging at and beyond the telescope diffraction limit. Integrated into the SCExAO (Subaru Coronagraphic Extreme Adaptive Optics) system at the Subaru telescope, VAMPIRES operates at visible wavelengths (where polarization is high) while allowing simultaneous infrared observations conducted by HICIAO. Here, we describe the instrumental design and unique observing technique and present the results of the first on-sky commissioning observations, validating the excellent visibility and closure-phase precision which are then used to project expected science performance metrics.

  3. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) spectrometer design and performance

    NASA Technical Reports Server (NTRS)

    Macenka, Steven A.; Chrisp, Michael P.

    1987-01-01

    The development of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has been completed at JPL. This paper outlines the functional requirements of the spectrometer optics subsystem, and describes the spectrometer optical design. The optical subsystem performance is shown in terms of spectral modulation transfer functions, radial energy distributions, and system transmission at selected wavelengths for the four spectrometers. An outline of the spectrometer alignment is included.

  4. Face recognition in the thermal infrared domain

    NASA Astrophysics Data System (ADS)

    Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.

    2017-10-01

    Biometrics refers to unique human characteristics: each unique characteristic may be used to label and describe individuals and to recognize a person automatically based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. Most research on face recognition is based on visible light, and state-of-the-art face recognition systems operating in the visible spectrum achieve very high recognition accuracy under controlled environmental conditions. Thermal infrared (mid- and far-wavelength infrared) imagery seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents a unique heat signature that can be used for recognition, and the characteristics of thermal images offer advantages over visible-light images that can improve face recognition algorithms in several respects. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.

  5. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  6. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
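
    The core operation a CNN uses for feature extraction, convolving an image with a kernel and applying a nonlinearity, can be illustrated with a hand-written toy example. The image and kernel below are hypothetical and stand in for the authors' trained network:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D valid cross-correlation, the core operation a CNN
    layer applies to produce a feature map."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responding to intensity transitions, the kind
# of filter early CNN layers learn on visible-light or thermal frames.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                    # step edge starting at column 3
kernel = np.array([[-1.0, 1.0]])      # 1x2 horizontal gradient
fmap = np.maximum(conv2d_valid(image, kernel), 0.0)  # ReLU
assert fmap[0, 2] == 1.0              # strong response at the edge
assert fmap[0, 0] == 0.0              # flat region gives no response
```

    A trained CNN stacks many such learned filters with pooling and nonlinearities, producing the high-level features the classifier uses to separate male from female body images.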

  7. Contrast enhancement for in vivo visible reflectance imaging of tissue oxygenation.

    PubMed

    Crane, Nicole J; Schultz, Zachary D; Levin, Ira W

    2007-08-01

    Results are presented illustrating a straightforward algorithm to be used for real-time monitoring of oxygenation levels in blood cells and tissue based on the visible spectrum of hemoglobin. Absorbance images obtained from the visible reflection of white light through separate red and blue bandpass filters recorded by monochrome charge-coupled devices (CCDs) are combined to create enhanced images that suggest a quantitative correlation between the degree of oxygenated and deoxygenated hemoglobin in red blood cells. The filter bandpass regions are chosen specifically to mimic the color response of commercial 3-CCD cameras, representative of detectors with which the operating room laparoscopic tower systems are equipped. Adaptation of this filter approach is demonstrated for laparoscopic donor nephrectomies in which images are analyzed in terms of real-time in vivo monitoring of tissue oxygenation.
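
    The absorbance computation underlying such a two-filter scheme can be sketched as follows. The frame values, white reference, and ratio metric are illustrative assumptions rather than the authors' exact calibration:

```python
import numpy as np

def absorbance(reflectance, white_ref):
    """Beer-Lambert-style absorbance from a reflectance image and a
    white reference: A = -log10(R / R0)."""
    return -np.log10(reflectance / white_ref)

# Hypothetical red- and blue-filtered monochrome CCD frames (0-1 range).
red = np.array([[0.8, 0.4]])
blue = np.array([[0.5, 0.5]])
white = 1.0

a_red = absorbance(red, white)
a_blue = absorbance(blue, white)

# A simple ratio image enhances contrast between pixels whose red-band
# absorbance (sensitive to the oxy/deoxy haemoglobin difference) varies
# while the blue-band absorbance stays constant.
ratio = a_red / a_blue
assert ratio[0, 1] > ratio[0, 0]   # second pixel reads as less oxygenated
```

    Because it needs only two filtered monochrome frames and per-pixel arithmetic, this kind of algorithm can run in real time on standard 3-CCD laparoscopic hardware.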

  8. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Electrostatic x-ray imaging system. 892.1630 Section 892.1630 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  9. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Electrostatic x-ray imaging system. 892.1630 Section 892.1630 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  10. 21 CFR 892.1630 - Electrostatic x-ray imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Electrostatic x-ray imaging system. 892.1630 Section 892.1630 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... visible image. This generic type of device may include signal analysis and display equipment, patient and...

  11. Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera

    NASA Astrophysics Data System (ADS)

    Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu

    2016-09-01

    We performed an achromatic imaging experiment with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under both visible and infrared light. As a result, the WFC system's removal of chromatic aberration was successfully confirmed. Moreover, we fabricated a demonstration set assuming the use of a night vision camera in an automobile and showed the effectiveness of the WFC system.

  12. Image registration for a UV-Visible dual-band imaging system

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua

    2018-06-01

    The detection of corona discharge is an effective means of early fault diagnosis for power equipment. UV-visible dual-band imaging can detect and locate corona discharge spots in all-weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and the establishment of an affine transformation model. We report the algorithmic details of the UV image preprocessing and the affine transformation model, together with experiments verifying their feasibility. The denoising algorithm is based on a correlation operation between raw UV images and a continuous mask, and the transformation model is established using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. The average position displacement errors between the corona discharge and the equipment fault at distances in the 2.5-20 m range are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resulting protocol is not only expected to improve the efficiency and accuracy of such imaging systems for locating corona discharge spots, but may also provide a more general reference for the calibration of various dual-band imaging systems in practice.
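
    An affine transformation model of the kind used here can be estimated from matched corner features by least squares. The matched points and the underlying transform below are hypothetical, chosen only to illustrate the fitting step:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine model dst ≈ [src, 1] @ params, as might
    be estimated from corner features matched between the two bands."""
    n = src.shape[0]
    design = np.hstack([src, np.ones((n, 1))])   # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return params                                # (3, 2): A rows and t

# Hypothetical matched corners: the "UV" points are the "visible"
# points scaled by 2 and shifted by (10, 5).
visible_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
uv_pts = 2.0 * visible_pts + np.array([10.0, 5.0])

params = fit_affine(visible_pts, uv_pts)
mapped = np.hstack([visible_pts, np.ones((4, 1))]) @ params
assert np.allclose(mapped, uv_pts, atol=1e-8)   # model recovered
```

    With more than three correspondences the least-squares fit also averages out corner-localization noise, which is what makes a statistical selection of good matches worthwhile.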

  13. UGS video target detection and discrimination

    NASA Astrophysics Data System (ADS)

    Roberts, G. Marlon; Fitzgerald, James; McCormack, Michael; Steadman, Robert; Vitale, Joseph D.

    2007-04-01

    This project focuses on developing electro-optic algorithms which rank images by their likelihood of containing vehicles and people. These algorithms have been applied to images obtained from Textron's Terrain Commander 2 (TC2) Unattended Ground Sensor system. The TC2 is a multi-sensor surveillance system used in military applications. It combines infrared, acoustic, seismic, magnetic, and electro-optic sensors to detect nearby targets. When targets are detected by the seismic and acoustic sensors, the system is triggered and images are taken in the visible and infrared spectrum. The original Terrain Commander system occasionally captured and transmitted an excessive number of images, sometimes triggered by undesirable targets such as swaying trees. This wasted communications bandwidth, increased power consumption, and resulted in a large amount of end-user time being spent evaluating unimportant images. The algorithms discussed here help alleviate these problems. They are currently optimized for infrared images, which give the best visibility in a wide range of environments, but could be adapted to visible imagery as well. It is important that the algorithms be robust, with minimal dependency on user input. They should be effective when tracking varying numbers of targets of different sizes and orientations, despite the low resolution of the images used. Most importantly, the algorithms must be suitable for real-time implementation on a low-power processor, enabling frame rates of 2 Hz for effective surveillance operations. Throughout the project we implemented several algorithms and used an appropriate methodology to compare their performance quantitatively; they are discussed in this paper.

  14. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and visible CCD camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, improving both the image quality of the large-field-of-view collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, and it should find a good market.
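
    The multiple-frame averaging step can be sketched as follows. The frame count, scene, and noise level are made-up illustrative values, but the roughly sqrt(N) reduction in random-noise standard deviation that the example demonstrates is the general principle:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.full((32, 32), 100.0)              # noiseless target scene
sigma = 5.0                                   # per-frame random noise
frames = truth + rng.normal(0.0, sigma, size=(64, 32, 32))

single_err = np.std(frames[0] - truth)        # noise in one frame
averaged = frames.mean(axis=0)                # multiple-frame averaging
avg_err = np.std(averaged - truth)            # noise after averaging

# Averaging N frames cuts the random-noise std by roughly sqrt(N);
# here N = 64, so the residual noise is about sigma / 8.
assert avg_err < single_err / 4
```

    Averaging suppresses only zero-mean random noise; fixed-pattern errors in the collimator or detector are unaffected and must be handled by calibration.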

  15. Observations of the earth using nighttime visible imagery

    NASA Technical Reports Server (NTRS)

    Foster, J. L.

    1983-01-01

    The earth as viewed from space in visible light at night reveals some features not easily discernible during the day, such as aurora, forest fires, city lights, and gas flares. In addition, features having a high albedo, such as snow and ice, can be identified on many moonlit nights nearly as well as they can in sunlight. The Air Force DMSP satellites have been operating in the visible wavelengths at night since the mid-1960s. Almost all other satellites having optical sensors are incapable of imaging at night. Imaging systems having improved light sensitivity in the visible portion of the spectrum should be considered when planning future earth resources satellite missions in order to utilize nighttime as well as daytime visual observations.

  16. DETECTION AND IDENTIFICATION OF TOXIC AIR POLLUTANTS USING FIELD PORTABLE AND AIRBORNE REMOTE IMAGING SYSTEMS

    EPA Science Inventory

    Remote sensing technologies are a class of instrument and sensor systems that include laser imageries, imaging spectrometers, and visible to thermal infrared cameras. These systems have been successfully used for gas phase chemical compound identification in a variety of field e...

  17. Polarization-difference imaging: a biologically inspired technique for observation through scattering media

    NASA Astrophysics Data System (ADS)

    Rowe, M. P.; Pugh, E. N., Jr.; Tyo, J. S.; Engheta, N.

    1995-03-01

    Many animals have visual systems that exploit the polarization of light, and some of these systems are thought to compute difference signals in parallel from arrays of photoreceptors optimally tuned to orthogonal polarizations. We hypothesize that such polarization-difference systems can improve the visibility of objects in scattering media by serving as common-mode rejection amplifiers that reduce the effects of background scattering and amplify the signal from targets whose polarization-difference magnitude is distinct from the background. We present experimental results obtained with a target in a highly scattering medium, demonstrating that a manmade polarization-difference system can render readily visible surface features invisible to conventional imaging.
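
    The common-mode rejection idea reduces to a pixel-wise subtraction of two orthogonally polarized images. A toy sketch (an illustration of the principle, not the authors' apparatus) in which the background scatter is identical in both channels:

```python
import numpy as np

def polarization_difference(i_par, i_perp):
    """Common-mode rejection: background light that is equal in the two
    orthogonal polarization channels cancels in the difference, while a
    target with a distinct polarization-difference magnitude survives."""
    return i_par.astype(np.float64) - i_perp.astype(np.float64)

# Toy scene: identical unpolarized backscatter in both channels, plus a
# target patch that returns slightly more parallel-polarized light.
rng = np.random.default_rng(1)
background = 200.0 + rng.normal(0.0, 2.0, (32, 32))
i_par, i_perp = background.copy(), background.copy()
i_par[10:20, 10:20] += 5.0

pd_image = polarization_difference(i_par, i_perp)
# Background cancels to ~0; the target patch stands out at +5.
```

The target is only ~2.5% of the background intensity in either channel alone, yet dominates the difference image.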

  18. High Contrast Imaging in the Visible: First Experimental Results at the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Pedichini, F.; Stangalini, M.; Ambrosino, F.; Puglisi, A.; Pinna, E.; Bailey, V.; Carbonaro, L.; Centrone, M.; Christou, J.; Esposito, S.; Farinato, J.; Fiore, F.; Giallongo, E.; Hill, J. M.; Hinz, P. M.; Sabatini, L.

    2017-08-01

    In 2014 February, the System for High contrast And coronography from R to K at VISual bands (SHARK-VIS) Forerunner, a high contrast experimental imager operating at visible wavelengths, was installed at the Large Binocular Telescope (LBT). Here we report on the first results obtained by recent on-sky tests. These results show the extremely good performance of the LBT Extreme Adaptive Optics (ExAO) system at visible wavelengths, both in terms of spatial resolution and contrast achieved. Similarly to what was done by Amara & Quanz (2012), we used the SHARK-VIS Forerunner data to quantitatively assess the contrast enhancement. This is done by injecting several different synthetic faint objects in the acquired data and applying the angular differential imaging (ADI) technique. A contrast of the order of 5 × 10-5 is obtained at 630 nm for angular separations from the star larger than 100 mas. These results are discussed in light of the future development of SHARK-VIS and compared to those obtained by other high contrast imagers operating at similar wavelengths.
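
    The ADI post-processing used for the contrast estimate follows a standard recipe: subtract a median PSF model, derotate each residual by its parallactic angle, and recombine. A schematic version (a generic sketch, not the SHARK-VIS pipeline) might look like:

```python
import numpy as np
from scipy.ndimage import rotate

def adi_combine(frames, parallactic_angles):
    """Classical ADI: subtract the median PSF estimate (the quasi-static
    stellar halo), derotate each residual to a common sky orientation,
    and median-combine so that a field-fixed companion adds up."""
    stack = np.asarray(frames, dtype=np.float64)
    psf_model = np.median(stack, axis=0)      # halo estimate
    residuals = stack - psf_model             # halo cancels, companion survives
    derotated = [rotate(r, -a, reshape=False, order=1)
                 for r, a in zip(residuals, parallactic_angles)]
    return np.median(derotated, axis=0)

# Sanity check: a perfectly static halo cancels to zero residual flux.
y, x = np.mgrid[-32:32, -32:32]
halo = np.exp(-(x**2 + y**2) / 50.0)
frames = [halo] * 5
result = adi_combine(frames, parallactic_angles=[0, 5, 10, 15, 20])
```

An injected synthetic companion, which moves with the field rotation, would survive this subtraction and stack up after derotation.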

  19. Structural and functional human retinal imaging with a fiber-based visible light OCT ophthalmoscope (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chong, Shau Poh; Bernucci, Marcel T.; Borycki, Dawid; Radhakrishnan, Harsha; Srinivasan, Vivek J.

    2017-02-01

    Visible light is absorbed by intrinsic chromophores such as photopigment, melanin, and hemoglobin, and scattered by subcellular structures, all of which are potential retinal disease biomarkers. Recently, high-resolution quantitative measurement and mapping of hemoglobin concentrations was demonstrated using visible light Optical Coherence Tomography (OCT). Yet, most high-resolution visible light OCT systems adopt free-space, or bulk, optical setups, which could limit clinical applications. Here, the construction of a multi-functional fiber-optic OCT system for human retinal imaging with <2.5 micron axial resolution is described. A detailed noise characterization of two supercontinuum light sources with differing pulse repetition rates is presented. The higher repetition rate, lower noise, source is found to enable a sensitivity of 87 dB with 0.1 mW incident power at the cornea and a 98 microsecond exposure time. Using a broadband, asymmetric, fused single-mode fiber coupler designed for visible wavelengths, the sample arm is integrated into an ophthalmoscope platform, rendering it portable and suitable for clinical use. In vivo anatomical, Doppler, and spectroscopic imaging of the human retina is further demonstrated using a single oversampled B-scan. For spectroscopic fitting of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb) content in the retinal vessels, a noise bias-corrected absorbance spectrum is estimated using a sliding short-time Fourier transform of the complex OCT signal and fit using a model of light absorption and scattering. This yielded path length (L) times molar concentration, LCHbO2 and LCHb. Based on these results, we conclude that high-resolution visible light OCT has potential for depth-resolved functional imaging of the eye.
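
    Once the absorbance spectrum is estimated, the final fitting step is essentially a linear least-squares problem. A toy version (with made-up extinction coefficients as stand-ins for the published HbO2/Hb spectra, and the scattering term omitted) illustrates the recovery of the path-length-concentration products:

```python
import numpy as np

# Hypothetical extinction coefficients on a coarse wavelength grid --
# illustrative stand-ins, not the real HbO2/Hb spectra.
wavelengths = np.array([520.0, 540.0, 560.0, 580.0, 600.0])  # nm
eps_hbo2 = np.array([9.0, 14.0, 8.0, 15.0, 1.0])
eps_hb = np.array([10.0, 12.0, 13.0, 9.0, 3.0])

def fit_hemoglobin(absorbance):
    """Least-squares fit of A(lambda) = LC_HbO2*eps_HbO2 + LC_Hb*eps_Hb,
    returning the path-length times molar-concentration products."""
    design = np.column_stack([eps_hbo2, eps_hb])
    coeffs, *_ = np.linalg.lstsq(design, absorbance, rcond=None)
    return coeffs

# Forward-simulate a noiseless absorbance spectrum, then recover LC values.
measured = 0.8 * eps_hbo2 + 0.3 * eps_hb
lc_hbo2, lc_hb = fit_hemoglobin(measured)
```

In practice the noise-bias correction and the scattering model described in the abstract enter before this step.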

  20. JView Visualization for Next Generation Air Transportation System

    DTIC Science & Technology

    2011-01-01

    hardware graphics acceleration. JView relies on concrete Object Oriented Design (OOD) and programming techniques to provide a robust and venue non...visibility priority of a texture set. A good example of this is you have translucent images that should always be visible over the other textures...elements present in the scene. • Capture Alpha. Allows the alpha color channel ( translucency ) to be saved when capturing images or movies of a 3D scene

  1. Characteristics and performance of a micro-MOSFET: an "imageable" dosimeter for image-guided radiotherapy.

    PubMed

    Rowbottom, Carl G; Jaffray, David A

    2004-03-01

    The performance and characteristics of a miniature metal oxide semiconductor field effect transistor (micro-MOSFET) detector were investigated for its potential application to integral system tests for image-guided radiotherapy. In particular, the position of peak response to a slit of radiation was determined for the three principal axes to define the coordinates of the center of the active volume of the detector. This was compared to the radiographically determined center of the micro-MOSFET visible using cone-beam CT. Additionally, the angular sensitivity of the micro-MOSFET was measured. The micro-MOSFETs are clearly visible on the cone-beam CT images and produce no artifacts. The center of the active volume of the micro-MOSFET aligned with the center of the visible micro-MOSFET on the cone-beam CT images for the x and y axes to within 0.20 mm and 0.15 mm, respectively. In z, the long axis of the detector, the peak response was found to be 0.79 mm from the tip of the visible micro-MOSFET. Repeat experiments verified that the position of the peak response of the micro-MOSFET was reproducible. The micro-MOSFET response over 360 degrees of rotation in the axial plane was +/-2%, consistent with values quoted by the manufacturer. The location of the active volume of the micro-MOSFETs under investigation can be determined from the centroid of the visible micro-MOSFET on cone-beam CT images. The CT centroid position corresponds closely to the center of the detector response to radiation. The ability to use the cone-beam CT to locate the active volume to within 0.20 mm allows their use in an integral system test for the imaging of and dose delivery to a phantom containing an array of micro-MOSFETs. The small angular sensitivity allows the investigation of noncoplanar beams.

  2. Application of off-line image processing for optimization in chest computed radiography using a low cost system.

    PubMed

    Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato

    2015-03-08

    The objective of this study was to improve the visibility of anatomical details by applying off-line postimage processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were implemented on sample images, and their visual appearances were confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The means and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists.
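
    The statistical comparison described above maps directly onto SciPy's implementation of the Mann-Whitney U-test. The visibility scores below are hypothetical placeholders, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-image visibility scores for the same images processed
# with a developed technique versus the default algorithm.
developed = [4.0, 4.5, 3.5, 4.0, 4.5, 5.0, 4.0, 3.5]
default = [3.0, 3.5, 3.0, 2.5, 3.5, 4.0, 3.0, 3.0]

stat, p_value = mannwhitneyu(developed, default, alternative="two-sided")
# A p_value below the chosen significance level indicates the two
# processing pipelines yield different visibility distributions.
```

The Mann-Whitney test is appropriate here because observer scores are ordinal and not assumed to be normally distributed.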

  3. Application of off‐line image processing for optimization in chest computed radiography using a low cost system

    PubMed Central

    Msaki, Peter; Padovani, Renato

    2015-01-01

    The objective of this study was to improve the visibility of anatomical details by applying off‐line postimage processing in chest computed radiography (CR). Four spatial domain‐based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were implemented on sample images, and their visual appearances were confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The means and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann‐Whitney U‐test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005≤p≤0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60≤kVp≤70. However, there was no improvement for images acquired using 102≤kVp≤107 (0.127≤p≤0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists. PACS number: 87.59.−e, 87.59.−B, 87.59.−bd PMID:26103165

  4. ARC-1989-A89-7015

    NASA Image and Video Library

    1989-08-21

    Range: 4.8 million km (3 million miles). P-34648 This Voyager 2 image, a sixty-one-second exposure shot through clear filters, shows Neptune's rings. The Voyager cameras were programmed to make a systematic search of the entire ring system for new material. The previously discovered ring arc is visible as a long bright streak at the bottom of the image. Extending beyond the bright arc is a much fainter component which follows the arc in its orbit. This faint material was also visible leading the ring arc and, in total, covers at least half of the orbit before it becomes too faint to identify. Also visible in this image is a continuous ring of faint material previously identified as a possible ring arc by Voyager. This continuous ring is located just outside the orbit of the moon 1989N3, which was also discovered by Voyager. This moon is visible as a streak in the lower left; the smear of 1989N3 is due to its own orbital motion during the exposure. Extreme computer processing of this image was used to enhance the extremely faint features of Neptune's moon system. The dark area surrounding the moon as well as the bright corners are due to this special processing.

  5. Binocular Multispectral Adaptive Imaging System (BMAIS)

    DTIC Science & Technology

    2010-07-26

    system for pilots that adaptively integrates shortwave infrared (SWIR), visible, near ‐IR (NIR), off‐head thermal, and computer symbology/imagery into...respective areas. BMAIS is a binocular helmet mounted imaging system that features dual shortwave infrared (SWIR) cameras, embedded image processors and...algorithms and fusion of other sensor sites such as forward looking infrared (FLIR) and other aircraft subsystems. BMAIS is attached to the helmet

  6. Cerberus Fossae

    NASA Image and Video Library

    2014-01-24

    The fractures in this image are part of a large system of fractures called Cerberus Fossae. Athabasca Valles is visible in the lower right corner of the image, as seen by NASA's 2001 Mars Odyssey spacecraft.

  7. 21 CFR 892.1660 - Non-image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Non-image-intensified fluoroscopic x-ray system. 892.1660 Section 892.1660 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... of x-radiation into a visible image. This generic type of device may include signal analysis and...

  8. 21 CFR 892.1660 - Non-image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Non-image-intensified fluoroscopic x-ray system. 892.1660 Section 892.1660 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... of x-radiation into a visible image. This generic type of device may include signal analysis and...

  9. 21 CFR 892.1660 - Non-image-intensified fluoroscopic x-ray system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Non-image-intensified fluoroscopic x-ray system. 892.1660 Section 892.1660 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... of x-radiation into a visible image. This generic type of device may include signal analysis and...

  10. Infrared Sky Imager (IRSI) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Victor R.

    2016-04-01

    The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.
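
    Fractional sky cover itself is a simple ratio once pixels are classified. A minimal threshold-based sketch (the IRSI's actual cloud detection is more sophisticated; the temperatures and threshold below are illustrative assumptions):

```python
import numpy as np

def fractional_sky_cover(brightness_temp, clear_sky_threshold):
    """Classify pixels warmer than a clear-sky brightness-temperature
    threshold as cloudy (clouds emit more than cold clear sky in the
    mid-IR window) and return the cloudy fraction of the image."""
    sky = np.asarray(brightness_temp, dtype=np.float64)
    cloudy = sky > clear_sky_threshold
    return cloudy.sum() / sky.size

# Toy hemispheric IR image: cold clear sky at 230 K, a warm cloud band at 240 K.
sky = np.full((100, 100), 230.0)
sky[:25, :] = 240.0
cover = fractional_sky_cover(sky, clear_sky_threshold=235.0)  # -> 0.25
```

A real retrieval must also account for the varying clear-sky radiance with elevation angle and atmospheric water vapor.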

  11. The PALM-3000 high-order adaptive optics system for Palomar Observatory

    NASA Astrophysics Data System (ADS)

    Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa

    2008-07-01

    Deployed as a multi-user shared facility on the 5.1 meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368-active-actuator woofer and a 349-active-actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible light imager.

  12. X-Ray Imaging System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The FluoroScan Imaging System is a high-resolution, low-radiation device for viewing stationary or moving objects. It resulted from NASA technology developed for x-ray astronomy and applied at Goddard to a low-intensity x-ray imaging scope. FluoroScan Imaging Systems, Inc. (formerly HealthMate, Inc.), a NASA licensee, further refined the FluoroScan System. It is used for examining fractures, placement of catheters, and in veterinary medicine. Its major components include an x-ray generator, scintillator, visible light image intensifier, and video display. It is small, light, and maneuverable.

  13. [Development of a Surgical Navigation System with Beam Split and Fusion of the Visible and Near-Infrared Fluorescence].

    PubMed

    Yang, Xiaofeng; Wu, Wei; Wang, Guoan

    2015-04-01

    This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging. In vivo fluorescence excitation technology, multi-channel spectral camera technology, and image fusion software were used. A visible and near-infrared ring LED excitation source, multi-channel band-pass filters, a spectral camera with two-CCD optical sensor technology, and a computer system were integrated, and as a result a new surgical optical navigation system was successfully developed. When a near-infrared fluorescent agent is injected, the system can display anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes, and tumor margins that the surgeon cannot find with the naked eye intraoperatively. This will effectively guide the surgeon in removing tumor tissue and significantly improve the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.

  14. VLC-based indoor location awareness using LED light and image sensors

    NASA Astrophysics Data System (ADS)

    Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    Recently, indoor LED lighting has been considered for constructing green infrastructure, saving energy while additionally providing LED-IT convergence services such as visible light communication (VLC) based location awareness and navigation. In a large, complex shopping mall, for example, location awareness for navigating to a destination is an important issue. However, conventional navigation using GPS does not work indoors, and alternative location services based on WLAN suffer from low positioning accuracy. For example, it is difficult to estimate height exactly; if the height error is greater than the height between floors, it may cause serious problems. Conventional navigation is therefore inappropriate for indoor use. A possible alternative for indoor navigation is a VLC-based location awareness scheme. Because indoor LED infrastructure will be installed for lighting in any case, it offers the possibility of relatively high position-estimation accuracy when combined with VLC technology. In this paper, we present a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane. By using two or more image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed VLC-based positioning system using visible LED light and image sensors.
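
    A common geometric formulation of image-sensor positioning is ray intersection: each sensor defines a ray from its lens center toward the LED's projection, and the transmitter estimate is the least-squares closest point to all rays. The sketch below illustrates that formulation under stated assumptions (known lens positions, ideal noise-free rays); it is not the authors' exact method:

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares point minimizing distance to a set of 3-D rays.
    Each ray is origin + t*direction; sum the projectors orthogonal to
    each ray direction and solve the resulting 3x3 normal equations."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two hypothetical image sensors at known lens positions observe one LED.
led = np.array([2.0, 3.0, 2.5])
lens_positions = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])]
# Ray directions as each sensor would recover them from its image plane.
rays = [led - o for o in lens_positions]
estimate = closest_point_to_rays(lens_positions, rays)  # approx. the LED position
```

With noisy rays the same solve returns the point minimizing the sum of squared distances to all rays, which is why adding sensors tightens the position estimate.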

  15. Remote sensing of multiple vital signs using a CMOS camera-equipped infrared thermography system and its clinical application in rapidly screening patients with suspected infectious diseases.

    PubMed

    Sun, Guanghao; Nakayama, Yosuke; Dagdanpurev, Sumiyakhand; Abe, Shigeto; Nishimura, Hidekazu; Kirimoto, Tetsuo; Matsui, Takemi

    2017-02-01

    Infrared thermography (IRT) is used to screen febrile passengers at international airports, but it suffers from low sensitivity. This study explored the application of a combined visible and thermal image processing approach that uses a CMOS camera equipped with IRT to remotely sense multiple vital signs and screen patients with suspected infectious diseases. An IRT system that produced visible and thermal images was used for image acquisition. The subjects' respiration rates were measured by monitoring temperature changes around the nasal areas on thermal images; facial skin temperatures were measured simultaneously. Facial blood circulation causes tiny color changes in visible facial images that enable the determination of the heart rate. A logistic regression discriminant function predicted the likelihood of infection within 10 s, based on the measured vital signs. Sixteen patients with an influenza-like illness and 22 control subjects participated in a clinical test at a clinic in Fukushima, Japan. The vital-sign-based IRT screening system had a sensitivity of 87.5% and a negative predictive value of 91.7%; these values are higher than those of conventional fever-based screening approaches. Multiple vital-sign-based screening efficiently detected patients with suspected infectious diseases. It offers a promising alternative to conventional fever-based screening. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
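
    The discriminant step can be sketched with an off-the-shelf logistic regression. The vital-sign values and labels below are invented for illustration only and are not the clinical data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical vital signs: [skin temp (deg C), heart rate (bpm),
# respiration rate (/min)]; label 1 = influenza-like illness, 0 = control.
X = np.array([
    [38.5, 98, 22], [38.9, 105, 24], [38.2, 95, 20], [39.1, 110, 26],
    [36.6, 70, 14], [36.8, 72, 15], [36.5, 68, 13], [36.9, 75, 16],
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)
# Estimated probability of infection for a new subject's vital signs.
p_infected = model.predict_proba([[38.7, 100, 23]])[0, 1]
```

The appeal of combining three vital signs, as the study argues, is that no single measurement (e.g. skin temperature alone) has to carry the whole decision.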

  16. The Use of Gamma-Ray Imaging to Improve Portal Monitor Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Collins, Jeff; Fabris, Lorenzo

    2008-01-01

    We have constructed a prototype, rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. Our Roadside Tracker uses automated target acquisition and tracking (TAT) software to identify and track vehicles in visible light images. The field of view of the visible camera overlaps with and is calibrated to that of a one-dimensional gamma-ray imager. The TAT code passes information on when vehicles enter and exit the system field of view and when they cross gamma-ray pixel boundaries. Based on this information, the gamma-ray imager "harvests" the gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. In this fashion we are able to generate vehicle-specific radiation signatures and avoid source confusion problems that plague nonimaging approaches to the same problem.

  17. HICO and RAIDS Experiment Payload - Hyperspectral Imager for the Coastal Ocean

    NASA Technical Reports Server (NTRS)

    Corson, Mike

    2009-01-01

    HICO and RAIDS Experiment Payload - Hyperspectral Imager For The Coastal Ocean (HREP-HICO) will operate a visible and near-infrared (VNIR) Maritime Hyperspectral Imaging (MHSI) system, to detect, identify and quantify coastal geophysical features from the International Space Station.

  18. The Airborne Visible / Infrared Imaging Spectrometer AVIS: Design, Characterization and Calibration.

    PubMed

    Oppelt, Natascha; Mauser, Wolfram

    2007-09-14

    The Airborne Visible / Infrared Imaging Spectrometer AVIS is a hyperspectral imager designed for environmental monitoring purposes. The sensor, which was constructed entirely from commercially available components, has been successfully deployed during several experiments between 1999 and 2007. We describe the instrument design and present the results of laboratory characterization and calibration of the system's second generation, AVIS-2, which is currently being operated. The processing of the data is described and examples of remote sensing reflectance data are presented.

  19. Iao: The New Adaptive Optics Visible Imaging and Photometric System for AEOS

    DTIC Science & Technology

    2008-09-01

    observations of binary stars, asteroids and planets such as Mercury and Mars [2,3,4]. The Visible Imager is also used to take time resolved photometry ...role it takes high spatial resolution imagery of resolved targets. These targets are primarily low Earth orbiting satellites acquired for the...albedo pattern: Comparing the AEOS and TES data sets [5] D.T. Hall et al. 2007, Journal of Spacecraft and Rockets, 44, 910-919, Time - Resolved I-Band

  20. SPICA, Stellar Parameters and Images with a Cophased Array: a 6T visible combiner for the CHARA array.

    PubMed

    Mourard, Denis; Bério, Philippe; Perraut, Karine; Clausse, Jean-Michel; Creevey, Orlagh; Martinod, Marc-Antoine; Meilland, Anthony; Millour, Florentin; Nardetto, Nicolas

    2017-05-01

    High angular resolution studies of stars in the optical domain have highly progressed in recent years. After the results obtained with the visible instrument Visible spEctroGraph and polArimeter (VEGA) on the Center for High Angular Resolution Astronomy (CHARA) array and the recent developments on adaptive optics and fibered interferometry, we have started the design and study of a new six-telescope visible combiner with single-mode fibers. It is designed as a low spectral resolution instrument for the measurement of the angular diameter of stars to make a major step forward in terms of magnitude and precision with respect to the present situation. For a large sample of bright stars, a medium spectral resolution mode will allow unprecedented spectral imaging of stellar surfaces and environments for higher accuracy on stellar/planetary parameters. To reach the ultimate performance of the instrument in terms of limiting magnitude (Rmag≃8 for diameter measurements and Rmag≃4 to 5 for imaging), Stellar Parameters and Images with a Cophased Array (SPICA) includes the development of a dedicated fringe tracking system in the H band to reach "long" (200 ms to 30 s) exposures of the fringe signal in the visible.

  1. Spectroscopic imaging of limiter heat and particle fluxes and the resulting impurity sources during Wendelstein 7-X startup plasmas.

    PubMed

    Stephey, L; Wurden, G A; Schmitz, O; Frerichs, H; Effenberg, F; Biedermann, C; Harris, J; König, R; Kornejew, P; Krychowiak, M; Unterberg, E A

    2016-11-01

    A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Both systems together provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements at several points on the W7-X vacuum vessel to yield wall recycling fluxes. The resulting photon flux from both the visible camera and filterscopes can then be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and wall particle flux, both of which will ultimately be used to infer the complete particle balance and particle confinement time τP.

  2. Gimbaled multispectral imaging system and method

    DOEpatents

    Brown, Kevin H.; Crollett, Seferino; Henson, Tammy D.; Napier, Matthew; Stromberg, Peter G.

    2016-01-26

    A gimbaled multispectral imaging system and method is described herein. In a general embodiment, the gimbaled multispectral imaging system has a cross support that defines a first gimbal axis and a second gimbal axis, wherein the cross support is rotatable about the first gimbal axis. The gimbaled multispectral imaging system comprises a telescope that is fixed to an upper end of the cross support, such that rotation of the cross support about the first gimbal axis alters the tilt of the telescope. The gimbaled multispectral imaging system includes optics that facilitate on-gimbal detection of visible light and off-gimbal detection of infrared light.

  3. [Perceptual sharpness metric for visible and infrared color fusion images].

    PubMed

    Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan

    2012-12-01

    For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. First, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under given viewing conditions. Second, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in the region of interest (containing image details and edges) in the fusion image to evaluate perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides better predictions, more closely matched to human perceptual evaluations, than five existing sharpness (blur) metrics for color images. The proposed metric can effectively evaluate the perceptual sharpness of color fusion images.
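
    The CSF prefiltering stage can be sketched with the classic Mannos-Sakrison approximation, one common CSF model; the paper's exact CSF form and viewing-condition handling may differ, and the viewing-geometry parameter below is an assumption:

```python
import numpy as np

def mannos_sakrison_csf(f):
    """Mannos-Sakrison CSF, a standard approximation of human contrast
    sensitivity as a function of spatial frequency f (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_filter(image, pixels_per_degree):
    """Attenuate frequency components the eye is insensitive to under
    the given viewing geometry (pixels subtended per visual degree)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * pixels_per_degree   # cycles/degree, vertical
    fx = np.fft.fftfreq(w) * pixels_per_degree   # cycles/degree, horizontal
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    weights = mannos_sakrison_csf(f)
    weights /= weights.max()                     # peak sensitivity -> 1
    return np.real(np.fft.ifft2(np.fft.fft2(image) * weights))

filtered = csf_filter(np.random.default_rng(2).random((32, 32)), 32.0)
```

After this weighting, contrast computed on the filtered image reflects only the frequency content the observer can actually resolve.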

  4. Comparison between visible/ NIR spectroscopy and hyperspectral imaging for detecting surface contaminants on poultry carcasses

    USDA-ARS?s Scientific Manuscript database

    The U. S. Department of Agriculture, Agricultural Research Service has been developing a method and system to detect fecal contamination on processed poultry carcasses with hyperspectral and multispectral imaging systems. The patented method utilizes a three step approach to contaminant detection. S...

  5. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    PubMed Central

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-01-01

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using the reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447

  6. 2001 Mars Odyssey Images Earth (Visible and Infrared)

    NASA Technical Reports Server (NTRS)

    2001-01-01

    2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) acquired these images of the Earth using its visible and infrared cameras as it left the Earth. The visible image shows the thin crescent viewed from Odyssey's perspective. The infrared image was acquired at exactly the same time, but shows the entire Earth using the infrared camera's 'night-vision' capability. In visible light the instrument sees only reflected sunlight and therefore sees nothing on the night side of the planet. In infrared light the camera observes the light emitted by all regions of the Earth. The coldest ground temperatures seen correspond to the nighttime regions of Antarctica; the warmest temperatures occur in Australia. The low temperature in Antarctica is minus 50 degrees Celsius (minus 58 degrees Fahrenheit); the high temperature at night in Australia is 9 degrees Celsius (48.2 degrees Fahrenheit). These temperatures agree remarkably well with observed temperatures of minus 63 degrees Celsius at Vostok Station in Antarctica and 10 degrees Celsius in Australia. The images were taken at a distance of 3,563,735 kilometers (more than 2 million miles) on April 19, 2001, as the Odyssey spacecraft left Earth.

  7. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
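
    The paper's second registration method, geometric correction from user-selected control points plus regression analysis, reduces in the simplest case to a least-squares affine fit. A minimal numpy sketch under that assumption (the authors' actual spatial transformation may be more general than affine):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping source control points to
    destination control points: solves dst ≈ A @ [x, y, 1] for the
    2x3 matrix A, one regression per output coordinate."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3 design matrix
    # lstsq handles the over-determined case (more than 3 control points)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                     # 2 x 3 affine matrix

def apply_affine(A, pts):
    """Map points through the fitted transform."""
    pts = np.asarray(pts, float)
    return pts @ A[:, :2].T + A[:, 2]
```

    With at least three non-collinear control points the fit is exact for a truly affine misalignment; extra points average out operator selection error.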

  8. Space shuttle visual simulation system design study

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The current and near-future state-of-the-art in visual simulation equipment technology is related to the requirements of the space shuttle visual system. Image source, image sensing, and displays are analyzed on a subsystem basis, and the principal conclusions are used in the formulation of a recommended baseline visual system. Perceptibility and visibility are also analyzed.

  9. Integrated infrared and visible image sensors

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

    2000-01-01

    Semiconductor imaging devices integrating an array of visible detectors and another array of infrared detectors into a single module to simultaneously detect both the visible and infrared radiation of an input image. The visible detectors and the infrared detectors may be formed either on two separate substrates or on the same substrate by interleaving visible and infrared detectors.

  10. The eyes of LITENING

    NASA Astrophysics Data System (ADS)

    Moser, Eric K.

    2016-05-01

    LITENING is an airborne system-of-systems providing long-range imaging, targeting, situational awareness, target tracking, weapon guidance, and damage assessment, incorporating a laser designator and laser range finders, as well as non-thermal and thermal imaging systems, with multi-sensor boresight. Robust operation is at a premium, and subsystems are partitioned to modular, swappable line-replaceable-units (LRUs) and shop-replaceable-units (SRUs). This presentation will explore design concepts for sensing, data storage, and presentation of imagery associated with the LITENING targeting pod. The "eyes" of LITENING are the electro-optic sensors. Since the initial LITENING II introduction to the US market in the late 90s, as the program has evolved and matured, a series of spiral functional improvements and sensor upgrades have been incorporated. These include laser-illuminated imaging, and more recently, color sensing. While aircraft displays are outside of the LITENING system, updates to the available viewing modules have also driven change, and resulted in increasingly effective ways of utilizing the targeting system. One of the latest LITENING spiral upgrades adds a new capability to display and capture visible-band color imagery, using new sensors. This is an augmentation to the system's existing capabilities, which operate over a growing set of visible and invisible colors, infrared bands, and laser line wavelengths. A COTS visible-band camera solution using a CMOS sensor has been adapted to meet the particular needs associated with the airborne targeting use case.

  11. Non-destructive evaluation of bacteria-infected watermelon seeds using visible/near-infrared hyperspectral imaging.

    PubMed

    Lee, Hoonsoo; Kim, Moon S; Song, Yu-Rim; Oh, Chang-Sik; Lim, Hyoun-Sub; Lee, Wang-Hee; Kang, Jum-Soon; Cho, Byoung-Kwan

    2017-03-01

    There is a need to minimize economic damage by sorting infected seeds from healthy seeds before seeding. However, current methods of detecting infected seeds, such as seedling grow-out, enzyme-linked immunosorbent assays, the polymerase chain reaction (PCR) and real-time PCR, have a critical drawback in that they are time-consuming, labor-intensive and destructive. The present study aimed to evaluate the potential of a visible/near-infrared (Vis/NIR) hyperspectral imaging system for detecting bacteria-infected watermelon seeds. A hyperspectral Vis/NIR reflectance imaging system (spectral region of 400-1000 nm) was constructed to obtain hyperspectral reflectance images for 336 bacteria-infected watermelon seeds, which were then subjected to partial least squares discriminant analysis (PLS-DA) and a least-squares support vector machine (LS-SVM) to classify bacteria-infected watermelon seeds from healthy watermelon seeds. The developed system detected bacteria-infected watermelon seeds with an accuracy > 90% (PLS-DA: 91.7%, LS-SVM: 90.5%), suggesting that the Vis/NIR hyperspectral imaging system is effective for quarantining bacteria-infected watermelon seeds. © 2016 Society of Chemical Industry.

  12. Use of cameras for monitoring visibility impairment

    NASA Astrophysics Data System (ADS)

    Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie

    2018-02-01

    Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
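
    The distinction drawn above (gradient operators are sensitive to image resolution while contrast indexes are not) and the use of temporal averaging can be illustrated with a simple RMS-contrast index. This is a generic sketch, not the study's exact index; the `hours` filter is a hypothetical stand-in for the time-of-day restriction:

```python
import numpy as np

def contrast_index(image):
    """RMS contrast (std / mean of scene radiance). Unlike gradient-based
    indexes, this barely changes when the image is resampled."""
    img = np.asarray(image, float)
    return float(img.std() / max(img.mean(), 1e-9))

def temporally_averaged_index(frames, hours=None, start=9, end=15):
    """Average the index over many frames, optionally restricted to a
    midday window to suppress sun-angle and cloud-cover variability."""
    if hours is not None:
        frames = [f for f, h in zip(frames, hours) if start <= h <= end]
    return float(np.mean([contrast_index(f) for f in frames]))
```

    Haze raises path radiance and lowers scene contrast, so the index drops; halving the resolution leaves it essentially unchanged, which is why such indexes survive camera replacements.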

  13. Three-Dimensional Cataract Crystalline Lens Imaging With Swept-Source Optical Coherence Tomography.

    PubMed

    de Castro, Alberto; Benito, Antonio; Manzanera, Silvestre; Mompeán, Juan; Cañizares, Belén; Martínez, David; Marín, Jose María; Grulkowski, Ireneusz; Artal, Pablo

    2018-02-01

    To image, describe, and characterize different features visible in the crystalline lens of older adults with and without cataract when imaged three-dimensionally with a swept-source optical coherence tomography (SS-OCT) system. We used a new SS-OCT laboratory prototype designed to enhance the visualization of the crystalline lens and imaged the entire anterior segment of both eyes in two groups of participants: patients scheduled to undergo cataract surgery, n = 17, age range 36 to 91 years, and volunteers without visual complaints, n = 14, age range 20 to 81 years. Pre-cataract surgery patients were also clinically graded according to the Lens Opacification Classification System III. The three-dimensional location and shape of the visible opacities were compared with the clinical grading. Hypo- and hyperreflective features were visible in the lenses of all pre-cataract surgery patients and in some of the older adults in the volunteer group. When the clinical examination revealed cortical or subcapsular cataracts, hyperreflective features were visible either in the cortex parallel to the surfaces of the lens or in the posterior pole. Other types of opacities, which appeared as hyporeflective localized features, were identified in the cortex of the lens. The OCT signal in the nucleus of the crystalline lens correlated with the nuclear cataract clinical grade. A dedicated OCT is a useful tool to study in vivo the subtle opacities of the cataractous crystalline lens, revealing their position and size three-dimensionally. These images allow more detailed information to be obtained on the age-related changes leading to cataract.

  14. Image secure transmission for optical orthogonal frequency-division multiplexing visible light communication systems using chaotic discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Zhang, Shaozhong; Chen, Fangni; Wu, Ming-Wei; Qiu, Weiwei

    2017-11-01

    A physical encryption scheme for orthogonal frequency-division multiplexing (OFDM) visible light communication (VLC) systems using chaotic discrete cosine transform (DCT) is proposed. In the scheme, the row of the DCT matrix is permutated by a scrambling sequence generated by a three-dimensional (3-D) Arnold chaos map. Furthermore, two scrambling sequences, which are also generated from a 3-D Arnold map, are employed to encrypt the real and imaginary parts of the transmitted OFDM signal before the chaotic DCT operation. The proposed scheme enhances the physical layer security and improves the bit error rate (BER) performance for OFDM-based VLC. The simulation results prove the efficiency of the proposed encryption method. The experimental results show that the proposed security scheme not only protects image data from eavesdroppers but also keeps the good BER and peak-to-average power ratio performances for image-based OFDM-VLC systems.
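
    The encryption step described above, a chaos-driven permutation of the DCT matrix rows, can be sketched as follows. The 3-D Arnold map matrix and initial state here are illustrative stand-ins (the abstract does not give the paper's exact map parameters or key schedule), and only the row-permutation part of the scheme is shown:

```python
import numpy as np

def arnold3d_sequence(n, state=(0.31, 0.42, 0.53), burn_in=100):
    """Chaotic key sequence from an iterated 3-D Arnold (cat) map.
    The unimodular matrix below (det = 1) is one standard 3-D
    generalization; treat it and the initial state as illustrative."""
    M = np.array([[1, 1, 1],
                  [1, 2, 2],
                  [1, 2, 3]], float)
    s = np.array(state, float)
    out = np.empty(n)
    for i in range(burn_in + n):
        s = (M @ s) % 1.0          # iterate, keep the fractional part
        if i >= burn_in:
            out[i - burn_in] = s[0]
    return out

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)
    return D

def chaotic_dct(block, key_seq):
    """Forward transform with chaos-permuted DCT rows."""
    perm = np.argsort(key_seq)     # scrambling sequence -> permutation
    return dct_matrix(len(block))[perm] @ block

def chaotic_idct(coeffs, key_seq):
    """Inverse: transpose of the (still orthonormal) permuted matrix."""
    perm = np.argsort(key_seq)
    return dct_matrix(len(coeffs))[perm].T @ coeffs
```

    Because a row permutation of an orthonormal matrix is still orthonormal, the correct key sequence inverts the transform exactly, while a different key recovers noise, which is the physical-layer security property the paper claims.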

  15. The Visible Imaging System (VIS) for the Polar Spacecraft

    NASA Technical Reports Server (NTRS)

    Frank, L. A.; Sigwarth, J. B.; Craven, J. D.; Cravens, J. P.; Dolan, J. S.; Dvorsky, M. R.; Hardebeck, P. K.; Harvey, J. D.; Muller, D. W.

    1995-01-01

    The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program, which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to the sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions, about 10 and 20 km from a spacecraft altitude of 8 R_E. The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.

  16. Very Accurate Imaging of the Close Environment of Bright Objects in Visible and Near-Infrared

    NASA Astrophysics Data System (ADS)

    Mouillet, David; Beuzit, Jean-Luc; Chauvin, Gael; Lagrange, Anne-Marie

    The development of adaptive optics (AO) in the near IR over the last decade has demonstrated both its astronomical impact and its increasing importance as larger telescopes are built. We emphasize that still better imaging capabilities would extend the wavelength range from the near-IR to the visible and would also enable very high dynamic range observations from the ground. Such a gain in performance is interesting for a large number of astrophysical topics: environments of young stellar objects, evolved stars, binary or multiple systems, planetary disks, and low-mass companions down to brown dwarfs or hot planets. The specification of an instrument fulfilling such requirements could be focused on high image quality over a narrow field around bright objects, so as to limit the cost and development timescale. Additionally, this facility could also be used (with the same specifications) to feed other future instruments (such as interferometers or high-resolution spectrometers working in the visible) and would be an important step in the general scheme of larger adaptive optics systems development.

  17. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  18. Deep HST/STIS Visible-Light Imaging of Debris Systems Around Solar Analog Hosts

    NASA Technical Reports Server (NTRS)

    Schneider, Glenn; Grady, Carol A.; Stark, Christopher C.; Gaspar, Andras; Carson, Joseph; Debes, John H.; Henning, Thomas; Hines, Dean C.; Jang-Condell, Hannah; Kuchner, Marc J.

    2016-01-01

    We present new Hubble Space Telescope observations of three a priori known starlight-scattering circumstellar debris systems (CDSs) viewed at intermediate inclinations around nearby close-solar-analog stars: HD 207129, HD 202628, and HD 202917. Each of these CDSs possesses ring-like components that are more massive analogs of our solar system's Edgeworth-Kuiper Belt. These systems were chosen for follow-up observations to provide imaging with higher fidelity and better sensitivity for the sparse sample of solar-analog CDSs that range over two decades in systemic ages, with HD 202628 and HD 207129 (both approx. 2.3 Gyr) currently the oldest CDSs imaged in visible or near-IR light. These deep (10-14 ks) observations, made with six-roll point-spread-function template visible-light coronagraphy using the Space Telescope Imaging Spectrograph, were designed to better reveal their angularly large debris rings of diffuse low surface brightness, and for all targets to probe their exo-ring environments for starlight-scattering materials that present observational challenges for current ground-based facilities and instruments. Contemporaneous observations with a narrower occulter position additionally probe the CDS endo-ring environments, which are seen to be relatively devoid of scatterers. We discuss the morphological, geometrical, and photometric properties of these CDSs, also in the context of other CDSs hosted by FGK stars that we have previously imaged as a homogeneously observed ensemble. From this combined sample we report a general decay in quiescent-disk F_disk/F_star optical brightness of approximately t^(-0.8), similar to what is seen at thermal-IR wavelengths, and find CDSs with a significant diversity in scattering phase asymmetries and in the spatial distributions of their starlight-scattering grains.

  19. Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders.

    PubMed

    Tapia-McClung, Horacio; Ajuria Ibarra, Helena; Rao, Dinesh

    2016-01-01

    Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns and from an entropic point of view, colors that cover a smaller region on the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology.
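
    The entropic observation above, that colors covering a smaller region of the pattern carry more information, is per-color self-information. A small numpy sketch over a label map (the unsupervised color-grouping step is assumed to have already produced the labels; this is not the paper's exact pipeline):

```python
import numpy as np

def color_information(label_image):
    """Per-color self-information in bits from a color-label map.
    A color covering a fraction p of the pattern carries I = -log2(p),
    so rarer colors carry more information."""
    labels = np.asarray(label_image).ravel()
    values, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return dict(zip(values.tolist(), (-np.log2(p)).tolist()))

def pattern_entropy(label_image):
    """Shannon entropy of the color distribution over the whole pattern."""
    labels = np.asarray(label_image).ravel()
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```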

  20. Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders

    PubMed Central

    Ajuria Ibarra, Helena; Rao, Dinesh

    2016-01-01

    Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns and from an entropic point of view, colors that cover a smaller region on the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology. PMID:27902724

  1. Overview of studies and developments in cinematography, optoelectronic imaging, and photonics at CEA/DIF

    NASA Astrophysics Data System (ADS)

    Mens, Alain; Alozy, Eric; Aubert, Damien; Benier, Jacky; Bourgade, Jean-Luc; Boutin, Jean-Yves; Brunel, Patrick; Charles, Gilbert; Chollet, Clement; Desbat, Laurent; Gontier, Dominique; Jacquet, Henri-Patrick; Jasmin, Serge; Le Breton, Jean-Pierre; Marchet, Bruno; Masclet-Gobin, Isabelle; Mercier, Patrick; Millier, Philippe; Missault, Carole; Negre, Jean-Paul; Paul, Serge; Rosol, Rodolphe; Sommerlinck, Thierry; Veaux, Jacqueline; Veron, Laurent; Vincent de Araujo, Manuel; Jaanimagi, Paul; Pien, Greg

    2003-07-01

    This paper gives an overview of work undertaken at CEA/DIF in high-speed cinematography, optoelectronic imaging and ultrafast photonics for the needs of the CEA/DAM experimental programs. We have developed a new multichannel velocimeter and a new probe for shock breakout timing measurements in detonics experiments; a brief description and a summary of their main performance figures are given. We have implemented three new optoelectronic imaging systems to observe dynamic scenes in the ranges of 50 - 100 keV and 4 MeV. These systems are described, and their main specifications and performances are given. We then describe our contribution to the ICF program: after recalling the specifications of the LIL plasma diagnostics, we describe the features and performances of visible streak tubes, X-ray streak tubes, visible and X-ray framing cameras, and the associated systems developed to match these specifications. Finally, we introduce the subject of component and system vulnerability in the LMJ target area, the principles identified to mitigate this problem, and the first results of studies (image relay, response of streak tube phosphors, MCP image intensifiers and CCDs to fusion neutrons) related to this subject. Results obtained so far are presented.

  2. Thermal-to-visible transducer (TVT) for thermal-IR imaging

    NASA Astrophysics Data System (ADS)

    Flusberg, Allen; Swartz, Stephen; Huff, Michael; Gross, Steven

    2008-04-01

    We have been developing a novel thermal-to-visible transducer (TVT), an uncooled thermal-IR imager that is based on a Fabry-Perot Interferometer (FPI). The FPI-based IR imager can convert a thermal-IR image to a video electronic image. IR radiation that is emitted by an object in the scene is imaged onto an IR-absorbing material that is located within an FPI. Temperature variations generated by the spatial variations in the IR image intensity cause variations in optical thickness, modulating the reflectivity seen by a probe laser beam. The reflected probe is imaged onto a visible array, producing a visible image of the IR scene. This technology can provide low-cost IR cameras with excellent sensitivity, low power consumption, and the potential for self-registered fusion of thermal-IR and visible images. We will describe characteristics of requisite pixelated arrays that we have fabricated.

  3. Performance Analysis of Visible Light Communication Using CMOS Sensors.

    PubMed

    Do, Trong-Hop; Yoo, Myungsik

    2016-02-29

    This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis.

  4. Performance Analysis of Visible Light Communication Using CMOS Sensors

    PubMed Central

    Do, Trong-Hop; Yoo, Myungsik

    2016-01-01

    This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis. PMID:26938535

  5. Pattern recognition applied to infrared images for early alerts in fog

    NASA Astrophysics Data System (ADS)

    Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien

    2014-09-01

    Fog conditions cause severe car accidents in western countries because of the poor visibility they induce. Fog formation and intensity are still very difficult for weather services to predict. Infrared cameras can detect and identify objects in fog when visibility is too low for the naked eye. Over the past years, the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. Pattern recognition algorithms based on Canny filters and the Hough transform are a common tool applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema has been developed to study the benefit of infrared images obtained in a fog tunnel during its natural dissipation. Pattern recognition algorithms have been applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangle for an alert, …). It has been shown that road signs were detected early enough in infrared images, relative to images in the visible spectrum, to trigger useful alerts for Advanced Driver Assistance Systems.

  6. Post-focus Instrumentation Of The NST

    NASA Astrophysics Data System (ADS)

    Cao, Wenda; Gorceix, N.; Andic, A.; Ahn, K.; Coulter, R.; Goode, P.

    2009-05-01

    The NST (New Solar Telescope), a 1.6 m clear-aperture, off-axis telescope, is in its commissioning phase at Big Bear Solar Observatory (BBSO). It will be the most capable, largest-aperture solar telescope in the US until the 4 m ATST (Advanced Technology Solar Telescope) comes on-line in the middle of the next decade. The NST will be outfitted with state-of-the-art post-focus instrumentation, which currently includes an Adaptive Optics (AO) system, the InfraRed Imaging Magnetograph (IRIM), the Visible Imaging Magnetograph (VIM), the Real-time Image Reconstruction System (RIRS), and the Fast Imaging Solar Spectrograph (FISS). A 308-sub-aperture AO system (with a 349-actuator deformable mirror) will enable diffraction-limited observations over the NST's principal operating wavelengths from 0.4 µm through 1.7 µm. IRIM and VIM are Fabry-Perot-based narrow-band tunable filters, which provide high-resolution two-dimensional spectroscopic and polarimetric imaging in the near infrared and visible, respectively. Using a 32-node parallel computing system, RIRS is capable of performing real-time image reconstruction, producing one image every minute. FISS is a collaboration between NJIT and Seoul National University that focuses on chromospheric dynamics. These instruments will be installed this summer as part of the NST commissioning and the implementation of Nasmyth-focus instrumentation. Key tasks, including optical design, hardware/software integration, and subsequent setup/testing on the NST, will be presented in this poster. First-light images from the NST will be shown.

  7. Spectroscopic imaging of limiter heat and particle fluxes and the resulting impurity sources during Wendelstein 7-X startup plasmas

    DOE PAGES

    Stephey, L.; Wurden, G. A.; Schmitz, O.; ...

    2016-08-08

    A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data on limiter and first-wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Together, both systems provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements at several points on the W7-X vacuum vessel to yield wall recycling fluxes. Finally, the resulting photon flux from both the visible camera and the filterscopes can be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, which will ultimately be used to infer the complete particle balance and the particle confinement time τ_P.

  8. Real-time enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-05-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However, these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm, since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion, and we discuss our current real-time Retinex implementations on DSPs.

  9. Qualitative Evaluation of Fiducial Markers for Radiotherapy Imaging

    PubMed Central

    Chan, Maria F.; Cohen, Gil’ad N.; Deasy, Joseph O.

    2016-01-01

    Purpose To evaluate visibility, artifacts, and distortions of various commercial markers in magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound imaging used for radiotherapy planning and treatment guidance. Methods We compare 2 solid gold markers, 4 gold coils, and 1 polymer marker from 3 vendors. Imaging modalities used were 3-T and 1.5-T GE MRIs, Siemens Sequoia 512 Ultrasound, Philips Big Bore CT, Varian Trilogy linear accelerator (cone-beam CT [CBCT], on-board imager kilovoltage [OBI-kV], electronic portal imaging device megavoltage [EPID-MV]), and Medtronic O-ARM CBCT. Markers were imaged in a 30 × 30 × 10 cm³ custom bolus phantom. In one experiment, Surgilube was used around the markers to reduce air gaps. Images were saved in Digital Imaging and Communications in Medicine (DICOM) format and analyzed using in-house software. Profiles across the markers were used for objective comparison of the markers’ signals. The visibility and artifacts/distortions produced by each marker were assessed qualitatively and quantitatively. Results All markers are visible in CT, CBCT, OBI-kV, and ultrasound. Gold markers below 0.75 mm in diameter are not visible in EPID-MV images. The larger the markers, the more CT and CBCT image artifacts there are, yet the degree of the artifact depends on scan parameters and the scanner itself. Visibility of gold coils of 0.75 mm diameter or larger is comparable across all imaging modalities studied. The polymer marker causes minimal artifacts in CT and CBCT but has poor visibility in EPID-MV. Gold coils of 0.5 mm exhibit poor visibility in MRI and EPID-MV due to their small size. Gold markers are more visible in 3-T T1 gradient-recalled echo than in 1.5-T T1 fast spin-echo, depending on the scan sequence. In this study, all markers are clearly visible on ultrasound. Conclusion All gold markers are visible in CT, CBCT, kV, and ultrasound; however, only the large diameter markers are visible in MV.
When MR and EPID-MV imagers are used, the selection of fiducial markers is not straightforward. For hybrid kV/MV image-guided radiotherapy imaging, larger-diameter markers are suggested. If kV imaging is used alone, smaller markers may be used in smaller patients in order to reduce artifacts. Only larger-diameter gold markers are visible across all imaging modalities. PMID:25230715
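
The "profiles across the markers" used above for objective signal comparison can be illustrated with a minimal sketch: extract a 1-D line profile through a marker in a 2-D image array and form a simple contrast-to-noise figure against a background row. The phantom values, row indices, and CNR definition below are hypothetical illustrations, not the authors' in-house software.

```python
import numpy as np

def marker_profile_contrast(image, row, background):
    """Extract a 1-D profile across a fiducial marker and report its
    peak signal relative to a background row, as one simple objective
    visibility figure (a contrast-to-noise ratio)."""
    profile = image[row, :].astype(np.float64)
    bg = image[background, :].astype(np.float64)
    cnr = (profile.max() - bg.mean()) / (bg.std() + 1e-9)
    return profile, cnr

# Hypothetical phantom slice: uniform background with a bright "marker".
rng = np.random.default_rng(0)
phantom = rng.normal(100.0, 2.0, size=(64, 64))
phantom[32, 30:34] += 50.0                 # simulated gold marker
profile, cnr = marker_profile_contrast(phantom, row=32, background=10)
```

Comparing such profiles across modalities (CT, CBCT, kV, MV, MRI, ultrasound) gives a per-marker visibility number that complements the qualitative reader scores.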

  10. Onion cell imaging by using Talbot/self-imaging effect

    NASA Astrophysics Data System (ADS)

    Agarwal, Shilpi; Kumar, Varun; Shakher, Chandra

    2017-08-01

    This paper presents amplitude and phase imaging of onion epidermis cells using the self-imaging property of a grating (Talbot effect) in the visible light region. In the proposed method, the Fresnel diffraction pattern from the first grating and the object is recorded at the self-image plane. A fast Fourier transform (FFT) is used to extract the 3D amplitude and phase image of the onion epidermis cells. The stability of the proposed system against environmental perturbation, as well as its compactness and portability, gives it high potential for several clinical applications.
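
Recovering phase from a recorded fringe pattern with an FFT is commonly done with the Fourier-transform (Takeda-style) fringe-analysis method: isolate the carrier sideband in the spectrum, inverse-transform, take the argument, and subtract the carrier. The paper's actual processing chain may differ; this is a 1-D sketch with a synthetic fringe pattern.

```python
import numpy as np

def extract_phase(fringes, carrier_freq):
    """Fourier-transform fringe analysis (1-D sketch): keep only a band
    around the +carrier peak of the spectrum, inverse-transform, and
    take the unwrapped argument minus the carrier ramp."""
    n = fringes.size
    spectrum = np.fft.fft(fringes)
    freqs = np.fft.fftfreq(n)                       # cycles per sample
    mask = np.abs(freqs - carrier_freq) < carrier_freq / 2
    analytic = np.fft.ifft(spectrum * mask)
    x = np.arange(n)
    return np.unwrap(np.angle(analytic)) - 2 * np.pi * carrier_freq * x

# Synthetic fringes: carrier at 32 cycles over 256 samples plus a
# slowly varying object phase.
n, f0 = 256, 32 / 256
x = np.arange(n)
phi_true = 0.5 * np.sin(2 * np.pi * x / n)
fringes = 1.0 + np.cos(2 * np.pi * f0 * x + phi_true)
phi = extract_phase(fringes, f0)
```

For a 2-D self-image the same steps apply with a 2-D FFT and a mask around the grating's carrier frequency.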

  11. Tropical Depression 6 (Florence) in the Atlantic

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figures removed for brevity, see original site] Microwave Image; Visible Light Image

    These infrared, microwave, and visible images were created with data retrieved by the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite.

    Infrared Image Because infrared radiation does not penetrate through clouds, AIRS infrared images show either the temperature of the cloud tops or the surface of the Earth in cloud-free regions. The lowest temperatures (in purple) are associated with high, cold cloud tops that make up the top of the storm. In cloud-free areas the AIRS instrument will receive the infrared radiation from the surface of the Earth, resulting in the warmest temperatures (orange/red).

    Microwave Image AIRS data used to create the microwave images come from the microwave radiation emitted by Earth's atmosphere which is then received by the instrument. It shows where the heaviest rainfall is taking place (in blue) in the storm. Blue areas outside of the storm, where there are either some clouds or no clouds, indicate where the sea surface shines through.

    Vis/NIR Image The AIRS instrument suite contains a sensor that captures light in the visible/near-infrared portion of the electromagnetic spectrum. These 'visible' images are similar to a snapshot taken with your camera.

    The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  12. Optimization of a miniature short-wavelength infrared objective optics of a short-wavelength infrared to visible upconversion layer attached to a mobile-devices visible camera

    NASA Astrophysics Data System (ADS)

    Kadosh, Itai; Sarusi, Gabby

    2017-10-01

    The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept in which the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. To maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization that mechanically matches the visible objective design but uses different lenses, in order to maintain commonality and serve as a proof of concept. Such a SWIR objective design is very challenging, since it requires mimicking the original visible mobile camera lenses' sizes and mechanical housing so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.

  13. Cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS flat panel detector: Visibility of simulated microcalcifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Youtao; Zhong, Yuncheng; Lai, Chao-Jen

    2013-10-15

    Purpose: To measure and investigate the improvement of microcalcification (MC) visibility in cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS/CsI flat panel detector (Dexela 2923, Perkin Elmer). Methods: Aluminum wires and calcium carbonate grains of various sizes were embedded in a paraffin cylinder to simulate imaging of calcifications in a breast. Phantoms were imaged with a benchtop experimental cone beam CT system at various exposure levels. In addition to the Dexela detector, a high pitch (50 μm), thin (150 μm) scintillator CMOS/CsI flat panel detector (C7921CA-09, Hamamatsu Corporation, Hamamatsu City, Japan) and a widely used low pitch (194 μm), thick (600 μm) scintillator aSi/CsI flat panel detector (PaxScan 4030CB, Varian Medical Systems) were also used in scanning for comparison. The images were independently reviewed by six readers (imaging physicists). The MC visibility was quantified as the fraction of visible MCs and measured as a function of the estimated mean glandular dose (MGD) level for various MC sizes and detectors. The modulation transfer functions (MTFs) and detective quantum efficiencies (DQEs) were also measured and compared for the three detectors used. Results: The authors have demonstrated that the use of a high pitch (75 μm) CMOS detector coupled with a thick (500 μm) CsI scintillator helped make the smaller 150–160, 160–180, and 180–200 μm MC groups more visible at MGDs up to 10.8, 9, and 10.8 mGy, respectively. It also made the larger 200–212 and 212–224 μm MC groups more visible at MGDs up to 7.2 mGy. No performance improvement was observed for 224–250 μm or larger size groups. With the higher spatial resolution of the Dexela detector based system, the apparent dimensions and shapes of MCs were more accurately rendered. 
The results show that with the aforementioned detector, a 73% visibility could be achieved in imaging 160–180 μm MCs as compared to 28% visibility achieved by the low pitch (194 μm) aSi/CsI flat panel detector. The measurements confirm that the Hamamatsu detector has the highest MTF, followed by the Dexela detector, and then the Varian detector. However, the Dexela detector, with its thick (500 μm) CsI scintillator and low noise level, has the highest DQE at all frequencies, followed by the Varian detector, and then the Hamamatsu detector. The findings on the MC visibility correlated well with the differences in MTFs, noise power spectra, and DQEs measured for these three detectors. Conclusions: The authors have demonstrated that the use of the CMOS type Dexela detector with its high pitch (75 μm) and thick (500 μm) CsI scintillator could help improve the MC visibility. However, the improvement depended on the exposure level and the MC size. For imaging larger MCs or scanning at high exposure levels, there was little advantage in using the Dexela detector as compared to the aSi type Varian detector. These findings correlate well with the higher measured DQEs of the Dexela detector, especially at higher frequencies.
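
The DQE comparison above rests on the standard detector-physics relation DQE(f) = MTF(f)² / (q · NNPS(f)), where NNPS is the noise power spectrum normalized by the squared mean signal and q is the incident photon fluence. The curves and numbers below are hypothetical, chosen only to illustrate the computation, and are not the measured values from this study.

```python
import numpy as np

def dqe(mtf, nnps, photon_fluence):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)): squared signal transfer over
    normalized noise transfer, relative to an ideal photon counter.
    `nnps` is in mm^2; `photon_fluence` q is photons per mm^2."""
    return mtf ** 2 / (photon_fluence * nnps)

# Hypothetical detector curves over spatial frequency f (cycles/mm).
f = np.linspace(0.0, 5.0, 50)
mtf = np.exp(-0.4 * f)                 # transfer falls with frequency
nnps = np.full_like(f, 8.0e-7)         # white (uncorrelated) noise floor
q = 2.5e6                              # photons / mm^2 at the detector
d = dqe(mtf, nnps, q)
```

With white noise, DQE falls off as MTF²; a detector whose NNPS drops with frequency (correlated scintillator blur) can hold its DQE higher, which is the effect the authors observe for the thick-scintillator Dexela panel.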

  14. Cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS flat panel detector: Visibility of simulated microcalcifications

    PubMed Central

    Shen, Youtao; Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng; Shaw, Chris C.

    2013-01-01

    Purpose: To measure and investigate the improvement of microcalcification (MC) visibility in cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS/CsI flat panel detector (Dexela 2923, Perkin Elmer). Methods: Aluminum wires and calcium carbonate grains of various sizes were embedded in a paraffin cylinder to simulate imaging of calcifications in a breast. Phantoms were imaged with a benchtop experimental cone beam CT system at various exposure levels. In addition to the Dexela detector, a high pitch (50 μm), thin (150 μm) scintillator CMOS/CsI flat panel detector (C7921CA-09, Hamamatsu Corporation, Hamamatsu City, Japan) and a widely used low pitch (194 μm), thick (600 μm) scintillator aSi/CsI flat panel detector (PaxScan 4030CB, Varian Medical Systems) were also used in scanning for comparison. The images were independently reviewed by six readers (imaging physicists). The MC visibility was quantified as the fraction of visible MCs and measured as a function of the estimated mean glandular dose (MGD) level for various MC sizes and detectors. The modulation transfer functions (MTFs) and detective quantum efficiencies (DQEs) were also measured and compared for the three detectors used. Results: The authors have demonstrated that the use of a high pitch (75 μm) CMOS detector coupled with a thick (500 μm) CsI scintillator helped make the smaller 150–160, 160–180, and 180–200 μm MC groups more visible at MGDs up to 10.8, 9, and 10.8 mGy, respectively. It also made the larger 200–212 and 212–224 μm MC groups more visible at MGDs up to 7.2 mGy. No performance improvement was observed for 224–250 μm or larger size groups. With the higher spatial resolution of the Dexela detector based system, the apparent dimensions and shapes of MCs were more accurately rendered. 
The results show that with the aforementioned detector, a 73% visibility could be achieved in imaging 160–180 μm MCs as compared to 28% visibility achieved by the low pitch (194 μm) aSi/CsI flat panel detector. The measurements confirm that the Hamamatsu detector has the highest MTF, followed by the Dexela detector, and then the Varian detector. However, the Dexela detector, with its thick (500 μm) CsI scintillator and low noise level, has the highest DQE at all frequencies, followed by the Varian detector, and then the Hamamatsu detector. The findings on the MC visibility correlated well with the differences in MTFs, noise power spectra, and DQEs measured for these three detectors. Conclusions: The authors have demonstrated that the use of the CMOS type Dexela detector with its high pitch (75 μm) and thick (500 μm) CsI scintillator could help improve the MC visibility. However, the improvement depended on the exposure level and the MC size. For imaging larger MCs or scanning at high exposure levels, there was little advantage in using the Dexela detector as compared to the aSi type Varian detector. These findings correlate well with the higher measured DQEs of the Dexela detector, especially at higher frequencies. PMID:24089917

  15. Cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS flat panel detector: visibility of simulated microcalcifications.

    PubMed

    Shen, Youtao; Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng; Shaw, Chris C

    2013-10-01

    To measure and investigate the improvement of microcalcification (MC) visibility in cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS/CsI flat panel detector (Dexela 2923, Perkin Elmer). Aluminum wires and calcium carbonate grains of various sizes were embedded in a paraffin cylinder to simulate imaging of calcifications in a breast. Phantoms were imaged with a benchtop experimental cone beam CT system at various exposure levels. In addition to the Dexela detector, a high pitch (50 μm), thin (150 μm) scintillator CMOS/CsI flat panel detector (C7921CA-09, Hamamatsu Corporation, Hamamatsu City, Japan) and a widely used low pitch (194 μm), thick (600 μm) scintillator aSi/CsI flat panel detector (PaxScan 4030CB, Varian Medical Systems) were also used in scanning for comparison. The images were independently reviewed by six readers (imaging physicists). The MC visibility was quantified as the fraction of visible MCs and measured as a function of the estimated mean glandular dose (MGD) level for various MC sizes and detectors. The modulation transfer functions (MTFs) and detective quantum efficiencies (DQEs) were also measured and compared for the three detectors used. The authors have demonstrated that the use of a high pitch (75 μm) CMOS detector coupled with a thick (500 μm) CsI scintillator helped make the smaller 150-160, 160-180, and 180-200 μm MC groups more visible at MGDs up to 10.8, 9, and 10.8 mGy, respectively. It also made the larger 200-212 and 212-224 μm MC groups more visible at MGDs up to 7.2 mGy. No performance improvement was observed for 224-250 μm or larger size groups. With the higher spatial resolution of the Dexela detector based system, the apparent dimensions and shapes of MCs were more accurately rendered. 
The results show that with the aforementioned detector, a 73% visibility could be achieved in imaging 160-180 μm MCs as compared to 28% visibility achieved by the low pitch (194 μm) aSi/CsI flat panel detector. The measurements confirm that the Hamamatsu detector has the highest MTF, followed by the Dexela detector, and then the Varian detector. However, the Dexela detector, with its thick (500 μm) CsI scintillator and low noise level, has the highest DQE at all frequencies, followed by the Varian detector, and then the Hamamatsu detector. The findings on the MC visibility correlated well with the differences in MTFs, noise power spectra, and DQEs measured for these three detectors. The authors have demonstrated that the use of the CMOS type Dexela detector with its high pitch (75 μm) and thick (500 μm) CsI scintillator could help improve the MC visibility. However, the improvement depended on the exposure level and the MC size. For imaging larger MCs or scanning at high exposure levels, there was little advantage in using the Dexela detector as compared to the aSi type Varian detector. These findings correlate well with the higher measured DQEs of the Dexela detector, especially at higher frequencies.

  16. ARC-1990-A91-2015

    NASA Image and Video Library

    1990-12-08

    This image of the crescent moon was obtained by the Galileo Solid-State Imaging System, taken at 5 am PST as the spacecraft neared Earth. The image was taken through a green filter and shows the western part of the lunar near side. The smallest features visible are 8 km (5 mi) in size. Major features visible include the dark plains of Mare Imbrium in the upper part of the image, the bright crater Copernicus (100 km, 60 miles in diameter) in the central part, and the heavily cratered lunar highlands in the bottom of the image. The landing sites of the Apollo 12, 14, and 15 missions lie within the central part of the image. Samples returned from these sites will be used to calibrate this and accompanying images taken in different colors, which will extend the knowledge of the spectral and compositional properties of the near side of the moon, seen from Earth, to the lunar far side.

  17. Moon - Western Near Side

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This image of the crescent moon was obtained by the Galileo Solid State imaging system on December 8 at 5 a.m. PST as the Galileo spacecraft neared the Earth. The image was taken through a green filter and shows the western part of the lunar nearside. The smallest features visible are 8 kilometers (5 miles) in size. Major features visible include the dark plains of Mare Imbrium in the upper part of the image, the bright crater Copernicus (100 km, 60 miles in diameter) in the central part, and the heavily cratered lunar highlands in the bottom of the image. The landing sites of the Apollo 12, 14 and 15 missions lie within the central part of the image. Samples returned from these sites will be used to calibrate this and accompanying images taken in different colors, which will extend the knowledge of the spectral and compositional properties of the nearside of the moon, seen from Earth, to the lunar far side.

  18. Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.

    PubMed

    Proença, Hugo

    2010-08-01

    Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions introduce noise artifacts into the acquired data that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministically linear time with respect to the size of the image, making the procedure suitable for real-time applications.
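
A "proportion of sclera in each direction" feature can be computed with cumulative sums over a binary sclera mask, which keeps the whole feature map linear in the image size, consistent with the runtime claim above. The exact feature definition in the paper may differ; this is an illustrative sketch.

```python
import numpy as np

def sclera_proportions(sclera_mask):
    """For every pixel, the proportion of sclera pixels lying to its
    left, right, top, and bottom (inclusive), computed with cumulative
    sums so the whole map costs O(pixels)."""
    m = sclera_mask.astype(np.float64)
    h, w = m.shape
    left = np.cumsum(m, axis=1) / np.arange(1, w + 1)
    right = np.cumsum(m[:, ::-1], axis=1)[:, ::-1] / np.arange(w, 0, -1)
    top = np.cumsum(m, axis=0) / np.arange(1, h + 1)[:, None]
    bottom = np.cumsum(m[::-1, :], axis=0)[::-1, :] / np.arange(h, 0, -1)[:, None]
    return np.stack([left, right, top, bottom])

# Toy mask: "sclera" occupies the left half of a 4x4 patch.
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
feats = sclera_proportions(mask)
```

Pixels inside the iris typically see sclera on both horizontal sides but little above or below, so these four maps are discriminative inputs for a per-pixel iris classifier.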

  19. Coastal Research Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Lucey, Paul G.; Williams, Timothy; Horton, Keith A.

    2004-01-01

    The Coastal Research Imaging Spectrometer (CRIS) is an airborne remote sensing system designed specifically for research on the physical, chemical, and biological characteristics of coastal waters. The CRIS includes a visible-light hyperspectral imaging subsystem for measuring the color of water, which contains information on the biota, sediment, and nutrient contents of the water. The CRIS also includes an infrared imaging subsystem, which provides information on the temperature of the water. The combination of measurements enables investigation of biological effects of both natural and artificial flows of water from land into the ocean, including diffuse and point-source flows that may contain biological and/or chemical pollutants. Temperature is an important element of such measurements because temperature contrasts can often be used to distinguish among flows from different sources: for example, a sewage outflow could manifest itself in spectral images as a local high-temperature anomaly. Both the visible and infrared subsystems scan in pushbroom mode: that is, an aircraft carrying the system moves along a ground track, the system is aimed downward, and image data are acquired in across-track linear arrays of pixels. Both subsystems operate at a frame rate of 30 Hz. The infrared and visible-light optics are adjusted so that both subsystems are aimed at the same moving swath, which has an across-track angular width of 15°. Data from the infrared and visible imaging subsystems are stored in the same file along with aircraft-position data acquired by a Global Positioning System receiver. The combination of the three sets of data is used to construct infrared and hyperspectral maps of scanned areas (see figure). The visible subsystem is based on a grating spectrograph and a rapid-readout charge-coupled-device camera. Images of the swath are acquired in 256 spectral bands at wavelengths from 400 to 800 nm. 
The infrared subsystem, which is sensitive in a single wavelength band of 8 to 10 µm, is based on a focal-plane array of HgCdTe photodetectors that are cooled to an operating temperature of 77 K by use of a closed-Stirling-cycle mechanical cooler. The nonuniformities of the HgCdTe photodetector array are small enough that the raw pixel data from the infrared subsystem can be used to recognize temperature differences on the order of 1 °C. By use of a built-in blackbody calibration source that can be switched into the field of view, one can obtain bias and gain offset terms for individual pixels, making it possible to offset the effects of nonuniformities sufficiently to enable the measurement of temperature differences as small as 0.1 °C.
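
The per-pixel bias and gain correction from the built-in blackbody source is a standard two-point nonuniformity correction: image the blackbody at two known radiances, solve raw = gain·flux + offset for each pixel, then invert that relation on scene frames. A sketch with synthetic detector nonuniformity (all numbers hypothetical):

```python
import numpy as np

def two_point_nuc(flux_lo, flux_hi, frame_lo, frame_hi):
    """Per-pixel gain/offset from two blackbody reference frames:
    solve raw = gain * flux + offset for each pixel; scene frames are
    then corrected with (raw - offset) / gain."""
    gain = (frame_hi - frame_lo) / (flux_hi - flux_lo)
    offset = frame_lo - gain * flux_lo
    return gain, offset

rng = np.random.default_rng(1)
gain_true = rng.normal(1.0, 0.05, size=(8, 8))    # pixel-to-pixel gain spread
offset_true = rng.normal(10.0, 1.0, size=(8, 8))  # pixel-to-pixel bias
flux_lo, flux_hi = 100.0, 200.0                   # blackbody radiances (a.u.)
gain, offset = two_point_nuc(flux_lo, flux_hi,
                             gain_true * flux_lo + offset_true,
                             gain_true * flux_hi + offset_true)
scene_raw = gain_true * 150.0 + offset_true       # uniform 150-unit scene
corrected = (scene_raw - offset) / gain           # nonuniformity removed
```

After correction a uniform scene reads uniformly across the array, which is what pushes the usable temperature resolution from roughly 1 °C down toward 0.1 °C.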

  20. Multi-modal molecular diffuse optical tomography system for small animal imaging

    PubMed Central

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-01-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977

  1. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.
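
The color-coded fusion described above can be sketched as channel routing: hydrogen-band intensity drives the red channel, carbon-band intensity the blue, and pixels with no thermal source keep the grayscale visible image. This simplified two-band version (the actual system fuses three infrared bands from a 64-element PbSe array, with other hot objects mapped to green) uses a hypothetical threshold and scene data:

```python
import numpy as np

def fuse_hydrogen_display(visible_gray, h_band, c_band, threshold=0.1):
    """Build an RGB display: grayscale visible scene everywhere, with
    hydrogen-band emission routed to red and carbon-band emission to
    blue, so hydrogen fires stand out from other heat sources."""
    rgb = np.stack([visible_gray] * 3, axis=-1).astype(np.float64)
    hot = (h_band > threshold) | (c_band > threshold)
    rgb[..., 0] = np.where(hot, h_band, rgb[..., 0])   # red  <- hydrogen band
    rgb[..., 2] = np.where(hot, c_band, rgb[..., 2])   # blue <- carbon band
    rgb[..., 1] = np.where(hot, 0.0, rgb[..., 1])      # suppress green at fires
    return np.clip(rgb, 0.0, 1.0)

scene = np.full((16, 16), 0.4)           # flat grayscale visible background
h = np.zeros((16, 16)); h[4, 4] = 0.9    # hydrogen-flame pixel
c = np.zeros((16, 16)); c[10, 10] = 0.8  # carbon-based-fire pixel
out = fuse_hydrogen_display(scene, h, c)
```

The design choice is that discrimination happens per pixel from band ratios, so a mixed scene renders hydrogen red, carbon blue, and everything else in ordinary black and white.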

  2. Angiographic and structural imaging using high axial resolution fiber-based visible-light OCT

    PubMed Central

    Pi, Shaohua; Camino, Acner; Zhang, Miao; Cepurna, William; Liu, Gangjun; Huang, David; Morrison, John; Jia, Yali

    2017-01-01

    Optical coherence tomography using visible-light sources can increase the axial resolution without the need for broader spectral bandwidth. Here, a high-resolution, fiber-based, visible-light optical coherence tomography system is built and used to image normal retina in rats and blood vessels in chicken embryo. In the rat retina, accurate segmentation of retinal layer boundaries and quantification of layer thicknesses are accomplished. Furthermore, three distinct capillary plexuses in the retina and the choriocapillaris are identified and the characteristic pattern of the nerve fiber layer thickness in rats is revealed. In the chicken embryo model, the microvascular network and a venous bifurcation are examined and the ability to identify and segment large vessel walls is demonstrated. PMID:29082087

  3. Compact survey and inspection day/night image sensor suite for small unmanned aircraft systems (EyePod)

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.

    2010-04-01

    EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared (LWIR) imaging sensors and was developed under the ONR funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.

  4. MARS PATHFINDER CAMERA TEST IN SAEF-2

    NASA Technical Reports Server (NTRS)

    1996-01-01

    In the Spacecraft Assembly and Encapsulation Facility-2 (SAEF-2), workers from the Jet Propulsion Laboratory (JPL) are conducting a systems test of the imager for the Mars Pathfinder. The imager (white and metallic cylindrical element close to hand of worker at left) is a specially designed camera featuring a stereo-imaging system with color capability provided by a set of selectable filters. It is mounted atop an extendable mast on the Pathfinder lander. Visible to the far left is the small rover which will be deployed from the lander to explore the Martian surface. Transmitting back to Earth images of the trail left by the rover will be one of the mission objectives for the imager. To the left of the worker standing near the imager is the mast for the low-gain antenna; the round high-gain antenna is to the right. Visible in the background is the cruise stage that will carry the Pathfinder on a direct trajectory to Mars. The Mars Pathfinder is one of two Mars-bound spacecraft slated for launch aboard Delta II expendable launch vehicles this year.

  5. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    NASA Astrophysics Data System (ADS)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method to convert digital images containing sets of colors that are difficult to distinguish into images with high visibility. We set up four criteria: the conversion must be performed automatically by a computer; it must retain continuity in color space; it must not reduce visibility for people with normal color vision; and it must not reduce the visibility of images that do not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and obtained the result that the visibility of the converted images was improved in 60% of 40 test images, and we confirmed that the main criterion, continuity in color space, was maintained.

  6. Scientific Software

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Interactive Data Language (IDL), developed by Research Systems, Inc., is a tool for scientists to investigate their data without having to write a custom program for each study. IDL is based on the Mariners Mars spectral Editor (MMED) developed for studies from NASA's Mars spacecraft flights. The company has also developed Environment for Visualizing Images (ENVI), an image processing system for easily analyzing remotely sensed data written in IDL. The Visible Human CD, another Research Systems product, is the first complete digital reference of photographic images for exploring human anatomy.

  7. Multispectral THz-VIS passive imaging system for hidden threats visualization

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw

    2013-10-01

    Terahertz imaging is the latest entry into the crowded field of imaging technologies, and many applications are emerging for this relatively new modality. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, cloth, wood, and ceramics that are usually opaque at optical wavelengths. T-rays also have large potential in the field of hidden-object detection because they are not harmful to humans. The main difficulty in THz imaging systems is low image quality, which justifies combining THz images with high-resolution images from a visible camera. An imaging system is usually composed of various subsystems, often with imaging devices working in different spectral ranges. Our goal is to build a system, harmless to humans, for screening and detection of hidden objects using THz and VIS cameras.

  8. Robotic intrafractional US guidance for liver SABR: System design, beam avoidance, and clinical imaging.

    PubMed

    Schlosser, Jeffrey; Gong, Ren Hui; Bruder, Ralf; Schweikard, Achim; Jang, Sungjune; Henrie, John; Kamaya, Aya; Koong, Albert; Chang, Daniel T; Hristov, Dimitre

    2016-11-01

    To present a system for robotic 4D ultrasound (US) imaging concurrent with radiotherapy beam delivery and to estimate the proportion of liver stereotactic ablative body radiotherapy (SABR) cases in which robotic US image guidance can be deployed without interfering with clinically used VMAT beam configurations. The image guidance hardware comprises a 4D US machine, an optical tracking system for measuring US probe pose, and a custom-designed robot for acquiring hands-free US volumes. In software, a simulation environment incorporating the LINAC, couch, planning CT, and robotic US guidance hardware was developed. Placement of the robotic US hardware was guided by a target visibility map rendered on the CT surface, using the planning CT to simulate US propagation. The visibility map was validated in a prostate phantom and evaluated in patients by capturing live US from imaging positions suggested by the map. In 20 liver SABR patients treated with VMAT, the simulation environment was used to virtually place the robotic hardware and US probe. Imaging targets were either planning target volumes (PTVs, range 5.9-679.5 ml) or gross tumor volumes (GTVs, range 0.9-343.4 ml). The presence or absence of mechanical interference with the LINAC, couch, and patient body, as well as interference with treatment beams, was recorded. For PTV targets, robotic US guidance without mechanical interference was possible in 80% of the cases, and guidance without beam interference was possible in 60% of the cases. For the smaller GTV targets, these proportions were 95% and 85%, respectively. GTV size (1/20), elongated shape (1/20), and depth (1/20) were the main factors limiting the availability of noninterfering imaging positions. The robotic US imaging system was deployed in two liver SABR patients during CT simulation, with successful acquisition of 4D US sequences in different imaging positions.
This study indicates that for VMAT liver SABR, robotic US imaging of a relevant internal target may be possible in 85% of the cases while using treatment plans currently deployed in the clinic. With beam replanning to account for the presence of robotic US guidance, intrafractional US may be an option for 95% of the liver SABR cases.

  9. Optical design and testing: introduction.

    PubMed

    Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin

    2014-10-10

    Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.

  10. FIZICS: fluorescent imaging zone identification system, a novel macro imaging system.

    PubMed

    Skwish, Stephen; Asensio, Francisco; King, Greg; Clarke, Glenn; Kath, Gary; Salvatore, Michael J; Dufresne, Claude

    2004-12-01

    Constantly improving biological assay development continues to drive technological requirements. Recently, a specification was defined for capturing white light and fluorescent images of agar plates ranging in size from the NUNC Omni tray (96-well footprint, 128 x 85 mm) to the NUNC Bio Assay Dish (245 x 245 mm). An evaluation of commercially available products failed to identify any system capable of fluorescent macroimaging with discrete wavelength selection. To address the lack of a commercially available system, a custom imaging system was designed and constructed. This system provides the same capabilities of many commercially available systems with the added ability to fluorescently image up to a 245 x 245 mm area using wavelengths in the visible light spectrum.

  11. An Auto-flag Method of Radio Visibility Data Based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Dai, Hui-mei; Mei, Ying; Wang, Wei; Deng, Hui; Wang, Feng

    2017-01-01

    The Mingantu Ultrawide Spectral Radioheliograph (MUSER) has entered a test observation stage. With the data acquisition and storage system built, it is urgent to automatically flag and eliminate abnormal visibility data so as to improve imaging quality. In this paper, based on the observational records, we create a credible visibility set and then obtain a corresponding flagging model for visibility data using the support vector machine (SVM) technique. The results show that the SVM is a robust approach to flagging MUSER visibility data and can attain an accuracy of about 86%. Moreover, the method is not affected by solar activity such as flare eruptions.
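    The flagging model described in the abstract is an SVM trained on credible visibility records. As a hedged illustration (the feature set, the synthetic clusters, and all thresholds below are assumptions for the sketch, not MUSER's actual pipeline), a linear SVM can be trained by Pegasos-style stochastic sub-gradient descent to separate clean visibility segments from interference-like ones:

```python
import random

random.seed(0)

# Synthetic visibility features: (mean amplitude, variance) of a segment.
# "Good" records cluster near (1.0, 0.1); interference-like records show
# inflated amplitude and variance. Labels: +1 = keep, -1 = flag.
def make_sample(good):
    if good:
        return ([random.gauss(1.0, 0.1), random.gauss(0.1, 0.02)], 1)
    return ([random.gauss(3.0, 0.5), random.gauss(0.8, 0.1)], -1)

data = [make_sample(i % 2 == 0) for i in range(400)]

# Pegasos-style stochastic sub-gradient training of a linear SVM
# (hinge loss + L2 regularisation), with a damped learning-rate schedule.
w, b, lam = [0.0, 0.0], 0.0, 0.01
for t in range(1, 5001):
    x, y = random.choice(data)
    eta = 1.0 / (lam * (t + 50))
    margin = y * (w[0] * x[0] + w[1] * x[1] + b)
    for j in range(2):
        w[j] -= eta * lam * w[j]          # shrink (regularisation)
        if margin < 1:
            w[j] += eta * y * x[j]        # hinge-loss sub-gradient step
    if margin < 1:
        b += eta * y

def flag(x):
    """Return +1 (keep) or -1 (flag) for a feature vector x."""
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) >= 0 else -1

acc = sum(flag(x) == y for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

    In practice one would use a library SVM with a nonlinear kernel and features derived from real observational records; the point here is only the train-then-flag workflow.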

  12. New Observations of UV Emissions from Europa

    NASA Technical Reports Server (NTRS)

    McGrath, Melissa; Sparks, William

    2009-01-01

    The recent top prioritization of the Europa Jupiter System Mission (EJSM) for the next outer solar system flagship mission is refocusing attention on Europa and the other Galilean satellites and their contextual environments in the Jupiter system. Surface sputtering by magnetospheric plasma generates a tenuous atmosphere for Europa, dominated by O2 gas. This tenuous gas is in turn excited by plasma electrons, producing ultraviolet and visible emissions. Two sets of imaging observations have been published to date: UV images from the Hubble Space Telescope, and visible eclipse images from Cassini. Three additional sets of HST UV observations were acquired in February 2007, April 2007, and June 2009. The signal-to-noise ratio in these data is not high; however, given the paucity of data and their increasing importance for EJSM planning, we have attempted to extract as much new information as possible from them. This talk will summarize our analysis to date and discuss the results in terms of existing models, which attempt to explain the image morphology either in terms of the underlying source production and loss processes, or in terms of the plasma interaction with the exosphere.

  13. A compact bio-inspired visible/NIR imager for image-guided surgery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gao, Shengkui; Garcia, Missael; Edmiston, Chris; York, Timothy; Marinov, Radoslav; Mondal, Suman B.; Zhu, Nan; Sudlow, Gail P.; Akers, Walter J.; Margenthaler, Julie A.; Liang, Rongguang; Pepino, Marta; Achilefu, Samuel; Gruev, Viktor

    2016-03-01

    Inspired by the visual system of the morpho butterfly, we have designed, fabricated, tested, and clinically translated an ultra-sensitive, lightweight, and compact imaging sensor capable of simultaneously capturing near-infrared (NIR) and visible spectrum information. The visual system of the morpho butterfly combines photosensitive cells with spectral filters at the receptor level. The spectral filters are realized by alternating layers of high and low dielectric constant, such as air and cytoplasm. We have successfully mimicked this concept by integrating pixelated spectral filters, realized by alternating silicon dioxide and silicon nitride layers, with an array of CCD detectors. There are four different types of pixelated spectral filters in the imaging plane: red, green, blue, and NIR. The high optical density (OD) of all spectral filters (OD > 4) allows for efficient rejection of photons from unwanted bands. The single imaging chip weighs 20 grams with a form factor of 5 mm by 5 mm. The imaging camera is integrated with a goggle display system. A tumor-targeted agent, LS301, was used to identify all spontaneous tumors in a transgenic PyMT murine model of breast cancer. The imaging system achieved a sensitivity of 98% and a selectivity of 95%. We also used our imaging sensor to locate sentinel lymph nodes (SLNs) in patients with breast cancer using an indocyanine green tracer. The surgeon was able to identify 100% of SLNs when using our bio-inspired imaging system, compared to 93% when using information from the lymphotropic dye and 96% when using information from the radioactive tracer.

  14. First Dodo Trench with White Layer Visible in Dig Area

    NASA Technical Reports Server (NTRS)

    2008-01-01

    These color images were taken by NASA's Phoenix Mars Lander's Surface Stereo Imager on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). The images of the trench show a white layer that was uncovered by the Robotic Arm (RA) scoop and is now visible in the wall of the trench. This trench was the first one dug by the RA to characterize the Martian soil and plan the digging strategy.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  15. Research and design of an optical system of biochemical analyzer based on the narrow-band pass filter

    NASA Astrophysics Data System (ADS)

    Xiao, Ze-xin; Chen, Kuan

    2008-03-01

    The biochemical analyzer is one of the important instruments in clinical diagnosis, and its optical system is a key component. The operation of this optical system can be regarded as three stages: first, polychromatic light is converted into monochromatic light; second, the monochromatic light signal, which carries information about the measured sample, is converted into an electrical signal by a photoelectric detector; and third, the control system sends this signal to the data-processing system. Generally, three types of monochromators are used: prisms, diffraction gratings, and narrow-band-pass filters; of these, the narrow-band-pass filter is widely used in semi-automatic biochemical analyzers. Analysis of a biochemical analyzer based on the narrow-band-pass filter shows that its optical system has three features: the optical path forms a non-imaging system; the system covers a wide spectral region containing both visible and ultraviolet light; and it is a small-aperture, small-field monochromatic system. The design goals are therefore: (1) low transmission loss of luminous energy in the system; (2) efficient coupling of luminous energy to the detector, achieved mainly by correcting spherical aberration. Practice suggests the following image-quality criteria: (1) the blur-circle diameter should equal 125% of the effective width of a receiver pixel, and 80% of the energy of a point target should fall within the effective pixel width inside this blur circle; (2) evaluated by MTF, the value at a spatial frequency of 20 lp/mm should be no lower than 0.6.
    The optical system must accommodate a wide spectrum spanning ultraviolet and visible light, but with defocus optimization the detector image plane can only suit the majority of the visible spectrum, leaving a large image-plane shift for violet and ultraviolet light. Traditional biochemical-analyzer optical designs do not fully consider this point; the authors introduce an effective image-plane compensation measure that greatly increases the reception efficiency for violet and ultraviolet light.
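    The MTF criterion above (MTF of at least 0.6 at 20 lp/mm) can be checked numerically from a line-spread function. A minimal sketch, assuming a hypothetical Gaussian LSF of 4 um width rather than any real instrument data:

```python
import math

# Hypothetical Gaussian line-spread function (LSF), sampled at 1 um pitch.
# The 4 um width is an illustrative assumption, not a measured value.
pitch_mm = 0.001                 # sample spacing: 1 um
sigma_mm = 0.004                 # assumed LSF width: 4 um
n = 512
lsf = [math.exp(-0.5 * ((i - n // 2) * pitch_mm / sigma_mm) ** 2)
       for i in range(n)]

def mtf_at(freq_lp_mm):
    """MTF = |Fourier transform of the LSF|, normalised to 1 at DC."""
    re = sum(v * math.cos(2 * math.pi * freq_lp_mm * (i - n // 2) * pitch_mm)
             for i, v in enumerate(lsf))
    im = sum(v * math.sin(2 * math.pi * freq_lp_mm * (i - n // 2) * pitch_mm)
             for i, v in enumerate(lsf))
    return math.hypot(re, im) / sum(lsf)

# Check the design criterion: MTF at 20 lp/mm should be at least 0.6.
print(f"MTF @ 20 lp/mm: {mtf_at(20):.3f}")   # ~0.88 for this assumed LSF
```

    For a Gaussian LSF this matches the closed form exp(-2 pi^2 sigma^2 f^2), so the assumed 4 um blur comfortably passes the 0.6 criterion at 20 lp/mm.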

  16. Jovian Lightning and Moonlit Clouds

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Jovian lightning and moonlit clouds. These two images, taken 75 minutes apart, show lightning storms on the night side of Jupiter along with clouds dimly lit by moonlight from Io, Jupiter's closest moon. The images were taken in visible light and are displayed in shades of red. The images used an exposure time of about one minute, and were taken when the spacecraft was on the opposite side of Jupiter from the Earth and Sun. Bright storms are present at two latitudes in the left image, and at three latitudes in the right image. Each storm was made visible by multiple lightning strikes during the exposure. Other Galileo images were deliberately scanned from east to west in order to separate individual flashes. The images show that Jovian and terrestrial lightning storms have similar flash rates, but that Jovian lightning strikes are a few orders of magnitude brighter in visible light.

    The moonlight from Io allows the lightning storms to be correlated with visible cloud features. The latitude bands where the storms are seen seem to coincide with the 'disturbed regions' in daylight images, where short-lived chaotic motions push clouds to high altitudes, much like thunderstorms on Earth. The storms in these images are roughly one to two thousand kilometers across, while individual flashes appear hundreds of kilometers across. The lightning probably originates from the deep water cloud layer and illuminates a large region of the visible ammonia cloud layer from 100 kilometers below it.

    There are several small light and dark patches that are artifacts of data compression. North is at the top of the picture. The images span approximately 50 degrees in latitude and longitude. The lower edges of the images are aligned with the equator. The images were taken on October 5th and 6th, 1997 at a range of 6.6 million kilometers by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  17. Imaging reconstruction for infrared interferometry: first images of YSOs environment

    NASA Astrophysics Data System (ADS)

    Renard, S.; Malbet, F.; Thiébaut, E.; Berger, J.-P.

    2008-07-01

    The study of protoplanetary disks, where planets are believed to form, will help us understand the formation of our own Solar System. To observe these objects at the milli-arcsecond scale around T Tauri, FU Ori, or Herbig Ae/Be stars, infrared interferometry provides the required performance. However, the only information obtained so far consists of scarce visibility measurements that are tested directly against models. With the advent of recent interferometers, one can foresee obtaining images reconstructed independently of models. In fact, several interferometers, including IOTA and AMBER on the VLTI, already provide the possibility of combining three telescopes at once and thus of obtaining the data necessary to reconstruct images. In this paper, we describe the use of MIRA, an image reconstruction algorithm developed by E. Thiébaut for optical interferometry data (squared visibilities and closure phases). We also foresee using the spectral information in AMBER data to constrain the reconstructed images even better. We describe the use of MIRA to reconstruct images of young stellar objects from actual data, in particular the multiple system GW Orionis (IOTA, 2004), and discuss the difficulties encountered.

  18. Multiscale optical imaging of rare-earth-doped nanocomposites in a small animal model

    NASA Astrophysics Data System (ADS)

    Higgins, Laura M.; Ganapathy, Vidya; Kantamneni, Harini; Zhao, Xinyu; Sheng, Yang; Tan, Mei-Chee; Roth, Charles M.; Riman, Richard E.; Moghe, Prabhas V.; Pierce, Mark C.

    2018-03-01

    Rare-earth-doped nanocomposites have appealing optical properties for use as biomedical contrast agents, but few systems exist for imaging these materials. We describe the design and characterization of (i) a preclinical system for whole animal in vivo imaging and (ii) an integrated optical coherence tomography/confocal microscopy system for high-resolution imaging of ex vivo tissues. We demonstrate these systems by administering erbium-doped nanocomposites to a murine model of metastatic breast cancer. Short-wave infrared emissions were detected in vivo and in whole organ imaging ex vivo. Visible upconversion emissions and tissue autofluorescence were imaged in biopsy specimens, alongside optical coherence tomography imaging of tissue microstructure. We anticipate that this work will provide guidance for researchers seeking to image these nanomaterials across a wide range of biological models.

  19. Martian Terrain, Unfurled Rover Ramps & Deflated Airbags

    NASA Image and Video Library

    1997-07-05

    The Imager for Mars Pathfinder (IMP) took this image of surrounding terrain in the mid-morning on Mars (2:30 PM Pacific Daylight Time) earlier today. Part of the small rover, Sojourner, is visible on the left side of the picture. The tan cylinder to the right of the rover is one of two rolled-up ramps by which the rover will descend to the ground. The white, billowy material in the center of the picture is part of the airbag system. Many rocks of different shapes and sizes are visible between the lander and the horizon. Two hills are visible on the horizon. The notch on the left side of the leftmost conical hill is an artifact of the processing of this picture. http://photojournal.jpl.nasa.gov/catalog/PIA00613

  20. Imaging of Biological Tissues by Visible Light CDI

    NASA Astrophysics Data System (ADS)

    Karpov, Dmitry; Dos Santos Rolo, Tomy; Rich, Hannah; Fohtung, Edwin

    Recent advances in the use of synchrotron and X-ray free-electron laser (XFEL) based coherent diffraction imaging (CDI), with applications to materials science and medicine, have proved the technique to be efficient in recovering information about samples encoded in the phase domain. The current state-of-the-art reconstruction algorithms are transferable to optical frequencies, which makes laser sources a reasonable milestone both in technique development and in applications. Here we present first results from a table-top laser CDI system for imaging biological tissues, describe the development of reconstruction algorithms, and discuss complementary approaches to improving data quality that are applicable at visible-light frequencies. We demonstrate the applicability of the developed methodology to a wide class of soft bio-matter and condensed-matter systems. This project is funded by DOD-AFOSR under Award No. FA9550-14-1-0363 and the LANSCE Professorship at LANL.

  1. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    PubMed

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and then transferring the gradients of the corresponding visible image to the result. Plain gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by adding intensity from the visible image to balance the intensity between the infrared and visible images. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved within the framework of the alternating direction method of multipliers (ADMM). Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, than gradient transfer and most state-of-the-art methods in both qualitative and quantitative tests.
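    The fusion objective described above keeps the infrared intensity while transferring the visible image's gradients. As a hedged 1-D toy (the paper solves an l1-l1-TV problem by ADMM; this sketch instead minimises a smoothed, squared-loss surrogate by plain gradient descent, and the signals and weight lam are illustrative assumptions):

```python
import math

# 1-D toy of the fusion idea: keep infrared intensity, transfer the
# visible signal's gradients. Surrogate objective minimised here:
#   sum_i (f_i - ir_i)^2 + lam * sum_i ((f_{i+1}-f_i) - (vis_{i+1}-vis_i))^2
n = 20
ir = [1.0] * 10 + [3.0] * 10                  # step: a "thermal target"
vis = [math.sin(0.6 * i) for i in range(n)]   # fine texture
lam = 2.0

f = ir[:]                                     # start from the infrared signal
for _ in range(2000):
    g = [2.0 * (f[i] - ir[i]) for i in range(n)]    # intensity fidelity (IR)
    for i in range(n - 1):
        d = (f[i + 1] - f[i]) - (vis[i + 1] - vis[i])
        g[i + 1] += 2.0 * lam * d             # gradient fidelity (visible)
        g[i] -= 2.0 * lam * d
    for i in range(n):
        f[i] -= 0.05 * g[i]                   # small step keeps descent stable

print(f"fused range: {min(f):.2f} .. {max(f):.2f}")
```

    At the optimum the fused signal keeps the infrared step (and, since the difference operator annihilates constants, exactly the infrared mean) while picking up the visible texture; the l1 norms in the real method serve the same roles but preserve edges more sharply.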

  2. Two-dimensional vacuum ultraviolet images in different MHD events on the EAST tokamak

    NASA Astrophysics Data System (ADS)

    Zhijun, WANG; Xiang, GAO; Tingfeng, MING; Yumin, WANG; Fan, ZHOU; Feifei, LONG; Qing, ZHUANG; EAST Team

    2018-02-01

    A high-speed vacuum ultraviolet (VUV) imaging telescope system has been developed to measure the edge plasma emission (including the pedestal region) in the Experimental Advanced Superconducting Tokamak (EAST). The key optics of the high-speed VUV imaging system consist of three parts: an inverse Schwarzschild-type telescope, a micro-channel plate (MCP), and a visible-imaging high-speed camera. The VUV imaging system has been operated routinely in the 2016 EAST experimental campaign. The dynamics of two-dimensional (2D) images of magnetohydrodynamic (MHD) instabilities, such as edge localized modes (ELMs), tearing-like modes, and disruptions, have been observed using this system. The related VUV images are presented in this paper, indicating that the VUV imaging system is a promising tool that can be applied successfully under various plasma conditions.

  3. Radiation dose reduction in chest radiography using a flat-panel amorphous silicon detector.

    PubMed

    Hosch, W P; Fink, C; Radeleff, B; Kampschulte, A; Kauffmann, G W; Hansmann, J

    2002-10-01

    The aim of this study was to evaluate the image quality and the potential for radiation dose reduction of a digital flat-panel amorphous silicon detector radiography system. Using flat-panel technology, radiographs of an anthropomorphic thorax phantom were taken with a range of technical parameters (125 kV, 200 mA, and 5, 4, 3.2, 2, 1, 0.5, and 0.25 mAs), equivalent to radiation doses of 332, 263, 209, 127, 58.7, 29, and 14 microGy, respectively. These images were compared to radiographs obtained with a conventional film-screen radiography system at 125 kV, 200 mA, and 5 mAs (equivalent to 252 microGy), which served as the reference. Three observers independently evaluated the visibility of simulated rounded lesions and anatomical structures, comparing printed films from the flat-panel amorphous silicon detector and films from the conventional x-ray system. With flat-panel technology, the visibility of rounded lesions and normal anatomical structures at 5, 4, and 3.2 mAs was superior to the conventional film-screen radiography system (P <= 0.0001). At 2 mAs, the improvement was only marginal (P = 0.19). At 1.0, 0.5, and 0.25 mAs, the visibility of simulated rounded lesions was worse (P <= 0.004). For fine lung parenchymal structures, the flat-panel amorphous silicon detector showed improvement at all exposure levels down to 2 mAs and equality at 1 mAs. Compared to a conventional x-ray film system, the flat-panel amorphous silicon detector demonstrated improved image quality and the possibility of reducing the radiation dose by 50% without loss of image quality.

  4. Visible and Near-IR Imaging of Giant Planets: Outer Manifestations of Deeper Secrets

    NASA Astrophysics Data System (ADS)

    Hammel, Heidi B.

    1996-09-01

    Visible and near-infrared imaging of the giant planets -- Jupiter, Saturn, Uranus, and Neptune -- probes the outermost layers of clouds in these gaseous atmospheres. Not only are the images beautiful and striking in their color and diversity of detail, they also provide quantitative clues to the dynamical and chemical processes taking place both at the cloud tops and deeper in the interior: zonal wind profiles can be extracted; wavelength-dependent center-to-limb brightness variations yield valuable data for modeling vertical aerosol structure; the presence of planetary-scale atmospheric waves can sometimes be deduced; variations of cloud color and brightness with latitude provide insight into the underlying mechanisms driving circulation; development and evolution of discrete atmospheric features trace both exogenic and endogenic events. During the 1980's, our understanding of the giant planets was revolutionized by detailed visible-wavelength images taken by the Voyager spacecraft of these planets' atmospheres. However, those images were static: brief snapshots in time of four complex and dynamic atmospheric systems. In short, those images no longer represent the current appearance of these planets. Recently, our knowledge of the atmospheres of the gas giant planets has undergone major new advances, due in part to the excellent imaging capability and longer-term temporal sampling of the Hubble Space Telescope (HST) and the Galileo Mission to Jupiter. In this talk, I provide an update on our current understanding of the gas giants based on recent visible and near-infrared imaging, highlighting results from the collision of Comet Shoemaker-Levy 9 with Jupiter, Saturn's White Spots, intriguing changes in the atmosphere of Uranus, and Neptune's peripatetic clouds.

  5. Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.

    PubMed

    Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung

    2018-03-19

    We demonstrate a non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary-metal-oxide-semiconductor (CMOS) camera. Experimental bit-error rate (BER) measurements under different camera ISO values and different transmission distances are evaluated. Here, we also experimentally reveal that the rolling shutter effect- (RSE) based VLC system cannot work at long distance transmission, and the under-sampled modulation- (USM) based VLC system is a good choice.

  6. Single underwater image enhancement based on color cast removal and visibility restoration

    NASA Astrophysics Data System (ADS)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken under underwater condition usually have color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on the optimization theory. Then, based on the minimum information loss principle and inherent relationship of medium transmission maps of three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.
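    The paper's colour-cast removal is optimisation-based, and the abstract does not give its details; as a hedged stand-in for that step, the sketch below shows the classic grey-world baseline, which simply rescales each channel so its mean matches the global mean (pixel values are made up):

```python
# Grey-world white balance: rescale each channel so its mean equals the
# global mean. A much simpler stand-in for the paper's optimisation-based
# colour-cast removal; the pixel values below are made up.
def gray_world(pixels):
    """pixels: list of (r, g, b) tuples; returns colour-balanced pixels."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3.0
    return [tuple(p[c] * target / means[c] for c in range(3)) for p in pixels]

# A bluish underwater cast: the blue mean is far above red and green.
img = [(30.0, 80.0, 160.0)] * 4 + [(50.0, 100.0, 180.0)] * 4
balanced = gray_world(img)
print([round(sum(p[c] for p in balanced) / len(balanced), 1) for c in range(3)])
# -> [100.0, 100.0, 100.0]: all channel means now coincide
```

    A visibility-restoration step like the paper's would then operate on the colour-corrected image using the estimated transmission maps.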

  7. Using advertisement light-panel and CMOS image sensor with frequency-shift-keying for visible light communication.

    PubMed

    Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liao, Xin-Lan; Lin, Kun-Hsien; Wang, Yi-Chang; Chen, Yi-Yuan

    2018-05-14

    A frequency-shift-keying (FSK) visible light communication (VLC) system is proposed and demonstrated using advertisement light-panel as transmitter and mobile-phone image sensor as receiver. The developed application program (APP) in mobile-phone can retrieve the rolling shutter effect (RSE) pattern produced by the FSK VLC signal effectively. Here, we also define noise-ratio value (NRV) to evaluate the contrast of different advertisements displayed on the light-panel. Both mobile-phones under test can achieve success rate > 96% even when the transmission distance is up to 200 cm and the NRVs are low.

  8. High visibility temporal ghost imaging with classical light

    NASA Astrophysics Data System (ADS)

    Liu, Jianbin; Wang, Jingjing; Chen, Hui; Zheng, Huaibin; Liu, Yanyan; Zhou, Yu; Li, Fu-li; Xu, Zhuo

    2018-03-01

    High-visibility temporal ghost imaging with classical light is possible when superbunching pseudothermal light is employed. In numerical simulations, the visibility of temporal ghost imaging with pseudothermal light, equal to (4.7 ± 0.2)%, can be increased to (75 ± 8)% in the same scheme with superbunching pseudothermal light. The reasons why the retrieved images differ for superbunching pseudothermal light with different values of the degree of second-order coherence are discussed in detail. It is concluded that a high-visibility, high-quality temporal ghost image can be obtained by collecting a sufficient number of data points. The results are helpful for understanding the difference between ghost imaging with classical light and with entangled photon pairs. Superbunching pseudothermal light can be employed to improve image quality in ghost imaging applications.
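    The low classical-light figure quoted above reflects the limited contrast of ordinary thermal-light ghost imaging. A hedged 1-D computational sketch (the object, pixel count, and shot count are arbitrary assumptions) reproduces the effect: correlating bucket values with exponential-intensity speckle patterns yields a ghost image whose visibility (Cmax - Cmin)/(Cmax + Cmin) is only a few percent:

```python
import random

random.seed(1)

# Toy 1-D computational ghost imaging with thermal-like light. Random
# speckle patterns illuminate a slit; a bucket detector records the total
# transmitted intensity; correlating bucket values with the patterns
# recovers the slit. Pixel count, slit position, and shot count are
# arbitrary assumptions for the sketch.
npix, nshots = 32, 20000
obj = [1.0 if 10 <= i < 22 else 0.0 for i in range(npix)]   # the slit

corr = [0.0] * npix
for _ in range(nshots):
    # Exponentially distributed intensities mimic thermal speckle (g2 = 2).
    pattern = [random.expovariate(1.0) for _ in range(npix)]
    bucket = sum(p * t for p, t in zip(pattern, obj))
    for i in range(npix):
        corr[i] += bucket * pattern[i] / nshots

# Visibility of the raw correlation image: a few percent for thermal
# light, which is what superbunching light is meant to improve.
vis = (max(corr) - min(corr)) / (max(corr) + min(corr))
print(f"ghost-image visibility: {vis:.3f}")
```

    Replacing the exponential intensity statistics with a heavier-tailed (superbunched) distribution raises the in-slit correlation relative to the background, which is the mechanism behind the visibility gain the abstract reports.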

  9. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
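    The hybrid-feature construction above, concatenating a handcrafted texture descriptor with a learned embedding before the SVM, can be sketched as follows. The 8-neighbour LBP histogram is the standard descriptor; the "deep" part is a placeholder vector, since the paper's CNN is out of scope here, and its 128-dimensional size is an assumption:

```python
import random

# Hybrid features: concatenate a handcrafted LBP histogram with a deep
# embedding before classification by an SVM. The 8-neighbour LBP is the
# standard descriptor; the "deep" part below is a placeholder vector.
def lbp_histogram(img):
    """Normalised 256-bin histogram of 8-neighbour LBP codes."""
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    total = (h - 2) * (w - 2)
    return [v / total for v in hist]

random.seed(2)
face = [[random.randint(0, 255) for _ in range(16)] for _ in range(16)]
handcrafted = lbp_histogram(face)        # 256-d texture descriptor
deep = [0.0] * 128                       # placeholder CNN embedding
hybrid = handcrafted + deep              # the vector handed to the SVM
print(len(hybrid))                       # -> 384
```

    The paper's multi-level LBP additionally pools codes at several scales, and the real CNN embedding would replace the zero placeholder; the concatenation-then-SVM step is the part shown here.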

  10. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed with expert knowledge, such as Gabor filters, local binary patterns (LBP), local ternary patterns (LTP), and histograms of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training feature extractors that can complement handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate real from presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than either type alone. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
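
    The hybrid-feature construction described above can be sketched in a few lines. This is a simplified illustration, not the authors' code: `deep_fn` is a hypothetical stand-in for the trained CNN embedding, and a plain 8-neighbor LBP computed at several radii stands in for the full MLBP descriptor.

```python
import numpy as np

def lbp_histogram(img, radius=1):
    # plain 8-neighbor LBP at a given radius: threshold each neighbor
    # against the center pixel and pack the results into an 8-bit code
    h, w = img.shape
    c = img[radius:h - radius, radius:w - radius]
    code = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius), (0, radius),
               (radius, radius), (radius, 0), (radius, -radius), (0, -radius)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        code |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def hybrid_features(img, deep_fn, radii=(1, 2, 3)):
    # hybrid feature = CNN embedding concatenated with multi-level LBP
    # histograms; the combined vector is what an SVM would classify
    mlbp = np.concatenate([lbp_histogram(img, r) for r in radii])
    return np.concatenate([np.asarray(deep_fn(img)), mlbp])
```

    The concatenated vector would then be fed to an SVM (e.g., a linear or RBF kernel) trained on real and attack samples.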

  11. Metasurface optics for full-color computational imaging.

    PubMed

    Colburn, Shane; Zhan, Alan; Majumdar, Arka

    2018-02-01

    Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
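
    Because the metalens exhibits a spectrally invariant point spread function, a single digital filter suffices for reconstruction. The paper does not specify the exact filter, so the sketch below uses a standard Wiener deconvolution, a common choice for this kind of single-PSF computational reconstruction:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    # one digital filter for the whole band: H* / (|H|^2 + NSR),
    # applied in the Fourier domain, where H is the PSF's transfer
    # function and NSR is a noise-to-signal regularizer
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

    With a spectrally invariant PSF, the same `psf` (and hence the same filter) can be reused for every color channel of a white-light capture.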

  12. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.

    PubMed

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.

  13. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

    PubMed Central

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST’s measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform. PMID:26401431

  14. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    PubMed

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-06-01

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
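
    The core of such an error estimate, comparing where the overlay places a landmark against where the surgeon sees it in the live video, reduces to simple 2-D geometry. A minimal sketch for point and line features (function names are hypothetical, not the SmartLiver API):

```python
import numpy as np

def rms_overlay_error(projected_pts, video_pts):
    # RMS of pixel distances between projected model landmarks and the
    # same landmarks identified in the live laparoscopic video
    d = np.linalg.norm(np.asarray(projected_pts, float) -
                       np.asarray(video_pts, float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def point_to_line_error(p, a, b):
    # for line features: distance from a projected point p to the
    # line through video landmarks a and b
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    t = np.dot(p - a, b - a) / np.dot(b - a, b - a)
    return float(np.linalg.norm(p - (a + t * (b - a))))
```

    The phantom validation in the paper is what justifies reading these projected surface errors as a predictor of subsurface target registration error.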

  15. Expansive Northern Volcanic Plains

    NASA Image and Video Library

    2015-04-16

    Mercury's northern region is dominated by expansive smooth plains, created by huge amounts of volcanic material that flooded across Mercury's surface in the past, as seen by NASA's MESSENGER spacecraft. The volcanic lava flows buried craters, leaving only traces of their rims visible. Such craters are called ghost craters, and many are visible in this image, including a large one near the center. Wrinkle ridges cross this scene, and small troughs, formed as the lava cooled, are visible within ghost craters. The northern plains are described as smooth because their surface has fewer impact craters and thus has been less battered by such events. This indicates that these volcanic plains are younger than Mercury's rougher surfaces. Instrument: Mercury Dual Imaging System (MDIS) Center Latitude: 60.31° N Center Longitude: 36.87° E Scale: The large ghost crater at the center of the image is approximately 103 kilometers (64 miles) in diameter http://photojournal.jpl.nasa.gov/catalog/PIA19415

  16. Research on range-gated laser active imaging seeker

    NASA Astrophysics Data System (ADS)

    You, Mu; Wang, PengHui; Tan, DongJie

    2013-09-01

    Compared with other imaging methods such as millimeter-wave imaging, infrared imaging, and visible-light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, the most important data for autonomous target acquisition. It can be widely used in military fields such as radar, guidance, and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system consists of two major parts. The first is the laser imaging system, which uses a negative lens to diverge the light from a pulsed laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The second is the stabilization gimbal, designed as a structure rotatable in both azimuth and elevation. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on a diode-pumped solid-state laser that is passively Q-switched at 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with spectral sensitivity limited to wavelengths below 900 nm. The receiver couples the image intensifier tube's microchannel plate into a high-sensitivity charge-coupled device camera. Images have been taken at ranges over one kilometer and can be taken at much longer ranges in better weather. The image frame frequency can be changed according to guidance requirements, with a modifiable range gate. The instantaneous field of view of the system was found to be 2 × 2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the field. Two different kinds of buildings, located at ranges from 200 m up to 1000 m, were chosen as targets. To simulate the dynamic change in range between missile and target, the seeker system was placed on a truck driven along a road at a controlled speed. The test results show qualified images and good performance of the seeker system.

  17. Improved in-plane visibility of tumors using breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Ruschin, Mark; Timberg, Pontus; Svahn, Tony; Andersson, Ingvar; Hemdal, Bengt; Mattsson, Sören; Båth, Magnus; Tingberg, Anders

    2007-03-01

    The purpose of this work was to evaluate and compare the visibility of tumors in digital mammography (DM) and breast tomosynthesis (BT) images. Images of the same women were acquired on both a DM system (Mammomat Novation, Siemens) and a BT prototype system adapted from the same type of DM system. Simulated 3D tumors (average dimension: 8.4 mm x 6.6 mm x 5 mm) were projected and added to each DM image as well as each BT projection image prior to 3D reconstruction. The same beam quality and approximately the same total absorbed dose were used for each breast image acquisition on both systems. Two simulated tumors were added to each of thirty breast scans, yielding sixty cases. A series of 4-alternative forced choice (4-AFC) human observer performance experiments was conducted in order to determine what projected tumor signal intensity in the DM images would be needed to achieve the same detectability as in the reconstructed BT images. Nine observers participated. For the BT experiment, when the tumor signal intensity on the central projection was 0.010, the mean percentage of correct responses (PC) was measured to be 81.5%, which converts to a detectability index (d') of 1.96. For the DM experiments, the same detectability was achieved at a signal intensity determined to be 0.038. Equivalent tumor detection in BT images was thus achieved at roughly one-quarter the projected signal intensity required in DM images, indicating that the use of BT may lead to earlier detection of breast cancer.
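
    The PC-to-d' conversion in 4-AFC experiments follows from the equal-variance Gaussian observer model: a trial is correct when the signal interval exceeds all three noise intervals. A numerical sketch (not the authors' code) reproduces the reported pairing of PC = 81.5% with d' = 1.96:

```python
import numpy as np
from math import erf, sqrt

_Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))  # normal CDF

def _phi(x):
    # standard normal pdf
    return np.exp(-x * x / 2.0) / np.sqrt(2.0 * np.pi)

def mafc_pc(dprime, m=4):
    # proportion correct in an M-AFC task: integrate P(signal sample = x)
    # times P(all m-1 noise samples < x) over x, with x shifted by d'
    z = np.linspace(-8.0, 8.0, 4001)
    dz = z[1] - z[0]
    return float(np.sum(_phi(z) * _Phi(z + dprime) ** (m - 1)) * dz)
```

    Inverting this relation numerically (e.g., by bisection on `mafc_pc`) gives d' from a measured PC; at d' = 0 the model correctly predicts chance performance of 25% for four alternatives.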

  18. Multispectral laser-induced fluorescence imaging system for large biological samples

    NASA Astrophysics Data System (ADS)

    Kim, Moon S.; Lefcourt, Alan M.; Chen, Yud-Ren

    2003-07-01

    A laser-induced fluorescence imaging system developed to capture multispectral fluorescence emission images simultaneously from a relatively large target object is described. With an expanded, 355-nm Nd:YAG laser as the excitation source, the system captures fluorescence emission images in the blue, green, red, and far-red regions of the spectrum centered at 450, 550, 678, and 730 nm, respectively, from a 30-cm-diameter target area in ambient light. Images of apples and of pork meat artificially contaminated with diluted animal feces have demonstrated the versatility of fluorescence imaging techniques for potential applications in food safety inspection. Regions of contamination, including sites that were not readily visible to the human eye, could easily be identified from the images.

  19. Low-cost panoramic infrared surveillance system

    NASA Astrophysics Data System (ADS)

    Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George

    2017-05-01

    A nighttime surveillance concept consisting of a single-surface omnidirectional mirror assembly and an uncooled vanadium oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offer significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. A description of the method by which these images are analyzed is given, along with a presentation of these results side-by-side with the output of the 50 mm LWIR imager and a panoramic visible-light imager. Finally, a discussion of the concept and its future development is given.
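
    The polar-to-panorama unwrapping and background subtraction steps can be sketched as follows. This is a nearest-neighbor illustration under assumed mirror geometry (the `center`, `r_min`, `r_max` parameters are hypothetical calibration values, not from the paper):

```python
import numpy as np

def unwrap_panorama(img, center, r_min, r_max, out_h=64, out_w=360):
    # map each (elevation row, azimuth column) of the output panorama back
    # to a polar (radius, angle) position in the raw omnidirectional image
    r = r_min + (r_max - r_min) * np.arange(out_h) / (out_h - 1)
    theta = 2.0 * np.pi * np.arange(out_w) / out_w
    y = np.rint(center[0] + r[:, None] * np.sin(theta)[None, :]).astype(int)
    x = np.rint(center[1] + r[:, None] * np.cos(theta)[None, :]).astype(int)
    y = np.clip(y, 0, img.shape[0] - 1)
    x = np.clip(x, 0, img.shape[1] - 1)
    return img[y, x]

def motion_map(frame, background):
    # digital background subtraction: moving heat sources stand out
    # against the static clutter estimate
    return np.abs(frame.astype(float) - background.astype(float))
```

    A production version would interpolate rather than round, and would calibrate the radius-to-elevation mapping from the mirror profile.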

  20. Investigation of an acoustical holography system for real-time imaging

    NASA Astrophysics Data System (ADS)

    Fecht, Barbara A.; Andre, Michael P.; Garlick, George F.; Shelby, Ronald L.; Shelby, Jerod O.; Lehman, Constance D.

    1998-07-01

    A new prototype imaging system based on ultrasound transmission through the object of interest -- acoustical holography -- was developed which incorporates significant improvements in acoustical and optical design. This system is being evaluated for potential clinical application in the musculoskeletal system, interventional radiology, pediatrics, monitoring of tumor ablation, vascular imaging and breast imaging. System limiting resolution was estimated using a line-pair target with decreasing line thickness and equal separation. For a swept frequency beam from 2.6 - 3.0 MHz, the minimum resolution was 0.5 lp/mm. Apatite crystals were suspended in castor oil to approximate breast microcalcifications. Crystals from 0.425 - 1.18 mm in diameter were well resolved in the acoustic zoom mode. Needle visibility was examined with both a 14-gauge biopsy needle and a 0.6 mm needle. The needle tip was clearly visible throughout the dynamic imaging sequence as it was slowly inserted into a RMI tissue-equivalent breast biopsy phantom. A selection of human images was acquired in several volunteers: a 25 year-old female volunteer with normal breast tissue, a lateral view of the elbow joint showing muscle fascia and tendon insertions, and the superficial vessels in the forearm. Real-time video images of these studies will be presented. In all of these studies, conventional sonography was used for comparison. These preliminary investigations with the new prototype acoustical holography system showed favorable results in comparison to state-of-the-art pulse-echo ultrasound and demonstrate it to be suitable for further clinical study. The new patient interfaces will facilitate orthopedic soft tissue evaluation, study of superficial vascular structures and potentially breast imaging.

  1. Area beam equalization: optimization and performance of an automated prototype system for chest radiography.

    PubMed

    Xu, Tong; Shikhaliev, Polad M; Berenji, Gholam R; Tehranzadeh, Jamshid; Saremi, Farhood; Molloi, Sabee

    2004-04-01

    To evaluate the feasibility and performance of an x-ray beam equalization system for chest radiography using anthropomorphic phantoms. Area beam equalization involves the process of the initial unequalized image acquisition, attenuator thickness calculation, mask generation using a 16 x 16 piston array, and final equalized image acquisition. Chest radiographs of three different anthropomorphic phantoms were acquired with no beam equalization and equalization levels of 4.8, 11.3, and 21. Six radiologists evaluated the images by scoring them from 1-5 using 13 different criteria. The dose was calculated using the known attenuator material thickness and the mAs of the x-ray tube. The visibility of anatomic structures in the under-penetrated regions of the chest radiographs was shown to be significantly (P < .01) improved after beam equalization. An equalization level of 4.8 provided most of the improvements with moderate increases in patient dose and tube loading. Higher levels of beam equalization did not show much improvement in the visibility of anatomic structures in the under-penetrated regions. A moderate level of x-ray beam equalization in chest radiography is superior to both conventional radiographs and radiographs with high levels of beam equalization. X-ray beam equalization can significantly improve the visibility of anatomic structures in the under-penetrated regions while maintaining good image quality in the lung region.
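
    The attenuator-thickness step can be sketched from the Beer-Lambert law: given the unequalized image and a target exposure level, each piston's material thickness follows from I * exp(-mu * t) = target. A simplified sketch (the block-averaging down to the 16 x 16 piston array and the `mu` value are illustrative assumptions):

```python
import numpy as np

def piston_thicknesses(unequalized, target, mu, grid=16):
    # Beer-Lambert: t = ln(I / target) / mu per pixel, then average the
    # full-resolution map down to the piston-array resolution
    t = np.log(np.maximum(unequalized, 1e-6) / target) / mu
    t = np.clip(t, 0.0, None)          # pistons cannot remove material
    h, w = t.shape
    t = t[:h - h % grid, :w - w % grid]
    bh, bw = t.shape[0] // grid, t.shape[1] // grid
    return t.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
```

    Regions already at or below the target exposure get zero thickness, which is how the lung field is left unequalized while the under-penetrated mediastinum is opened up.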

  2. Note: Optics design of a periscope for the KSTAR visible inspection system with mitigated neutron damages on the camera

    NASA Astrophysics Data System (ADS)

    Lee, Kyuhang; Ko, Jinseok; Wi, Hanmin; Chung, Jinil; Seo, Hyeonjin; Jo, Jae Heung

    2018-06-01

    The visible TV system used in the Korea Superconducting Tokamak Advanced Research device has been equipped with a periscope to minimize the damage to its CCD pixels from neutron radiation. The periscope, more than 2.3 m in overall length, has been designed for the visible camera system with a semi-diagonal field of view as wide as 30° and an effective focal length as short as 5.57 mm. The design performance of the periscope includes a modulation transfer function greater than 0.25 at 68 cycles/mm with low distortion. The installed periscope system has delivered image quality as designed, comparable to that of its predecessor but with a far lower probability of neutron damage to the camera.

  3. In vivo imaging of inducible tyrosinase gene expression with an ultrasound array-based photoacoustic system

    NASA Astrophysics Data System (ADS)

    Harrison, Tyler; Paproski, Robert J.; Zemp, Roger J.

    2012-02-01

    Tyrosinase, a key enzyme in the production of melanin, has shown promise as a reporter of genetic activity. While green fluorescent protein has been used extensively in this capacity, it is limited in its ability to provide information deep in tissue at a reasonable resolution. As melanin is a strong absorber of light, it is possible to image gene expression using tyrosinase with photoacoustic imaging technologies, resulting in excellent resolutions at multiple-centimeter depths. While our previous work has focused on creating and imaging MCF-7 cells with doxycycline-controlled tyrosinase expression, we have now established the viability of these cells in a murine model. Using an array-based photoacoustic imaging system with 5 MHz center frequency, we capture interleaved ultrasound and photoacoustic images of tyrosinase-expressing MCF-7 tumors both in a tissue-mimicking phantom and in vivo. Images of both the tyrosinase-expressing tumor and a control tumor are presented as both coregistered ultrasound-photoacoustic B-scan images and 3-dimensional photoacoustic volumes created by mechanically scanning the transducer. We find that the tyrosinase-expressing tumor is visible with a signal level 12 dB greater than that of the control tumor in vivo. Phantom studies with excised tumors show that the tyrosinase-expressing tumor is visible at depths in excess of 2 cm, and have suggested that our imaging system is sensitive to a transfection rate of less than 1%.
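
    For reference, the 12 dB figure is an amplitude ratio between regions of interest; the conversion is the standard 20*log10 rule for amplitude quantities. A hypothetical helper (not the authors' code):

```python
import numpy as np

def contrast_db(signal_roi, reference_roi):
    # photoacoustic amplitude contrast between two regions of interest,
    # expressed in decibels (20*log10 for amplitude, not power)
    return float(20.0 * np.log10(np.mean(signal_roi) / np.mean(reference_roi)))
```

    A 12 dB difference therefore corresponds to roughly a 4x mean amplitude ratio between the tyrosinase-expressing and control tumors.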

  4. ARC-1986-A86-7041

    NASA Image and Video Library

    1986-01-24

    Range: 236,000 km (147,000 mi) Resolution: 33 km (20 mi) P-29525B/W This Voyager 2 image reveals a continuous distribution of small particles throughout the Uranus ring system. This unique geometry, the highest phase angle at which Voyager imaged the rings, allows us to see lanes of fine dust particles not visible from other viewing angles. All the previously known rings are visible. However, some of the brightest features in the image are bright dust lanes not previously seen. The combination of this unique geometry and a long, 96-second exposure allowed this spectacular observation, acquired through the clear filter of Voyager 2's wide-angle camera. The long exposure produced a noticeable, non-uniform smear, as well as streaks due to trailed stars.

  5. Multispectral remote sensing from unmanned aircraft: image processing workflows and applications for rangeland environments

    USDA-ARS?s Scientific Manuscript database

    Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...

  6. Performance Evaluations and Quality Validation System for Optical Gas Imaging Cameras That Visualize Fugitive Hydrocarbon Gas Emissions

    EPA Science Inventory

    Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...

  7. Nile Delta

    NASA Image and Video Library

    2013-06-19

    Urbanized areas of northern Egypt are visible amidst the deserts of Egypt. The image captured July 9-15, 2012 also shows the Nile River which provides life-sustaining water to the region. The image was created from the Visible-Infrared Imager/Radiometer Suite (VIIRS) instrument aboard the Suomi National Polar-orbiting Partnership or Suomi NPP satellite, a partnership between NASA and the National Oceanic and Atmospheric Administration, or NOAA. Credit: NASA/NOAA To read more go to: www.nasa.gov/mission_pages/NPP/news/vegetation.html

  8. Toward in vivo diagnosis of skin cancer using multimode imaging dermoscopy: (II) molecular mapping of highly pigmented lesions

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.

    2014-03-01

    We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near-infrared wavelengths for mapping the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements, including a melanocytic nevus and venous occlusion conditions, were investigated and compared with other ratiometric spectral imaging approaches. Access to the broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.

  9. Quality assessment of color images based on the measure of just noticeable color difference

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien; Hsu, Yun-Hsiang

    2014-01-01

    Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information of reproduced images. An accurate objective image quality assessment (IQA) method is expected to give results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of grayscale images to each of the three color channels of the color image, neglecting the correlation among the channels. In this paper, a metric for assessing the quality of color images is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility threshold of distortion inherent in each color pixel. With the estimated visibility thresholds, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
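
    The threshold-and-quantize idea can be sketched as follows. This is a much-simplified stand-in for the NBS-style perceptual error map: `delta_e` would come from CIEDE2000 and `jnd` from the VJNCD model, and the level boundaries here are illustrative, not the paper's:

```python
import numpy as np

def perceptual_distortion_score(delta_e, jnd, levels=(1.0, 2.0, 4.0, 8.0)):
    # normalize each pixel's color difference by its own visibility
    # threshold, quantize into coarse perceptibility categories
    # (0 = imperceptible), then average over the image
    ratio = np.asarray(delta_e, dtype=float) / np.asarray(jnd, dtype=float)
    return float(np.digitize(ratio, levels).mean())
```

    Distortion below the per-pixel threshold contributes nothing to the score, which is what distinguishes this family of metrics from plain mean color difference.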

  10. Bizarre Temperatures on Mimas

    NASA Image and Video Library

    2010-03-29

    The left side of this image shows a visible-light mosaic of Mimas assembled from previous flybys by the imaging science subsystem on NASA's Cassini spacecraft. The right-hand image shows new infrared temperature data mapped on top of the visible-light image.

  11. Pulsed laser linescanner for a backscatter absorption gas imaging system

    DOEpatents

    Kulp, Thomas J.; Reichardt, Thomas A.; Schmitt, Randal L.; Bambha, Ray P.

    2004-02-10

    An active (laser-illuminated) imaging system is described that is suitable for use in backscatter absorption gas imaging (BAGI). A BAGI imager operates by imaging a scene as it is illuminated with radiation that is absorbed by the gas to be detected. Gases become "visible" in the image when they attenuate the illumination, creating a shadow in the image. This disclosure describes a BAGI imager that operates in a linescanned manner using a high repetition rate pulsed laser as its illumination source. The format of this system allows differential imaging, in which the scene is illuminated with light at two or more wavelengths--one or more absorbed by the gas and one or more not absorbed. The system is designed to accomplish imaging in a manner that is insensitive to motion of the camera, so that it can be held in the hand of an operator or operated from a moving vehicle.
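
    The differential-imaging step can be sketched with the Beer-Lambert law: the log ratio of the absorbed ("on") and non-absorbed ("off") wavelength images cancels ordinary scene reflectance and isolates gas absorption. A minimal illustration (not the patent's processing chain):

```python
import numpy as np

def differential_absorbance(on_img, off_img, eps=1e-6):
    # -ln(I_on / I_off): zero where no gas is present, positive where the
    # "on" wavelength has been attenuated by the gas plume
    on = np.asarray(on_img, dtype=float) + eps
    off = np.asarray(off_img, dtype=float) + eps
    return -np.log(on / off)
```

    Because both images share the same scene reflectance and illumination geometry, the ratio suppresses clutter that a single-wavelength BAGI image would retain.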

  12. Near-Infrared Coloring via a Contrast-Preserving Mapping Model.

    PubMed

    Chang-Hwan Son; Xiao-Ping Zhang

    2017-11-01

    Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.
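
    The brightness-discrepancy part of the problem can be illustrated with simple global moment matching: give the NIR image the visible luminance plane's statistics while leaving its own relative contrast intact. This is a deliberately simpler stand-in for the authors' contrast-preserving mapping model:

```python
import numpy as np

def match_luminance(nir, vis_lum, eps=1e-8):
    # standardize the NIR image, then rescale to the visible luminance
    # plane's mean and spread; relative NIR contrast is unchanged
    nir = np.asarray(nir, dtype=float)
    z = (nir - nir.mean()) / (nir.std() + eps)
    return vis_lum.mean() + vis_lum.std() * z
```

    The paper's model goes further by preserving local contrast and detail before chroma transfer; global moment matching only addresses the overall brightness mismatch.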

  13. Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.

    2008-01-01

    NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three-dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS - true systems, rather than just terrain on a flight display - and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor in accidents and enable clear-day operational benefits regardless of visibility conditions.

  14. Viewer Makes Radioactivity "Visible"

    NASA Technical Reports Server (NTRS)

    Yin, L. I.

    1983-01-01

    Battery-operated viewer demonstrates feasibility of generating three-dimensional visible-light simulations of objects that emit X-rays or gamma rays. Ray paths are traced for two pinhole positions to show the location of the reconstructed image. Images formed by the pinholes are converted to intensified visible-light images. Applications range from radioactivity contamination surveys to monitoring radioisotope absorption in tumors.

  15. NASA Catches Tropical Storm Leslie and Hurricane Michael in the Atlantic

    NASA Image and Video Library

    2017-12-08

    This visible image of Tropical Storm Leslie and Hurricane Michael was taken by the MODIS instrument aboard both NASA's Aqua and Terra satellites on Sept. 9 at 12:50 p.m. EDT. Credit: NASA Goddard/MODIS Rapid Response Team -- Satellite images from two NASA satellites were combined to create a full picture of Tropical Storm Leslie and Hurricane Michael spinning in the Atlantic Ocean. Imagery from NASA's Aqua and Terra satellites showed Leslie now past Bermuda and Michael in the north central Atlantic, and Leslie is much larger than the smaller, more powerful Michael. Images of each storm were taken by the Moderate Resolution Imaging Spectroradiometer, or MODIS instrument that flies onboard both the Aqua and Terra satellites. Both satellites captured images of both storms on Sept. 7 and Sept. 10. The image from Sept. 7 showed a much more compact Michael with a visible eye. By Sept. 10, the eye was no longer visible in Michael and the storm appeared more elongated from south to north. To continue reading go to: 1.usa.gov/NkUPqn

  16. Optical Layout Analysis of Polarization Interference Imaging Spectrometer by Jones Calculus in View of both Optical Throughput and Interference Fringe Visibility

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanni; Zhang, Chunmin

    2013-01-01

    A polarization interference imaging spectrometer based on a Savart polariscope was presented. Its optical throughput was analyzed by Jones calculus. The throughput expression was given and clearly showed that the optical throughput mainly depends on the intensity of the incident light, the transmissivity, the refractive index, and the layout of the optical system. Simulation and analysis gave the optimum layout in view of both optical throughput and interference fringe visibility, and verified that the layout of our former design was optimum. The simulation showed that a small deviation from the optimum layout has little influence on interference fringe visibility, whereas for other layouts the influence is severe; a small deviation is therefore admissible in the optimum layout, which can mitigate the manufacturing difficulty. These results pave the way for further research and engineering design.
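
    The Jones-calculus bookkeeping behind such a throughput analysis can be illustrated on the simplest possible element, a single ideal linear polarizer (recovering Malus's law); the paper's actual model chains the Jones matrices of the Savart polariscope components, which are not reproduced here.

```python
import numpy as np

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

E_in = np.array([1.0, 0.0])          # x-polarized unit-amplitude field
theta = np.deg2rad(30)
E_out = polarizer(theta) @ E_in      # propagate through the element
I_out = float(E_out @ E_out)         # throughput = |E|^2; here cos^2(theta)
```

Chaining further elements is just additional matrix products, which is how a full layout's throughput expression is assembled.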

  17. An Inventory of Impact Craters on the Martian South Polar Layered Deposits

    NASA Technical Reports Server (NTRS)

    Plaut, J. J.

    2005-01-01

    The polar layered deposits (PLD) of Mars continue to be a focus of study due to the possibility that these finely layered, volatile-rich deposits hold a record of recent eras in Martian climate history. Recently, the visible sensor on 2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) has acquired 36 meter/pixel contiguous single-band visible image data sets of both the north and the south polar layered deposits, during the local spring and summer seasons. In addition, significant coverage has been obtained at the THEMIS visible sensor's full resolution of 18 meters/pixel. This paper reports on the use of these data sets to further characterize the population of impact craters on the south polar layered deposits (SPLD), and the implications of the observed population for the age and evolution of the SPLD.

  18. Visibility enhancement of color images using Type-II fuzzy membership function

    NASA Astrophysics Data System (ADS)

    Singh, Harmandeep; Khehra, Baljit Singh

    2018-04-01

    Images taken in poor environmental conditions decrease the visibility and hidden information of digital images. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over/under-enhancement issues, while fuzzy-based enhancement techniques suffer from problems of over/under-saturated pixels. In this paper, a novel Type-II fuzzy-based image enhancement technique is proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather-degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms the others in terms of visible edge ratio, color gradients, and number of saturated pixels.
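
    For illustration, an interval Type-II fuzzy membership can be formed from a pair of Gaussian membership functions whose widths bound the footprint of uncertainty; the parameters below are invented for the sketch and are not those of the proposed technique.

```python
import numpy as np

def it2_membership(x, mean=0.5, sigma_lower=0.15, sigma_upper=0.30):
    """Interval Type-II fuzzy membership of a normalized intensity x:
    lower/upper Gaussian memberships span the footprint of uncertainty,
    and a simple average type-reduces them to a crisp value.
    All parameters are illustrative placeholders."""
    lower = np.exp(-((x - mean) ** 2) / (2 * sigma_lower ** 2))
    upper = np.exp(-((x - mean) ** 2) / (2 * sigma_upper ** 2))
    return lower, upper, (lower + upper) / 2.0

x = np.linspace(0, 1, 5)                 # normalized pixel intensities
lo, up, crisp = it2_membership(x)
```

The gap between the lower and upper memberships is what lets a Type-II system model the uncertainty that a single (Type-I) membership function cannot.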

  19. Needle tip visibility in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Arif, Muhammad; Moelker, Adriaan; van Walsum, Theo

    2017-03-01

    Needle visibility is of crucial importance for ultrasound-guided interventional procedures. However, several factors, such as shadowing by bone or gas and tissue echogenic properties similar to those of needles, may compromise needle visibility. Additionally, a small angle between the ultrasound beam and the needle, as well as small-gauge needles, may reduce visibility. Variety in needle tip design may also affect needle visibility. Whereas several studies have investigated needle visibility in 2D ultrasound imaging, no data are available for 3D ultrasound imaging, a modality that has great potential for image-guided interventions. In this study, we evaluated needle visibility using a 3D ultrasound transducer. We examined different needles in a tissue-mimicking liver phantom at three angles (20°, 55°, and 90°) and quantified their visibility. The liver phantom was made from a 5% polyvinyl alcohol solution containing 1% silica gel particles to act as ultrasound scattering particles. We used four needles: two biopsy needles (Quick Core 14G and 18G), one ablation needle (radiofrequency ablation, 17G), and one initial puncture needle (IP needle, 17G). Needle visibility was quantified by calculating the contrast-to-noise ratio. The results showed that visibility was similar for all needles at large angles, whereas differences in visibility were more prominent at lower angles. Furthermore, visibility increased with the angle between the ultrasound beam and the needle.
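
    The contrast-to-noise ratio used here to quantify visibility can be computed as follows on a toy image (the masks and pixel values are invented):

```python
import numpy as np

def cnr(image, needle_mask, background_mask):
    """Contrast-to-noise ratio: needle-vs-background contrast divided by
    the background noise, the visibility measure used in the study."""
    contrast = abs(image[needle_mask].mean() - image[background_mask].mean())
    noise = image[background_mask].std()
    return contrast / noise

# Toy image: a bright "needle" row over a textured background.
img = np.array([[1., 3., 1., 3.],
                [5., 5., 5., 5.],
                [1., 3., 1., 3.],
                [3., 1., 3., 1.]])
needle = np.zeros(img.shape, dtype=bool)
needle[1, :] = True
visibility = cnr(img, needle, ~needle)
```

With these numbers the background has mean 2 and standard deviation 1, so the CNR evaluates to 3.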

  20. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) onboard calibration system

    NASA Technical Reports Server (NTRS)

    Chrien, Thomas G.; Eastwood, Mike; Green, Robert O.; Sarture, Charles; Johnson, Howell; Chovit, Chris; Hajek, Pavel

    1995-01-01

    The AVIRIS instrument uses an onboard calibration system to provide auxiliary calibration data. The system consists of a tungsten halogen-cycle lamp imaged onto a fiber bundle through an eight-position filter wheel. The fiber bundle illuminates the back side of the foreoptics shutter during a pre-run and post-run calibration sequence. The filter wheel contains two neutral density filters, five spectral filters, and one blocked position. This paper reviews the general workings of the onboard calibration system and discusses recent modifications.

  1. Real-time out-of-plane artifact subtraction tomosynthesis imaging using prior CT for scanning beam digital x-ray system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca

    2014-11-01

    Purpose: The scanning beam digital x-ray system (SBDX) is an inverse geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifacts due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients’ CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions. The dominant artifacts in the subtraction images are caused by mismatches between the real object and the prior CT volume. Conclusions: The proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.
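
    The shift-and-add (SAA) reconstruction that the blur-and-add model approximates can be illustrated in one dimension: each source position sees a point object shifted in proportion to its depth, and shifting the projections back aligns, and thus focuses, the chosen plane. The geometry and numbers below are invented for the sketch.

```python
import numpy as np

n = 64
obj_pos, obj_depth = 30, 2            # point object: detector bin and depth index
sources = [-2, -1, 0, 1, 2]           # source offsets across the tomographic sweep

projections = []
for s in sources:
    p = np.zeros(n)
    p[obj_pos + s * obj_depth] = 1.0  # parallax shift proportional to depth
    projections.append(p)

def saa(projections, sources, depth):
    """Shift each projection to focus the plane at `depth`, then average.
    Objects at other depths stay misaligned and smear into out-of-plane blur."""
    acc = np.zeros(n)
    for p, s in zip(projections, sources):
        acc += np.roll(p, -s * depth)
    return acc / len(sources)

in_focus = saa(projections, sources, obj_depth)   # object adds coherently
defocused = saa(projections, sources, 0)          # object smears across 5 bins
```

The smeared response of out-of-plane structure is exactly the artifact the OPAST method predicts from the prior CT and subtracts.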

  2. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while varying the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while varying the blood volume. We acquire the concentration distributions of melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image. We then reproduce images using the obtained melanin and shading concentrations and the modified hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that visibility decreased as blood volume increased. In the facial color images, however, a specific blood volume reduced the visibility of the actual pigmentations.
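
    The chromophore-separation step can be sketched with a fixed linear mixing model: log-reflectance as a mix of melanin and hemoglobin signatures, unmixed by a pseudo-inverse that stands in for the paper's independent component analysis. The component vectors and concentrations are invented.

```python
import numpy as np

# Invented chromophore signatures: columns are melanin and hemoglobin,
# rows are log-R, log-G, log-B channels. A fixed, known mixing matrix
# replaces the ICA estimation step of the paper.
M = np.array([[0.6, 0.1],
              [0.5, 0.4],
              [0.3, 0.7]])

conc = np.array([0.8, 0.5])                  # melanin, hemoglobin amounts
log_rgb = M @ conc                           # forward model: mix to log-RGB

est = np.linalg.pinv(M) @ log_rgb            # unmix the concentrations
boosted = M @ (est * np.array([1.0, 1.5]))   # re-mix with +50% blood volume
```

Scaling only the hemoglobin component and re-mixing is the mechanism by which the study simulates a blood-volume change while leaving melanin and shading untouched.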

  3. Accelerated speckle imaging with the ATST visible broadband imager

    NASA Astrophysics Data System (ADS)

    Wöger, Friedrich; Ferayorni, Andrew

    2012-09-01

    The Advanced Technology Solar Telescope (ATST), a 4 meter class telescope for observations of the solar atmosphere currently in its construction phase, will generate data at rates of the order of 10 TB/day with its state-of-the-art instrumentation. The high-priority ATST Visible Broadband Imager (VBI) instrument alone will create two data streams with a bandwidth of 960 MB/s each. Because of the related data handling issues, these data will be post-processed with speckle interferometry algorithms in near-real time at the telescope using the cost-effective Graphics Processing Unit (GPU) technology that is supported by the ATST Data Handling System. In this contribution, we lay out the VBI-specific approach to its image processing pipeline, put this into the context of the underlying ATST Data Handling System infrastructure, and finally describe the details of how the algorithms were redesigned to exploit data parallelism in the speckle image reconstruction algorithms. An algorithm redesign is often required to efficiently speed up an application using GPU technology; we have chosen NVIDIA's CUDA language as the basis for our implementation. We present preliminary results on algorithm performance using our test facilities, and use these results to form a conservative estimate of the requirements for a full system that could achieve near-real-time performance at ATST.

  4. Mobile Aerial Tracking and Imaging System (MATRIS) for Aeronautical Research

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.; Blanchard, R. C.; Miller, G. M.

    2004-01-01

    A mobile, rapidly deployable ground-based system to track and image targets of aeronautical interest has been developed. Targets include reentering reusable launch vehicles (RLVs) as well as atmospheric and transatmospheric vehicles. The optics were designed to image targets in the visible and infrared wavelengths. To minimize acquisition cost and development time, the system uses commercially available hardware and software where possible. The conception and initial funding of this system originated with a study of ground-based imaging of global aerothermal characteristics of RLV configurations. During that study NASA teamed with the Missile Defense Agency/Innovative Science and Technology Experimentation Facility (MDA/ISTEF) to test techniques and analysis on two Space Shuttle flights.

  5. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    To objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and fusion results of infrared and visible images under different algorithms and environments are evaluated experimentally on the basis of this index. The experimental results show that the objective evaluation index is consistent with subjective evaluation results, indicating that the method is a practical and effective measure of fused image quality.
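
    A single-window sketch of an energy-weighted structural-similarity fusion index follows; the paper's index is locally windowed and also includes an edge information retention term, both omitted here, and all images are synthetic.

```python
import numpy as np

def ssim_global(a, b):
    """Single-window (global) structural similarity, simplified from the
    usual locally windowed SSIM."""
    c1, c2 = 1e-4, 9e-4                      # small stabilizing constants
    ma, mb = a.mean(), b.mean()
    cov = ((a - ma) * (b - mb)).mean()
    num = (2 * ma * mb + c1) * (2 * cov + c2)
    den = (ma ** 2 + mb ** 2 + c1) * (a.var() + b.var() + c2)
    return num / den

def energy_weighted_ssim(ir, vis, fused):
    """Average SSIM of the fused image against each input, weighted by
    each input's variance ("energy"); a simplified global stand-in for
    the paper's windowed, energy-weighted index."""
    w_ir, w_vis = ir.var(), vis.var()
    s = w_ir * ssim_global(ir, fused) + w_vis * ssim_global(vis, fused)
    return s / (w_ir + w_vis)

rng = np.random.default_rng(0)
ir = rng.uniform(0, 1, (16, 16))        # toy infrared image
vis = rng.uniform(0, 1, (16, 16))       # toy visible image
fused = 0.5 * (ir + vis)                # trivial averaging "fusion"
score = energy_weighted_ssim(ir, vis, fused)
```

A perfect fusion of identical inputs scores 1, and any structural loss pushes the index below 1, which is the behavior such an objective metric relies on.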

  6. Trans-rectal ultrasound visibility of prostate lesions identified by magnetic resonance imaging increases accuracy of image-fusion targeted biopsies.

    PubMed

    Ukimura, Osamu; Marien, Arnaud; Palmer, Suzanne; Villers, Arnauld; Aron, Manju; de Castro Abreu, Andre Luis; Leslie, Scott; Shoji, Sunao; Matsugasumi, Toru; Gross, Mitchell; Dasgupta, Prokar; Gill, Inderbir S

    2015-11-01

    To compare the diagnostic yield of targeted prostate biopsy using image fusion of multi-parametric magnetic resonance (mp-MR) imaging with real-time trans-rectal ultrasound (TRUS) for clinically significant lesions that are suspicious only on mp-MR versus lesions that are suspicious on both mp-MR and TRUS. Pre-biopsy MRI and TRUS were each scaled on a 3-point score: highly suspicious, likely, and unlikely for clinically significant cancer (sPCa). Using an MR-TRUS elastic image-fusion system (Koelis), 127 consecutive patients with a suspicious clinically significant index lesion on pre-biopsy mp-MR underwent systematic biopsies and MR/US-fusion targeted biopsies (01/2010-09/2013). Biopsy histological outcomes were retrospectively compared with the MR suspicion level and TRUS visibility of the MR-suspicious lesion. sPCa was defined as biopsy Gleason score ≥7 and/or maximum cancer core length ≥5 mm. Targeted biopsies outperformed systematic biopsies in overall cancer detection rate (61 vs. 41 %; p = 0.007), sPCa detection rate (43 vs. 23 %; p = 0.0013), cancer core length (7.5 vs. 3.9 mm; p = 0.0002), and cancer rate per core (56 vs. 12 %; p < 0.0001), respectively. Highly suspicious lesions on mp-MR correlated with a higher positive biopsy rate (p < 0.0001), higher Gleason score (p = 0.018), and greater cancer core length (p < 0.0001). Highly suspicious lesions on TRUS corresponding to MR-suspicious lesions had a higher biopsy yield (p < 0.0001) and higher sPCa detection rate (p < 0.0001). Since the majority of MR-suspicious lesions were also suspicious on TRUS, TRUS visibility allowed selection of the specific MR-visible lesion to be targeted from among the multiple TRUS-suspicious lesions in each prostate. MR-TRUS fusion-image-guided biopsies outperformed systematic biopsies. TRUS visibility of an MR-suspicious lesion facilitates image-guided biopsies, resulting in higher detection of significant cancer.

  7. Development and validation of satellite-based estimates of surface visibility

    NASA Astrophysics Data System (ADS)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2016-02-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km), moderate (10 km ≤ V < 30 km), low (2 km ≤ V < 10 km), and poor (V < 2 km) visibilities and shows the most skill during June through September, when Heidke skill scores are between 0.2 and 0.4. We demonstrate that the aerosol (clear-sky) component of the GOES-R ABI visibility retrieval can be used to augment measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
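
    The Heidke skill score cited above measures categorical accuracy relative to the accuracy expected by chance. A minimal computation over a 4-category (clear/moderate/low/poor) contingency table, with invented counts, looks like this:

```python
import numpy as np

def heidke(table):
    """Heidke skill score for a multi-category contingency table
    (rows: retrieved category, columns: observed category)."""
    n = table.sum()
    po = np.trace(table) / n                          # proportion correct
    pe = (table.sum(0) * table.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Toy 4-category table; counts are invented, not the paper's data.
table = np.array([[40, 5, 0, 0],
                  [6, 20, 4, 0],
                  [1, 5, 10, 2],
                  [0, 0, 2, 5]])
hss = heidke(table)
```

A score of 1 means perfect categorization, 0 means no better than chance, so the paper's June-September scores of 0.2-0.4 indicate modest but real skill.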

  8. Development and validation of satellite based estimates of surface visibility

    NASA Astrophysics Data System (ADS)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2015-10-01

    A satellite based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5% for classifying Clear (V ≥ 30 km), Moderate (10 km ≤ V < 30 km), Low (2 km ≤ V < 10 km) and Poor (V < 2 km) visibilities and shows the most skill during June through September, when Heidke skill scores are between 0.2 and 0.4. We demonstrate that the aerosol (clear sky) component of the GOES-R ABI visibility retrieval can be used to augment measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network, and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.

  9. Flight model performances of HISUI hyperspectral sensor onboard ISS (International Space Station)

    NASA Astrophysics Data System (ADS)

    Tanii, Jun; Kashimura, Osamu; Ito, Yoshiyuki; Iwasaki, Akira

    2016-10-01

    Hyperspectral Imager Suite (HISUI) is a next-generation Japanese sensor that will be mounted on the Japanese Experiment Module (JEM) of the ISS (International Space Station) in the 2019 timeframe. The HISUI hyperspectral sensor obtains spectral images in 185 bands, with a ground sampling distance of 20 x 31 meters, from the visible to the shortwave-infrared region. The sensor system is the follow-on mission of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) in the visible to shortwave-infrared region. The critical design review of the instrument was accomplished in 2014. Integration and tests of a flight model of the HISUI hyperspectral sensor are being carried out. Simultaneously, development of the JEM-External Facility (EF) payload system for the instrument has started. The system includes the structure, the thermal control system, the electrical system, and the pointing mechanism. The development status and performance of the instrument flight model, including some test results such as optical performance, optical distortion, and radiometric performance, are reported.

  10. Enhanced visible and near-infrared capabilities of the JET mirror-linked divertor spectroscopy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomanowski, B. A., E-mail: b.a.lomanowski@durham.ac.uk; Sharples, R. M.; Meigs, A. G.

    2014-11-15

    The mirror-linked divertor spectroscopy diagnostic on JET has been upgraded with a new visible and near-infrared grating and filtered spectroscopy system. New capabilities include extended near-infrared coverage up to 1875 nm, capturing the hydrogen Paschen series, as well as a 2 kHz frame rate filtered imaging camera system for fast measurements of impurity (Be II) and deuterium Dα, Dβ, Dγ line emission in the outer divertor. The expanded system provides unique capabilities for studying spatially resolved divertor plasma dynamics at near-ELM resolved timescales as well as a test bed for feasibility assessment of near-infrared spectroscopy.

  11. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.

    Purpose: A medical linear accelerator mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASGs) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction, which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t(cup) for nonuniformity and the contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. Results: The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved compared to no corrections on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid is used. Conclusions: The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.

  12. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid.

    PubMed

    Stankovic, Uros; van Herk, Marcel; Ploeger, Lennert S; Sonke, Jan-Jakob

    2014-06-01

    A medical linear accelerator mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASGs) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction, which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t(cup) for nonuniformity and the contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved compared to no corrections on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid is used. The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.

  13. Simultaneous Luminescence Pressure and Temperature Measurement System for Hypersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1995-01-01

    Surface pressures and temperatures are determined from the visible emission brightness and green-to-red color ratio of induced luminescence from a ceramic surface with an organic dye coating. A ceramic-dye matrix of porous silica ceramic with an adsorbed dye is developed for high-temperature pressure sensitivity and stability (up to 150 C). Induced luminescence may be excited using a broad range of incident radiation, from visible blue light (488-nm wavelength) to the near ultraviolet (365 nm). Ceramic research models and test samples are fabricated using net-form slip-casting and sintering techniques. Methods of preparation and the effects of adsorption film thickness on measurement sensitivity are discussed. With the present 8-bit imaging system, the pressure measurement uncertainty is estimated at 10% over the range 50 to 760 torr, improving to 5% over 3 to 1500 torr with a 12-bit imaging system.
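
    Pressure-sensitive luminescent coatings of this kind are commonly calibrated with a Stern-Volmer-type relation between intensity and pressure. The abstract does not state the calibration actually used, so the sketch below simply assumes that standard form, with coefficients A and B fitted at known pressures:

```python
def pressure_from_intensity(i_ref, i, a, b, p_ref):
    """Invert a Stern-Volmer-type calibration, I_ref/I = A + B*(P/P_ref),
    to recover surface pressure from luminescence intensity. i_ref is
    the wind-off intensity at reference pressure p_ref; a and b are
    coefficients fitted from known pressures. (This functional form is
    the standard pressure-sensitive-paint model, assumed here, not
    quoted from the paper.)"""
    return p_ref * ((i_ref / i) - a) / b

# At the reference point (I = I_ref) the relation returns P_ref
# whenever A + B = 1, e.g. A = 0.3, B = 0.7.
print(pressure_from_intensity(1.0, 1.0, 0.3, 0.7, 760.0))  # ≈ 760.0
```

    The quoted 8-bit versus 12-bit uncertainty figures reflect how the digitization step of I propagates through this inversion.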

  14. Visible and Extended Near-Infrared Multispectral Imaging for Skin Cancer Diagnosis

    PubMed Central

    Rey-Barroso, Laura; Burgos-Fernández, Francisco J.; Delpueyo, Xana; Ares, Miguel; Malvehy, Josep; Puig, Susana

    2018-01-01

    With the goal of diagnosing skin cancer early and noninvasively, an extended near-infrared multispectral imaging system based on an InGaAs sensor with sensitivity from 995 nm to 1613 nm was built to evaluate deeper skin layers, taking advantage of the higher penetration of photons at these wavelengths. The outcomes of this device were combined with those of a previously developed multispectral system operating in the visible and near-infrared range (414 nm–995 nm). Both provide spectral and spatial information from skin lesions. A classification method to discriminate between melanomas and nevi was developed based on the analysis of first-order statistics descriptors, principal component analysis, and support vector machine tools. The system achieved a sensitivity of 78.6% and a specificity of 84.6%, the latter an improvement over that offered by silicon sensors. PMID:29734747
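
    The feature chain described above can be outlined in code. The sketch below covers the first two stages (first-order statistics descriptors and PCA) in plain NumPy; the descriptor set is illustrative rather than the paper's exact list, and the final SVM stage is omitted:

```python
import numpy as np

def first_order_stats(band):
    """First-order statistical descriptors of one spectral-band image:
    mean, standard deviation, and skewness (an illustrative set; the
    paper does not enumerate its exact descriptors)."""
    x = band.ravel().astype(float)
    mu, sigma = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / sigma ** 3 if sigma > 0 else 0.0
    return np.array([mu, sigma, skew])

def pca_scores(features, n_components=2):
    """Project lesion feature vectors onto their leading principal
    components (plain-NumPy stand-in for the PCA step; the scores
    would then feed an SVM classifier)."""
    x = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T

# Toy example: 4 "lesions" with 3 descriptors each, reduced to 2 scores.
feats = np.array([[1.0, 2.0, 0.1],
                  [1.2, 2.1, 0.0],
                  [3.0, 0.5, 0.9],
                  [3.1, 0.4, 1.0]])
print(pca_scores(feats).shape)  # → (4, 2)
```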

  15. Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery

    DTIC Science & Technology

    2012-09-01

    ...sensor bracket mount combining Airborne Imaging Spectrometer for Applications (AISA) Eagle and Hawk sensors into a single imaging system (SpecTIR 2011)... The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400–970 nm, and the AISA Hawk is a SWIR sensor with a wavelength...

  16. Studying the Sky/Planets Can Drown You in Images: Machine Learning Solutions at JPL/Caltech

    NASA Technical Reports Server (NTRS)

    Fayyad, U. M.

    1995-01-01

    JPL is working to develop a domain-independent system capable of small-scale object recognition in large image databases for science analysis. Two applications discussed are the cataloging of three billion sky objects in the Sky Image Cataloging and Analysis Tool (SKICAT) and the detection of possibly one million small volcanoes visible in the Magellan synthetic aperture radar images of Venus (JPL Adaptive Recognition Tool, JARTool).

  17. Invisible Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Moderate-resolution Imaging Spectroradiometer's (MODIS') cloud detection capability is so sensitive that it can detect clouds that would be indistinguishable to the human eye. This pair of images highlights MODIS' ability to detect what scientists call 'sub-visible cirrus.' The image on top shows the scene using data collected in the visible part of the electromagnetic spectrum, the part our eyes can see. Clouds are apparent in the center and lower right of the image, while the rest of the image appears to be relatively clear. However, data collected at 1.38 µm (lower image) show that a thick layer of previously undetected cirrus clouds obscures the entire scene. These cirrus are called 'sub-visible' because they cannot be detected using only visible light; MODIS' 1.38 µm channel detects electromagnetic radiation in the infrared region of the spectrum. These images were made from data collected on April 4, 2000. Image courtesy Mark Gray, MODIS Atmosphere Team
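
    The 1.38 µm test works because water vapor below high clouds absorbs the surface-reflected signal, so any appreciable reflectance in that band must come from high cirrus. A minimal threshold test in that spirit, using an illustrative threshold value rather than the operational MODIS one:

```python
import numpy as np

def cirrus_flag(r_138, threshold=0.01):
    """Flag pixels as (sub-visible) cirrus when 1.38-um reflectance
    exceeds a threshold. The value 0.01 is illustrative, not the
    operational MODIS cloud-mask threshold."""
    return r_138 > threshold

# Clear-sky pixels stay dark at 1.38 um; cirrus-covered pixels do not.
scene = np.array([0.002, 0.004, 0.030, 0.080])
print(cirrus_flag(scene).tolist())  # → [False, False, True, True]
```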

  18. Weber-aware weighted mutual information evaluation for infrared-visible image fusion

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wang, Shining; Yuan, Ding

    2016-10-01

    A performance metric for infrared and visible image fusion is proposed based on Weber's law. Two Weber components are used to characterize the stimulus of the source images: differential excitation, which reflects the spectral signal of the visible and infrared images, and orientation, which captures scene structure. By comparing the corresponding Weber components in the infrared and visible images, each source pixel can be labeled as dominant in either intensity or structure. Pixels with the same dominant-property label are grouped, and the mutual information (MI) between the dominant source and fused images is calculated on the corresponding Weber components within each group. The final fusion metric is then obtained by weighting the group-wise MI values according to the number of pixels in each group. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
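
    The core building block of the metric, mutual information between a source group and the fused image, can be computed from a joint histogram. The sketch below shows only that block; the Weber-component extraction, dominant-property grouping, and pixel-count weighting described above are omitted for brevity:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in bits) between two images, estimated from
    their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# An image shares far more information with itself than with noise.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
print(mutual_information(img, img) > mutual_information(img, rng.random((64, 64))))  # → True
```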

  19. Rare earth phosphors and phosphor screens

    DOEpatents

    Buchanan, Robert A.; Maple, T. Grant; Sklensky, Alden F.

    1981-01-01

    This invention relates to rare earth phosphor screens for converting image carrying incident radiation to image carrying visible or near-visible radiation and to the rare earth phosphor materials utilized in such screens. The invention further relates to methods for converting image carrying charged particles to image carrying radiation principally in the blue and near-ultraviolet region of the spectrum and to stabilized rare earth phosphors characterized by having a continuous surface layer of the phosphors of the invention. More particularly, the phosphors of the invention are oxychlorides and oxybromides of yttrium, lanthanum and gadolinium activated with trivalent cerium and the conversion screens are of the type illustratively including x-ray conversion screens, image amplifier tube screens, neutron imaging screens, cathode ray tube screens, high energy gamma ray screens, scintillation detector screens and screens for real-time translation of image carrying high energy radiation to image carrying visible or near-visible radiation.

  20. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low beam headlights can be used today. As pedestrians and animals face the highest increase in risk in night-time traffic due to darkness, the ability to detect those objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared (FIR) systems have been shown to be superior to near infrared (NIR) systems in terms of pedestrian detection distance. Near infrared images were rated as having significantly higher visual clutter than far infrared images, and visual clutter has been shown to correlate with reduced pedestrian detection distance. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although that image appearance is likely related to the lower visual clutter. The main issue in comparing the two technologies, however, should be how well they solve the driver's problem of insufficient visibility under low beam conditions, especially with respect to pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a key question is whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection. The first night vision introductions did not generate the sales volumes initially expected; renewed interest is, however, to be expected after the release of night vision systems by BMW, Mercedes, and Honda, the latter with automatic pedestrian detection.

  1. Identification of handheld objects for electro-optic/FLIR applications

    NASA Astrophysics Data System (ADS)

    Moyer, Steve K.; Flug, Eric; Edwards, Timothy C.; Krapels, Keith A.; Scarbrough, John

    2004-08-01

    This paper describes research on the determination of the fifty-percent probability of identification cycle criterion (N50) for two sets of handheld objects. The first set consists of 12 objects commonly held in a single hand; the second consists of 10 objects commonly held in both hands. The sets include not only typical civilian handheld objects but also potentially lethal ones: a pistol, a cell phone, a rocket-propelled grenade (RPG) launcher, and a broom are examples. Discriminating among these objects is an inherent part of homeland security, force protection, and general population security. Objects from each set were imaged in the visible and mid-wave infrared (MWIR) spectra, and various levels of blur were applied to the images. The blurred images were then used in a forced-choice perception experiment. Results were analyzed as a function of blur level and target size to give identification probability as a function of resolvable cycles on target. These results are applicable to handheld-object target acquisition estimates for visible imaging systems and MWIR systems, and they provide guidance in the design and analysis of electro-optical and forward-looking infrared (FLIR) systems for homeland security, force protection, and general population security.
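
    Given a fitted N50, identification probability as a function of resolvable cycles on target is conventionally modeled with the empirical target transfer probability function (TTPF) from the Johnson-criteria literature. The sketch below shows that standard form; it is background convention, not a formula quoted from this paper:

```python
def p_identify(n, n50):
    """Empirical target transfer probability function (TTPF):
    probability of identification given n resolvable cycles on
    target and the 50% criterion n50."""
    ratio = n / n50
    e = 2.7 + 0.7 * ratio          # empirical steepness exponent
    return ratio ** e / (1.0 + ratio ** e)

# By construction, n = n50 gives exactly 50% probability.
print(p_identify(8.0, 8.0))  # → 0.5
```

    Doubling the cycles on target pushes the probability well above 50%, while halving them drops it well below, which is why blur level and target size drive the perception-experiment results.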

  2. NASA Sees Hurricane Arthur's Cloud-Covered Eye

    NASA Image and Video Library

    2014-07-03

    This visible image of Tropical Storm Arthur was taken by the MODIS instrument aboard NASA's Aqua satellite on July 2 at 18:50 UTC (2:50 p.m. EDT). A cloud-covered eye is clearly visible. Credit: NASA Goddard MODIS Rapid Response Team Read more: www.nasa.gov/content/goddard/arthur-atlantic/

  3. Forensic applications of chemical imaging: latent fingerprint detection using visible absorption and luminescence.

    PubMed

    Exline, David L; Wallace, Christie; Roux, Claude; Lennard, Chris; Nelson, Matthew P; Treado, Patrick J

    2003-09-01

    Chemical imaging technology is a rapid examination technique that combines molecular spectroscopy and digital imaging, providing information on the morphology, composition, structure, and concentration of a material. Among many other applications, chemical imaging offers an array of novel analytical testing methods that limit sample preparation and provide the high-quality imaging data essential to the detection of latent fingerprints. Luminescence chemical imaging and visible absorbance chemical imaging have been successfully applied to ninhydrin-, DFO-, cyanoacrylate-, and luminescent dye-treated latent fingerprints, demonstrating the potential of this technology to aid forensic investigations. In addition, visible absorption chemical imaging has been applied successfully to visualize untreated latent fingerprints.

  4. High throughput phenotyping of tomato spotted wilt disease in peanuts using unmanned aerial systems and multispectral imaging

    USDA-ARS?s Scientific Manuscript database

    The amount of visible and near infrared light reflected by plants varies depending on their health. In this study, multispectral images were acquired by quadcopter for detecting tomato spotted wilt virus amongst twenty genetic varieties of peanuts. The plants were visually assessed to acquire ground ...

  5. Venus in Violet and Near Infrared Light

    NASA Image and Video Library

    1996-02-01

    These images of the Venus clouds were taken by NASA's Galileo Solid State Imaging System on February 13, 1990, at a range of about 1 million miles. The smallest detail visible is about 20 miles. They show the state of the clouds near the top of the Venus cloud deck. http://photojournal.jpl.nasa.gov/catalog/PIA00071

  6. Monitoring cotton (Gossypium hirsutum L.) germination using ultrahigh-resolution UAS images

    USDA-ARS?s Scientific Manuscript database

    Examination of seed germination rate is of great importance for growers early in the season to determine the necessity for replanting their fields. The objective of this study was to explore the potential of using unmanned aircraft system (UAS)-based visible-band images to monitor and quantify the c...

  7. Moon - North Polar Mosaic, Color

    NASA Technical Reports Server (NTRS)

    1996-01-01

    During its flight, the Galileo spacecraft returned images of the Moon. The Galileo spacecraft surveyed the Moon on December 7, 1992, on its way to explore the Jupiter system in 1995-1997. The left part of this north pole view is visible from Earth. This color picture is a mosaic assembled from 18 images taken by Galileo's imaging system through a green filter. The left part of this picture shows the dark, lava-filled Mare Imbrium (upper left); Mare Serenitatis (middle left), Mare Tranquillitatis (lower left), and Mare Crisium, the dark circular feature toward the bottom of the mosaic. Also visible in this view are the dark lava plains of the Marginis and Smythii Basins at the lower right. The Humboldtianum Basin, a 650-kilometer (400-mile) impact structure partly filled with dark volcanic deposits, is seen at the center of the image. The Moon's north pole is located just inside the shadow zone, about a third of the way from the top left of the illuminated region. The Galileo project is managed for NASA's Office of Space Science by the Jet Propulsion Laboratory.

  8. Multiscale optical imaging of rare-earth-doped nanocomposites in a small animal model.

    PubMed

    Higgins, Laura M; Ganapathy, Vidya; Kantamneni, Harini; Zhao, Xinyu; Sheng, Yang; Tan, Mei-Chee; Roth, Charles M; Riman, Richard E; Moghe, Prabhas V; Pierce, Mark C

    2018-03-01

    Rare-earth-doped nanocomposites have appealing optical properties for use as biomedical contrast agents, but few systems exist for imaging these materials. We describe the design and characterization of (i) a preclinical system for whole animal in vivo imaging and (ii) an integrated optical coherence tomography/confocal microscopy system for high-resolution imaging of ex vivo tissues. We demonstrate these systems by administering erbium-doped nanocomposites to a murine model of metastatic breast cancer. Short-wave infrared emissions were detected in vivo and in whole organ imaging ex vivo. Visible upconversion emissions and tissue autofluorescence were imaged in biopsy specimens, alongside optical coherence tomography imaging of tissue microstructure. We anticipate that this work will provide guidance for researchers seeking to image these nanomaterials across a wide range of biological models. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  9. Variable waveband infrared imager

    DOEpatents

    Hunter, Scott R.

    2013-06-11

    A waveband imager includes an imaging pixel that utilizes photon tunneling with a thermally actuated bimorph structure to convert infrared radiation to visible radiation. Infrared radiation passes through a transparent substrate and is absorbed by a bimorph structure formed with a pixel plate. The absorption generates heat which deflects the bimorph structure and pixel plate towards the substrate and into an evanescent electric field generated by light propagating through the substrate. Penetration of the bimorph structure and pixel plate into the evanescent electric field allows a portion of the visible wavelengths propagating through the substrate to tunnel through the substrate, bimorph structure, and/or pixel plate as visible radiation that is proportional to the intensity of the incident infrared radiation. This converted visible radiation may be superimposed over visible wavelengths passed through the imaging pixel.

  10. Jupiter's Southern Hemisphere in the Near-Infrared (Time Set 1)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mosaic of Jupiter's southern hemisphere between -10 and -80 degrees (south) latitude. In time sequence one, the planetary limb is visible near the bottom right part of the mosaic.

    Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the brightness and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including two large vortices, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two vortices in the center of the mosaic is about 3500 kilometers. The right oval is rotating counterclockwise, like other anticyclonic bright vortices in Jupiter's atmosphere. The left vortex is a cyclonic (clockwise) vortex. The differences between them (their brightness, their symmetry, and their behavior) are clues to how Jupiter's atmosphere works. The cloud features visible at 756 nanometers (near-infrared light) are at an atmospheric pressure level of about 1 bar.

    North is at the top. The images are projected onto a sphere, with features being foreshortened towards the south and east. The smallest resolved features are tens of kilometers in size. These images were taken on May 7, 1997, at a range of 1.5 million kilometers by the Solid State Imaging system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  11. Showing Some Chemistry

    NASA Image and Video Library

    2015-04-16

    During NASA's MESSENGER four-year orbital mission, the spacecraft's X-Ray Spectrometer (XRS) instrument mapped the chemical composition of Mercury and discovered striking regions of chemical diversity. These maps of magnesium/silicon (left) and aluminum/silicon (right) use red colors to indicate high values and blue colors for low values. In the maps shown here, the Caloris basin can be identified as a region with low Mg/Si and high Al/Si in the upper left of each map. An extensive region with high Mg/Si is also clearly visible in the maps but is not correlated with any visible impact basin. Instrument: X-Ray Spectrometer (XRS) and Mercury Dual Imaging System (MDIS) Left Image: Map of Mg/Si Right Image: Map of Al/Si http://photojournal.jpl.nasa.gov/catalog/PIA19417

  12. Remote sensing characterization of the Animas River watershed, southwestern Colorado, by AVIRIS imaging spectroscopy

    USGS Publications Warehouse

    Dalton, J.B.; Bove, D.J.; Mladinich, C.S.

    2005-01-01

    Visible-wavelength and near-infrared image cubes of the Animas River watershed in southwestern Colorado have been acquired by the Jet Propulsion Laboratory's Airborne Visible and InfraRed Imaging Spectrometer (AVIRIS) instrument and processed using the U.S. Geological Survey Tetracorder v3.6a2 implementation. The Tetracorder expert system utilizes a spectral reference library containing more than 400 laboratory and field spectra of end-member minerals, mineral mixtures, vegetation, manmade materials, atmospheric gases, and additional substances to generate maps of mineralogy, vegetation, snow, and other material distributions. Major iron-bearing, clay, mica, carbonate, sulfate, and other minerals were identified, among which are several minerals associated with acid rock drainage, including pyrite, jarosite, alunite, and goethite. Distributions of minerals such as calcite and chlorite indicate a relationship between acid-neutralizing assemblages and stream geochemistry within the watershed. Images denoting material distributions throughout the watershed have been orthorectified against digital terrain models to produce georeferenced image files suitable for inclusion in Geographic Information System databases. Results of this study are of use to land managers, stakeholders, and researchers interested in understanding a number of characteristics of the Animas River watershed.
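
    The library-matching step described above can be illustrated with a much simpler spectral-angle comparison. Tetracorder itself uses continuum-removed least-squares feature fitting against its reference library; the sketch below, with made-up two-band spectra, only conveys the idea of ranking library entries by spectral similarity:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a library
    reference; smaller angle means a better match. (A stand-in for
    Tetracorder's continuum-removed feature fitting.)"""
    cosang = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def best_match(pixel, library):
    """Return the name of the library spectrum with the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy two-band "library"; the spectra are illustrative, not real
# mineral reflectance data.
library = {"goethite": np.array([0.8, 0.2]),
           "calcite": np.array([0.5, 0.5])}
print(best_match(np.array([0.9, 0.3]), library))  # → goethite
```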

  13. Early Results from the Odyssey THEMIS Investigation

    NASA Technical Reports Server (NTRS)

    Christensen, Philip R.; Bandfield, Joshua L.; Bell, James F., III; Hamilton, Victoria E.; Ivanov, Anton; Jakosky, Bruce M.; Kieffer, Hugh H.; Lane, Melissa D.; Malin, Michael C.; McConnochie, Timothy

    2003-01-01

    The Thermal Emission Imaging System (THEMIS) began studying the surface and atmosphere of Mars in February 2002 using thermal infrared (IR) multi-spectral imaging between 6.5 and 15 µm, and visible/near-IR images from 450 to 850 nm. The infrared observations continue a long series of spacecraft observations of Mars, including the Mariner 6/7 Infrared Spectrometer, the Mariner 9 Infrared Interferometer Spectrometer (IRIS), the Viking Infrared Thermal Mapper (IRTM) investigations, the Phobos Termoscan, and the Mars Global Surveyor Thermal Emission Spectrometer (MGS TES). The THEMIS investigation's specific objectives are to: (1) determine the mineralogy of localized deposits associated with hydrothermal or sub-aqueous environments, and to identify future landing sites likely to represent these environments; (2) search for thermal anomalies associated with active sub-surface hydrothermal systems; (3) study small-scale geologic processes and landing site characteristics using morphologic and thermophysical properties; (4) investigate polar cap processes at all seasons; and (5) provide a high spatial resolution link to the global hyperspectral mineral mapping from the TES investigation. THEMIS provides substantially higher spatial resolution IR multi-spectral images to complement TES hyperspectral (143-band) global mapping, and regional visible imaging at scales intermediate between the Viking and MGS cameras.

  14. Development of detailed design concepts for the EarthCARE multi-spectral imager

    NASA Astrophysics Data System (ADS)

    Lobb, Dan; Escadero, Isabel; Chang, Mark; Gode, Sophie

    2017-11-01

    The EarthCARE mission is dedicated to the study of clouds by observations from a satellite in low Earth orbit. The payload will include major radar and LIDAR instruments, supported by a multi-spectral imager (MSI) and a broadband radiometer. The paper describes development of detailed design concepts for the MSI, and analysis of critical performance parameters. The MSI will form Earth images at 500m ground sample distance (GSD) over a swath width of 150km, from a nominal platform altitude of around 400km. The task of the MSI is to provide spatial context for the single-point measurements made by the radar and LIDAR systems; it will image Earth in 7 spectral bands: one visible, one near-IR, two short-wave IR and three thermal IR. The MSI instrument will be formed in two parts: a visible-NIR-SWIR (VNS) system, radiometrically calibrated using a sun-illuminated diffuser, and a thermal IR (TIR) system calibrated using cold space and an internal black-body. The VNS system will perform push-broom imaging, using linear array detectors (silicon and InGaAs) and 4 separate lenses. The TIR system will use a microbolometer array detector in a time delay and integration (TDI) mode. Critical issues discussed for the VNS system include detector selection and detailed optical design trade-offs. The latter are related to the desirability of dichroics to achieve a common aperture, which influences the calibration hardware and lens design. The TIR system's most significant problems relate to control of random noise and bias errors, requiring optimisation of detector operation and calibration procedures.

  15. An Accreting Protoplanet: Confirmation and Characterization of LkCa15b

    NASA Astrophysics Data System (ADS)

    Follette, Katherine; Close, Laird; Males, Jared; Macintosh, Bruce; Sallum, Stephanie; Eisner, Josh; Kratter, Kaitlin M.; Morzinski, Katie; Hinz, Phil; Weinberger, Alycia; Rodigas, Timothy J.; Skemer, Andrew; Bailey, Vanessa; Vaz, Amali; Defrere, Denis; spalding, eckhart; Tuthill, Peter

    2015-12-01

    We present a visible light adaptive optics direct imaging detection of a faint point source separated by just 93 milliarcseconds (~15 AU) from the young star LkCa 15. Using Magellan AO's visible light camera in Simultaneous Differential Imaging (SDI) mode, we imaged the star at Hydrogen alpha and in the neighboring continuum as part of the Giant Accreting Protoplanet Survey (GAPplanetS) in November 2015. The continuum images provide a sensitive and simultaneous probe of PSF residuals and instrumental artifacts, allowing us to isolate H-alpha accretion luminosity from the LkCa 15b protoplanet, which lies well inside of the LkCa 15 transition disk gap. This detection, combined with a nearly simultaneous near-infrared detection with the Large Binocular Telescope, provides an unprecedented glimpse of a planetary system during the epoch of planet formation. [Nature result in press; please embargo until released.]

  16. The HR 4796A Debris System: Discovery of Extensive Exo-ring Dust Material

    NASA Astrophysics Data System (ADS)

    Schneider, Glenn; Debes, John H.; Grady, Carol A.; Gáspár, Andras; Henning, Thomas; Hines, Dean C.; Kuchner, Marc J.; Perrin, Marshall; Wisniewski, John P.

    2018-02-01

    The optically and IR-bright and starlight-scattering HR 4796A ringlike debris disk is one of the most- (and best-) studied exoplanetary debris systems. The presence of a yet-undetected planet has been inferred (or suggested) from the narrow width and inner/outer truncation radii of its r = 1.″05 (77 au) debris ring. We present new, highly sensitive Hubble Space Telescope (HST) visible-light images of the HR 4796A circumstellar debris system and its environment over a very wide range of stellocentric angles from 0.″32 (23 au) to ≈15″ (1100 au). These very high-contrast images were obtained with the Space Telescope Imaging Spectrograph (STIS) using six-roll PSF template–subtracted coronagraphy suppressing the primary light of HR 4796A, with three image-plane occulters, and simultaneously subtracting the background light from its close angular proximity M2.5V companion. The resulting images unambiguously reveal the debris ring embedded within a much larger, morphologically complex, and biaxially asymmetric exo-ring scattering structure. These images at visible wavelengths are sensitive to and map the spatial distribution, brightness, and radial surface density of micron-size particles over 5 dex in surface brightness. These particles in the exo-ring environment may be unbound from the system and interacting with the local ISM. Herein, we present a new morphological and photometric view of the larger-than-prior-seen HR 4796A exoplanetary debris system with sensitivity to small particles at stellocentric distances an order of magnitude greater than has previously been observed.

  17. A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA DETAILS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These are two views of a highly active region of star birth located northeast of the central cluster, R136, in 30 Doradus. The orientation and scale are identical for both views. The top panel is a composite of images in two colors taken with the Hubble Space Telescope's visible-light camera, the Wide Field and Planetary Camera 2 (WFPC2). The bottom panel is a composite of pictures taken through three infrared filters with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). In both cases the colors of the displays were chosen to correlate with the nebula's and stars' true colors. Seven very young objects are identified with numbered arrows in the infrared image. Number 1 is a newborn, compact cluster dominated by a triple system of 'hefty' stars. It has formed within the head of a massive dust pillar pointing toward R136. The energetic outflows from R136 have shaped the pillar and triggered the collapse of clouds within its summit to form the new stars. The radiation and outflows from these new stars have in turn blown off the top of the pillar, so they can be seen in the visible-light as well as the infrared image. Numbers 2 and 3 also pinpoint newborn stars or stellar systems inside an adjacent, bright-rimmed pillar, likewise oriented toward R136. These objects are still immersed within their natal dust and can be seen only as very faint, red points in the visible-light image. They are, however, among the brightest objects in the infrared image, since dust does not block infrared light as much as visible light. Thus, numbers 2 and 3 and number 1 correspond respectively to two successive stages in the birth of massive stars. Number 4 is a very red star that has just formed within one of several very compact dust clouds nearby. Number 5 is another very young triple-star system with a surrounding cluster of fainter stars. They also can be seen in the visible-light picture. 
Most remarkable are the glowing patches numbered 6 and 7, which astronomers have interpreted as 'impact points' produced by twin jets of material slamming into surrounding dust clouds. These 'impact points' are perfectly aligned on opposite sides of number 5 (the triple-star system), and each is separated from the star system by about 5 light-years. The jets probably originate from a circumstellar disk around one of the young stars in number 5. They may be rotating counterclockwise, thus producing moving, luminous patches on the surrounding dust, like a searchlight creating spots on clouds. These infrared patches produced by jets from a massive, young star are a new astronomical phenomenon. Credits for NICMOS image: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barba' (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)

  18. Electrowetting based infrared lens using ionic liquids

    NASA Astrophysics Data System (ADS)

    Hu, Xiaodong; Zhang, Shiguo; Liu, Yu; Qu, Chao; Lu, Liujin; Ma, Xiangyuan; Zhang, Xiaoping; Deng, Youquan

    2011-11-01

    We demonstrate an infrared variable-focus lens based on electrowetting of ionic liquids, which overcomes the problems caused by the use of water, e.g., evaporation and poor thermostability, while keeping good optical transparency in the visible and near-infrared regions. In addition, the type of lens (convex or concave) can be tuned by the applied voltage or by the refractive index of the ionic liquid used, and the transmittance was measured to exceed 90% over the visible and near-infrared spectrum. We believe this infrared variable-focus ionic liquid lens has great application prospects in both visible light and infrared imaging systems.

  19. Programmable spectral engine design of hyperspectral image projectors based on digital micro-mirror device (DMD)

    NASA Astrophysics Data System (ADS)

    Wang, Xicheng; Gao, Jiaobo; Wu, Jianghui; Li, Jianjun; Cheng, Hongliang

    2017-02-01

Hyperspectral image projectors (HIP) have recently been developed for the field of remote sensing. Because of its advantages for system-level validation, target detection, and hyperspectral image calibration, HIP has strong development potential in military, medical, commercial, and other fields. HIP is based on the digital micro-mirror device (DMD) and projection technology, and is capable of projecting arbitrary programmable spectra (controlled by a PC) into each pixel of the instrument under test (IUT), so that the projected image simulates the realistic scenes a hyperspectral imager would measure during its use, enabling system-level performance testing and validation. In this paper, we build a visible hyperspectral image projector, also called a visible target simulator, with two DMDs: the first DMD produces selected monochromatic light over the wavelength range of 410 to 720 nm, and this light illuminates the second DMD. A computer then loads an image of a realistic scene onto the second DMD, so that the target and background are projected by the second DMD in the selected monochromatic light. The target conditions can thus be simulated, and the experiment can be controlled and repeated in the laboratory, allowing detector instruments to be tested there. Here we focus on the spectral engine design, including the optical system, the programmable DMD spectrum, and the spectral resolution of the selected spectrum. Details are presented.
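The first DMD's role as a programmable spectral filter can be sketched as a wavelength-to-mirror-column mapping. This is a minimal illustration under stated assumptions, not the authors' design: it assumes the grating disperses the 410–720 nm range linearly across the DMD columns, and the 1024-column count is hypothetical.

```python
import numpy as np

def wavelength_to_columns(lam_lo, lam_hi, n_cols=1024, lam_min=410.0, lam_max=720.0):
    """Return the DMD column indices whose mirrors must be 'on' to pass the
    band [lam_lo, lam_hi] nm, assuming the spectrum is dispersed linearly
    across n_cols mirror columns (an illustrative simplification)."""
    scale = n_cols / (lam_max - lam_min)
    c0 = int(np.floor((lam_lo - lam_min) * scale))
    c1 = int(np.ceil((lam_hi - lam_min) * scale))
    return np.arange(max(c0, 0), min(c1, n_cols))

def nm_per_column(n_cols=1024, lam_min=410.0, lam_max=720.0):
    """Spectral width covered by one mirror column under the same
    linear-dispersion assumption; this bounds the spectral resolution."""
    return (lam_max - lam_min) / n_cols
```

Turning on only the returned columns passes the requested band; the per-column width (about 0.3 nm for these illustrative numbers) bounds how finely the spectrum can be selected.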

  20. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
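The fusion stage named above (weighted sums of the registered sensor streams) can be illustrated with a short numpy sketch; the weight values here are hypothetical, not the ones used on the DM642.

```python
import numpy as np

def fuse_weighted(images, weights):
    """Fuse co-registered images by a per-image weighted sum.
    `images`: list of equally shaped float arrays; `weights` must sum to 1."""
    images = [np.asarray(im, dtype=float) for im in images]
    w = np.asarray(weights, dtype=float)
    if not np.isclose(w.sum(), 1.0):
        raise ValueError("weights must sum to 1")
    out = np.zeros_like(images[0])
    for im, wi in zip(images, w):
        out += wi * im  # accumulate each sensor's weighted contribution
    return out
```

With equal weights this is a plain average; unequal weights let one sensor dominate where it is more informative.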

  1. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters

    PubMed Central

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

This paper describes an airborne high-resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composition method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible-band images and near-infrared-band images in scenes lacking manmade objects, we present an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high-quality multispectral images and that the band-to-band alignment error of the composed multispectral images is less than 2.5 pixels. PMID:26205264
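The registration pipeline described (feature matches filtered by RANSAC under a homography model) can be sketched without any imaging library. Assuming matching point pairs are already available (here from SIFT in the paper; synthetic in this sketch), a minimal DLT-plus-RANSAC homography estimator looks like:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: estimate 3x3 H with dst ~ H @ src from
    >= 4 point correspondences (n x 2 arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)      # null-space vector of A
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to n x 2 points (homogeneous divide)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, n_iter=500, thresh=2.5, seed=0):
    """RANSAC: repeatedly fit H to 4 random correspondences and keep the
    model with the most inliers (reprojection error < thresh pixels),
    then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

The 2.5-pixel threshold echoes the alignment error reported above; a production version would also normalize point coordinates before the SVD for numerical stability.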

  2. Final Report: Non-Visible, Automated Target Acquisition and Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Fabris, Lorenzo; Goddard, James K.

The Roadside Tracker (RST) represents a new approach to radiation portal monitors. It uses a combination of gamma-ray and visible-light imaging to localize gamma-ray radiation sources to individual vehicles in free-flowing, multi-lane traffic. Deployed as two trailers parked on either side of the roadway (Fig. 1), the RST scans passing traffic with two large gamma-ray imagers, one mounted in each trailer. The system compensates for vehicle motion through the imagers' fields of view by using automated target acquisition and tracking (TAT) software applied to a stream of video images. Once a vehicle has left the field of view, the radiation image of that vehicle is analyzed for the presence of a source, and if one is found, an alarm is sounded. The gamma-ray image is presented to the operator together with the video image of the traffic stream from when the vehicle was approximately closest to the system (Fig. 2). The offending vehicle is identified with a bounding box to distinguish it from other vehicles that might be present at the same time. The system was developed under a previous grant from the Department of Homeland Security's (DHS's) Domestic Nuclear Detection Office (DNDO). This report documents work performed with follow-on funding from DNDO to further advance the development of the RST. Specifically, the primary thrust was to extend the performance envelope of the system by replacing the visible-light video cameras used by the TAT software with sensors that would allow operation at night and during inclement weather. In particular, it was desired to allow operation after dark without requiring external lighting. As part of this work, the system software was also upgraded to allow the use of 64-bit computers, the current-generation operating system (Windows 7 vs. Windows XP), the current software development environment (Visual Studio .NET), and improved software version control (Git vs. SourceSafe).
With the upgraded performance allowed by new computers, and the additional memory available in a 64-bit OS, the system was able to handle greater traffic densities; this also allowed the addition of the ability to handle stop-and-go traffic.

  3. Impact of Lesion Visibility on Transrectal Ultrasound on the Prediction of Clinically Significant Prostate Cancer (Gleason Score 3 + 4 or Greater) with Transrectal Ultrasound-Magnetic Resonance Imaging Fusion Biopsy.

    PubMed

    Garcia-Reyes, Kirema; Nguyen, Hao G; Zagoria, Ronald J; Shinohara, Katsuto; Carroll, Peter R; Behr, Spencer C; Westphalen, Antonio C

    2017-09-20

The purpose of this study was to estimate the impact of lesion visibility with transrectal ultrasound on the prediction of clinically significant prostate cancer with transrectal ultrasound-magnetic resonance imaging fusion biopsy. This HIPAA (Health Insurance Portability and Accountability Act) compliant, institutional review board approved, retrospective study was performed in 178 men (mean age 64.7 years, mean prostate specific antigen 8.9 ng/ml) who underwent transrectal ultrasound-magnetic resonance imaging fusion biopsy from January 2013 to September 2016. Visible lesions on magnetic resonance imaging were assigned a PI-RADS™ (Prostate Imaging Reporting and Data System), version 2 score of 3 or greater. Transrectal ultrasound was positive when a hypoechoic lesion was identified. We used a 3-level, mixed effects logistic regression model to determine how transrectal ultrasound-magnetic resonance imaging concordance predicted the presence of clinically significant prostate cancer. The diagnostic performance of the 2 methods was estimated using ROC curves. A total of 1,331 sextants were targeted by transrectal ultrasound-magnetic resonance imaging fusion or systematic biopsies, of which 1,037 were negative, 183 were Gleason score 3 + 3 and 111 were Gleason score 3 + 4 or greater. Clinically significant prostate cancer was diagnosed by transrectal ultrasound and magnetic resonance imaging alone at 20.5% and 19.7% of these locations, respectively. Men with positive imaging had higher odds of clinically significant prostate cancer than men without visible lesions regardless of modality (transrectal ultrasound OR 14.75, 95% CI 5.22-41.69, magnetic resonance imaging OR 12.27, 95% CI 6.39-23.58 and the 2 modalities OR 28.68, 95% CI 14.45-56.89, all p <0.001).
The ROC AUC to detect clinically significant prostate cancer using the 2 methods (0.85, 95% CI 0.81-0.89) was statistically greater than that of transrectal ultrasound alone (0.80, 95% CI 0.76-0.85, p = 0.001) and magnetic resonance imaging alone (0.83, 95% CI 0.79-0.87, p = 0.04). The sensitivity and specificity of transrectal ultrasound were 42.3% and 91.6%, and the sensitivity and specificity of magnetic resonance imaging were 62.2% and 84.1%, respectively. Lesion visibility on magnetic resonance imaging or transrectal ultrasound denotes a similar probability of clinically significant prostate cancer. This probability is greater when each examination is positive. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  4. Flight model of HISUI hyperspectral sensor onboard ISS (International Space Station)

    NASA Astrophysics Data System (ADS)

    Tanii, Jun; Kashimura, Osamu; Ito, Yoshiyuki; Iwasaki, Akira

    2017-09-01

Hyperspectral Imager Suite (HISUI) is a next-generation Japanese sensor that will be mounted on the Japanese Experiment Module (JEM) of the ISS (International Space Station) in the 2019 timeframe. The HISUI hyperspectral sensor obtains spectral images in 185 bands with a ground sampling distance of 20 × 31 m, covering the visible to shortwave-infrared wavelength region. The sensor is the follow-on mission of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) in the visible to shortwave-infrared region. The critical design review of the instrument was accomplished in 2014. Integration and tests of the Flight Model (FM) of the HISUI hyperspectral sensor were completed in the beginning of 2017. Simultaneously, the development of the JEM External Facility (EF) payload system for the instrument is being carried out. The system includes the structure, the thermal control subsystem, and the electrical subsystem. The test results of the flight model, such as optical performance, optical distortion, and radiometric performance, are reported.

  5. Human retinal imaging using visible-light optical coherence tomography guided by scanning laser ophthalmoscopy

    PubMed Central

    Yi, Ji; Chen, Siyu; Shu, Xiao; Fawzi, Amani A.; Zhang, Hao F.

    2015-01-01

We achieved human retinal imaging using visible-light optical coherence tomography (vis-OCT) guided by an integrated scanning laser ophthalmoscope (SLO). We adapted a spectral-domain OCT configuration and used a supercontinuum laser as the illuminating source. The center wavelength was 564 nm and the bandwidth was 115 nm, which provided a 0.97 µm axial resolution measured in air. We characterized the sensitivity to be 86 dB with 226 µW incident power on the pupil. We also integrated an SLO that shared the same optical path as the vis-OCT sample arm for alignment purposes. We demonstrated retinal imaging from both systems centered at the fovea and the optic nerve head with 20° × 20° and 10° × 10° fields of view. We observed similar anatomical structures in vis-OCT and NIR-OCT. The contrast differed between vis-OCT and NIR-OCT, with slightly weaker signal from intra-retinal layers and increased visibility and contrast of anatomical layers in the outer retina. PMID:26504622
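The stated source parameters can be checked against the standard theoretical axial resolution for a Gaussian spectrum, Δz = (2 ln 2/π)·λ₀²/Δλ. With 564 nm center and 115 nm bandwidth this gives about 1.2 µm, the same order as the 0.97 µm measured in air; the supercontinuum spectrum is not strictly Gaussian, so the measured figure can differ from the Gaussian formula.

```python
import math

def axial_resolution_um(center_nm, bandwidth_nm):
    """Theoretical OCT axial resolution in air for a Gaussian source:
    dz = (2 ln 2 / pi) * lambda0^2 / dlambda, returned in micrometers."""
    dz_nm = (2 * math.log(2) / math.pi) * center_nm**2 / bandwidth_nm
    return dz_nm / 1000.0  # nm -> um

print(round(axial_resolution_um(564, 115), 2))  # Gaussian-spectrum estimate, ~1.22 um
```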

  6. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating an infrared image with a visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy, and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
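The DCT-domain weighting step can be illustrated with numpy alone; the PSO search for the weights is omitted, and a fixed weight w stands in for the optimized one. The orthonormal DCT-II matrix is built by hand so no transform library is needed.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: D @ x computes the 1-D DCT of x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0, :] = np.sqrt(1.0 / n)  # DC row has a different normalization
    return D

def fuse_dct(img_a, img_b, w):
    """Fuse two equally sized images by weighting their 2-D DCT
    coefficients (w for img_a, 1 - w for img_b) and inverting."""
    n, m = img_a.shape
    Dn, Dm = dct_matrix(n), dct_matrix(m)
    Ca = Dn @ img_a @ Dm.T          # forward 2-D DCT of each image
    Cb = Dn @ img_b @ Dm.T
    C = w * Ca + (1 - w) * Cb       # weighted combination of coefficients
    return Dn.T @ C @ Dm            # orthonormal inverse is the transpose
```

Since the DCT is linear, a single global weight reduces to a pixel-domain blend; the advantage in the scheme above comes from block-wise weights chosen by PSO, which this sketch does not reproduce.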

  7. Hyperspectral venous image quality assessment for optimum illumination range selection based on skin tone characteristics

    PubMed Central

    2014-01-01

Background Subcutaneous vein localization is usually performed manually by medical staff to find a suitable vein in which to insert a catheter for medication delivery or blood sampling. The rule of thumb is to find a vein large and straight enough for the medication to flow inside the selected blood vessel without any obstruction. The problem of difficult peripheral venous access arises when a patient's veins are not visible for any reason, such as dark skin tone, presence of hair, high body fat, or dehydration. Methods To enhance the visibility of veins, near-infrared imaging systems are used to assist medical staff in the vein localization process. Optimum illumination is crucial to obtain better image contrast and quality, taking into consideration the limited power and space on portable imaging systems. In this work a hyperspectral image quality assessment is performed to find the optimum illumination range for a venous imaging system. A database of hyperspectral images from 80 subjects was created, and subjects were divided into four classes on the basis of their skin tone. The results of the hyperspectral image analyses are presented as a function of patient skin tone. For each patient, four mean images were constructed by averaging over 50 nm spectral spans within the near-infrared range, i.e., 750–950 nm. Statistical quality measures were used to analyze these images. Conclusion The wavelength range of 800 to 850 nm serves as the optimum illumination range for the best near-infrared venous image quality for every skin tone. PMID:25087016
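The four mean images described (one per 50 nm span across 750–950 nm) can be computed from a hyperspectral cube as follows; the band-to-wavelength mapping in the sketch is illustrative, not the instrument's actual calibration.

```python
import numpy as np

def window_means(cube, wavelengths, lo=750.0, hi=950.0, width=50.0):
    """Average a hyperspectral cube (H x W x B) over consecutive
    `width`-nm spectral windows between `lo` and `hi` nm, returning one
    mean image per window (four windows for 750-950 nm)."""
    edges = np.arange(lo, hi + 1e-9, width)
    out = []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (wavelengths >= a) & (wavelengths < b)  # bands in this window
        out.append(cube[:, :, sel].mean(axis=2))
    return np.stack(out)
```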

  8. Toward More Accurate Iris Recognition Using Cross-Spectral Matching.

    PubMed

    Nalla, Pattabhi Ramaiah; Kumar, Ajay

    2017-01-01

Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions. However, with the availability of a variety of iris sensors deployed for iris imaging under different illuminations and environments, significant performance degradation is expected when matching iris images acquired under two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random field model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest neighbor classification, uses a real-valued feature representation that is capable of learning domain knowledge. Our approach, which estimates corresponding visible iris patterns from the synthesis of iris patches in near-infrared iris images, achieves outperforming results for cross-spectral iris recognition. In addition, a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondence is proposed and evaluated. This paper presents experimental results from three publicly available databases (the PolyU cross-spectral iris image database, IIITD CLI, and the UND database), achieving outperforming results for cross-sensor and cross-spectral iris matching.

  9. Land Cover Classification of the Jornada Experimental Range with Simulated HyspIRI Data

    NASA Astrophysics Data System (ADS)

    Thorp, K. R.; French, A. N.

    2011-12-01

The proposed NASA mission, HyspIRI, would facilitate the use of hyperspectral satellite remote sensing images for monitoring a variety of Earth system processes. We utilized four years of AVIRIS data of the USDA Jornada Experimental Range in southern New Mexico to simulate the visible and near-infrared bands of the planned HyspIRI satellite. Vegetation dynamics at Jornada have been the subject of several recent studies due to concerns about invasive plant species encroaching on native rangeland grasses. Our objective was to assess the added value of simulated HyspIRI images for appropriately classifying rangeland vegetation. The AVIRIS images were georeferenced to an orthophoto of the region, and the 6S code was used for atmospheric correction. Images were resampled to simulate HyspIRI wavebands in the visible and near-infrared. Supervised image classification based on observed spectra of rangeland vegetation species was used to map spatial vegetation cover classes and their temporal dynamics over four years. Forthcoming results will identify the added value of hyperspectral images, as compared to broadband images, for monitoring vegetation dynamics at Jornada.

  10. An image quality comparison study between XVI and OBI CBCT systems.

    PubMed

    Kamath, Srijit; Song, William; Chvetsov, Alexei; Ozawa, Shuichi; Lu, Haibin; Samant, Sanjiv; Liu, Chihray; Li, Jonathan G; Palta, Jatinder R

    2011-02-04

The purpose of this study is to evaluate and compare image quality characteristics for two commonly used and commercially available CBCT systems: the X-ray Volumetric Imager (XVI) and the On-Board Imager (OBI). A commonly used CATPHAN image quality phantom was used to measure various image quality parameters, namely, pixel value stability and accuracy, noise, contrast-to-noise ratio (CNR), high-contrast resolution, low-contrast resolution, and image uniformity. For the XVI unit, we evaluated the image quality for four manufacturer-supplied protocols as a function of mAs. For the OBI unit, we did the same for the full-fan and half-fan scanning modes, which were used with the full bow-tie and half bow-tie filters, respectively. For XVI, the mean pixel values of regions of interest were found to generally decrease with increasing mAs for all protocols, while they were relatively stable with mAs for OBI. Noise was slightly lower on XVI and was seen to decrease with increasing mAs, while CNR increased with mAs for both systems. For XVI and OBI, the high-contrast resolution was approximately limited by the pixel resolution of the reconstructed image. On OBI images, up to 6 and 5 discs of 1% and 0.5% contrast, respectively, were visible at a high mAs setting using the full-fan mode, while none of the discs were clearly visible on the XVI images for various mAs settings when the medium-resolution reconstruction was used. In conclusion, image quality parameters for XVI and OBI have been quantified and compared for clinical protocols under various mAs settings. These results need to be viewed in the context of a recent study that reported the dose-mAs relationship for the two systems and found that OBI generally delivered higher imaging doses than XVI.
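CNR, one of the parameters measured above, is commonly computed as the ROI-vs-background mean difference normalized by the background noise. A minimal sketch of that common definition (not necessarily the exact formula this study used):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| divided by
    the standard deviation of the background region."""
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    return abs(roi.mean() - bg.mean()) / bg.std()
```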

  11. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
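The patented LaRC multiscale Retinex is not reproduced here, but the underlying single-scale operation (log of the image minus log of a Gaussian-blurred surround estimate) can be sketched with numpy alone; the sigma value is illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel (sums to 1)."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along each axis,
    with edge padding so the output matches the input shape."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, pad, mode='edge'), k, 'valid')
    tmp = np.apply_along_axis(conv, 1, np.asarray(img, float))
    return np.apply_along_axis(conv, 0, tmp)

def retinex_ssr(img, sigma=15.0, eps=1e-6):
    """Single-scale Retinex: log(image) minus log of its blurred surround,
    which suppresses slowly varying illumination and boosts local contrast."""
    img = np.asarray(img, dtype=float)
    return np.log(img + eps) - np.log(blur(img, sigma) + eps)
```

A multiscale version would average this output over several sigmas; a flat (constant-illumination) image maps to zero, since image and surround coincide.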

  12. Anatomical education and surgical simulation based on the Chinese Visible Human: a three-dimensional virtual model of the larynx region.

    PubMed

    Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang

    2013-09-01

Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization and interaction with the virtual reality modeling language model, an instructional digital laryngeal anatomy module was constructed using HTML and JavaScript. The volumetric larynx model can thus display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system of the digital laryngeal anatomy can be read remotely, displayed locally, and manipulated interactively.

  13. Photogrammetric mobile satellite service prediction

    NASA Technical Reports Server (NTRS)

    Akturan, Riza; Vogel, Wolfhard J.

    1994-01-01

    Photographic images of the sky were taken with a camera through a fisheye lens with a 180 deg field-of-view. The images of rural, suburban, and urban scenes were analyzed on a computer to derive quantitative information about the elevation angles at which the sky becomes visible. Such knowledge is needed by designers of mobile and personal satellite communications systems and is desired by customers of these systems. The 90th percentile elevation angle of the skyline was found to be 10 deg, 17 deg, and 51 deg in the three environments. At 8 deg, 75 percent, 75 percent, and 35 percent of the sky was visible, respectively. The elevation autocorrelation fell to zero with a 72 deg lag in the rural and urban environment and a 40 deg lag in the suburb. Mean estimation errors are below 4 deg.
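The statistics reported above (90th-percentile skyline elevation, fraction of sky visible at a given elevation angle) can be computed directly from skyline-elevation samples around the horizon; the sample data in the test are illustrative, not the study's measurements.

```python
import numpy as np

def skyline_stats(skyline_deg, elevation_deg):
    """Given the skyline elevation (degrees) sampled over azimuth, return
    (fraction of directions with sky visible at `elevation_deg`, i.e. the
    skyline lies below it; 90th-percentile skyline elevation)."""
    s = np.asarray(skyline_deg, dtype=float)
    visible_frac = float(np.mean(s < elevation_deg))
    return visible_frac, float(np.percentile(s, 90))
```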

  14. Topological properties of the limited penetrable horizontal visibility graph family

    NASA Astrophysics Data System (ADS)

    Wang, Minggang; Vilela, André L. M.; Du, Ruijin; Zhao, Longfeng; Dong, Gaogao; Tian, Lixin; Stanley, H. Eugene

    2018-05-01

    The limited penetrable horizontal visibility graph algorithm was recently introduced to map time series in complex networks. In this work, we extend this algorithm to create a directed-limited penetrable horizontal visibility graph and an image-limited penetrable horizontal visibility graph. We define two algorithms and provide theoretical results on the topological properties of these graphs associated with different types of real-value series. We perform several numerical simulations to check the accuracy of our theoretical results. Finally, we present an application of the directed-limited penetrable horizontal visibility graph to measure real-value time series irreversibility and an application of the image-limited penetrable horizontal visibility graph that discriminates noise from chaos. We also propose a method to measure the systematic risk using the image-limited penetrable horizontal visibility graph, and the empirical results show the effectiveness of our proposed algorithms.
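A limited penetrable horizontal visibility graph can be built directly from its definition: nodes i and j are linked when at most L intermediate points rise to or above min(y_i, y_j), and L = 0 recovers the ordinary horizontal visibility graph. A brute-force O(n²) sketch (the paper's analysis, not its implementation, is the source here):

```python
import numpy as np

def lphvg_edges(series, L=0):
    """Edge set of the limited penetrable horizontal visibility graph:
    (i, j) is an edge iff at most L points strictly between i and j have
    height >= min(y_i, y_j). L = 0 gives the standard HVG."""
    y = np.asarray(series, dtype=float)
    edges = set()
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            blockers = int(np.sum(y[i + 1:j] >= min(y[i], y[j])))
            if blockers <= L:
                edges.add((i, j))
    return edges
```

Raising L adds edges, since each extra unit of penetrability tolerates one more blocking point between a pair.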

  15. Mobile Aerial Tracking and Imaging System (MATrIS) for Aeronautical Research

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.; Blanchard, Robert C.; Miller, Geoffrey M.

    2004-01-01

    A mobile, rapidly deployable ground-based system to track and image targets of aeronautical interest has been developed. Targets include reentering reusable launch vehicles as well as atmospheric and transatmospheric vehicles. The optics were designed to image targets in the visible and infrared wavelengths. To minimize acquisition cost and development time, the system uses commercially available hardware and software where possible. The conception and initial funding of this system originated with a study of ground-based imaging of global aerothermal characteristics of reusable launch vehicle configurations. During that study the National Aeronautics and Space Administration teamed with the Missile Defense Agency/Innovative Science and Technology Experimentation Facility to test techniques and analysis on two Space Shuttle flights.

  16. Visibility and artifacts of gold fiducial markers used for image guided radiation therapy of pancreatic cancer on MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurney-Champion, Oliver J., E-mail: o.j.gurney-champion@amc.uva.nl; Lens, Eelco; Horst, Astrid van der

    2015-05-15

Purpose: In radiation therapy of pancreatic cancer, tumor alignment prior to each treatment fraction is improved when intratumoral gold fiducial markers (from here onwards: markers), which are visible on computed tomography (CT) and cone beam CT, are used. Visibility of these markers on magnetic resonance imaging (MRI) might improve image registration between CT and magnetic resonance (MR) images for tumor delineation purposes. However, concomitant image artifacts induced by markers are undesirable. The extent of visibility and artifact size depend on MRI-sequence parameters. The authors' goal was to determine for various markers their potential to be visible and to generate artifacts, using measures that are independent of the MRI-sequence parameters. Methods: The authors selected ten different markers suitable for endoscopic placement in the pancreas and placed them into a phantom. The markers varied in diameter (0.28–0.6 mm), shape, and iron content (0%–0.5%). For each marker, the authors calculated T2*-maps and ΔB0-maps using MRI measurements. A decrease in relaxation time T2* can cause signal voids, associated with visibility, while a change in the magnetic field B0 can cause signal shifts, which are associated with artifacts. These shifts inhibit accurate tumor delineation. As a measure of potential visibility, the authors used the volume of low T2*, i.e., the volume for which T2* differed from the background by >15 ms. As a measure of potential artifacts, the authors used the volume for which |ΔB0| > 9.4 × 10^-8 T (4 Hz). To test whether there is a correlation between visibility and artifact size, the authors calculated the Spearman's correlation coefficient (R_s) between the volume of low T2* and the volume of high |ΔB0|. The authors compared the maps with images obtained using a clinical MR-sequence.
Finally, for the best visible marker as well as the marker that showed the smallest artifact, the authors compared the phantom data with in vivo MR-images in four pancreatic cancer patients. Results: The authors found a strong correlation (R_s = 1.00, p < 0.01) between the volume of low T2* and the volume with high |ΔB0|. Visibility in clinical MR-images increased with lower T2*. Signal shift artifacts became worse for markers with high |ΔB0|. The marker that was best visible in the phantom, a folded marker with 0.5% iron content, was also visible in vivo, but showed artifacts on diffusion weighted images. The marker with the smallest artifact in the phantom, a small, stretched, ironless marker, was indiscernible on in vivo MR-images. Conclusions: Changes in T2* and ΔB0 are sequence-independent measures for potential visibility and artifact size, respectively. Improved visibility of markers correlates strongly with signal shift artifacts; therefore, marker choice will depend on the clinical purpose. When visibility of the markers is most important, markers that contain iron are optimal, preferably in a folded configuration. For artifact-sensitive imaging, small ironless markers are best, preferably in a stretched configuration.
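The Spearman's R_s used above to relate the low-T2* volume to the high-|ΔB0| volume is the Pearson correlation of ranks. A minimal rank-based version (it ignores tie handling, which a production implementation needs):

```python
import numpy as np

def spearman_rs(x, y):
    """Spearman's rank correlation coefficient, computed as the Pearson
    correlation of the rank vectors (assumes no tied values)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))
```

Any strictly monotone relationship yields R_s = ±1, which is why the phantom result R_s = 1.00 indicates that visibility and artifact volume grow together across the ten markers.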

  17. HIGH-SPEED IMAGING AND WAVEFRONT SENSING WITH AN INFRARED AVALANCHE PHOTODIODE ARRAY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baranec, Christoph; Atkinson, Dani; Hall, Donald

    2015-08-10

Infrared avalanche photodiode (APD) arrays represent a panacea for many branches of astronomy by enabling extremely low-noise, high-speed, and even photon-counting measurements at near-infrared wavelengths. We recently demonstrated the use of an early engineering-grade infrared APD array that achieves a correlated double sampling read noise of 0.73 e− in the lab and a total noise of 2.52 e− on sky, and supports simultaneous high-speed imaging and tip-tilt wavefront sensing with the Robo-AO visible-light laser adaptive optics (AO) system at the Palomar Observatory 1.5 m telescope. Here we report on the improved image quality simultaneously achieved at visible and infrared wavelengths by using the array as part of an image stabilization control loop with AO-sharpened guide stars. We also discuss a newly enabled survey of nearby late-M-dwarf multiplicity, as well as future uses of this technology in other AO and high-contrast imaging applications.

  18. Rare Ultra-blue Stars Found in Neighboring Galaxy's Hub

    NASA Image and Video Library

    2017-12-08

Image release January 11, 2012. A new Hubble Space Telescope image centers on the 100-million-solar-mass black hole at the hub of the neighboring spiral galaxy M31, or the Andromeda galaxy, one of the few galaxies outside the Milky Way visible to the naked eye and the only other giant galaxy in the Local Group. This is the sharpest visible-light image ever made of the nucleus of an external galaxy. The Hubble image is being presented today at the meeting of the American Astronomical Society in Austin, Texas. To read more go to: www.nasa.gov/mission_pages/hubble/science/ultra-blue.html

  19. NOAA GOES Geostationary Satellite Server

    Science.gov Websites

    Navigation links for GOES satellite imagery: West CONUS infrared, visible, and water-vapor images (full size, MPEG, and loop formats), plus Alaska and Hawaii infrared and visible image loops.

  20. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    NASA Astrophysics Data System (ADS)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, measurements of the sparse coefficients are obtained with a random Gaussian matrix and fused by a standard-deviation (SD) based fusion rule; the fused sparse component is then obtained by reconstructing the fused measurement with the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Finally, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. Comparing the fused results subjectively and objectively shows that the proposed framework extracts the infrared targets while retaining the background information of the visible images, exhibiting state-of-the-art performance in terms of both fusion quality and run time.
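The two fusion rules named above can be sketched in a few lines of numpy. This is a minimal illustration that assumes the RPCA decomposition into sparse and low-rank components has already been computed, and the SD weighting shown is a simplified global version of the paper's measurement-domain rule:

```python
import numpy as np

def fuse_low_rank(l_ir, l_vis):
    # Max-absolute rule: at each pixel keep the coefficient
    # with the larger magnitude.
    return np.where(np.abs(l_ir) >= np.abs(l_vis), l_ir, l_vis)

def fuse_sparse(s_ir, s_vis):
    # SD-based rule (simplified): weight each sparse component
    # by its standard deviation, favoring the component that
    # carries more salient (target) energy.
    sd_ir, sd_vis = s_ir.std(), s_vis.std()
    w = sd_ir / (sd_ir + sd_vis + 1e-12)
    return w * s_ir + (1.0 - w) * s_vis

def fuse(l_ir, l_vis, s_ir, s_vis):
    # The fused image is the superposition of the fused components.
    return fuse_low_rank(l_ir, l_vis) + fuse_sparse(s_ir, s_vis)
```

In the paper the SD rule is applied to compressed measurements and the fused sparse component is recovered by FCLALM; the sketch above skips the CS step to keep the fusion logic visible.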

  1. Feasibility of detecting aflatoxin B1 on inoculated maize kernels surface using Vis/NIR hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    The feasibility of using a visible/near-infrared hyperspectral imaging system with a wavelength range between 400 and 1000 nm to detect and differentiate different levels of aflatoxin B1 (AFB1) artificially titrated on maize kernel surface was examined. To reduce the color effects of maize kernels, ...

  2. Research on multi-source image fusion technology in haze environment

    NASA Astrophysics Data System (ADS)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, the visible image collected by a single sensor expresses the shape, color, and texture details of the target well, but because of the haze its sharpness is low and parts of the target are lost. The infrared image collected by a single sensor, thanks to its representation of thermal radiation and strong penetration ability, can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. First, the improved dark channel prior algorithm is used to preprocess the hazy visible image. Second, the improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight the occluded infrared target for target recognition.
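The baseline dark channel prior step can be sketched as follows. This is a simplified, unaccelerated version of the classic method (per-pixel minimum over RGB followed by a local minimum filter, then the standard transmission estimate); the paper uses an improved variant whose details are not given in the abstract:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel min over RGB, then a local minimum
    filter over a patch (simple, unoptimized implementation)."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    dark = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    # Standard dark-channel transmission estimate:
    #   t(x) = 1 - omega * dark_channel(I / A)
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```

The dehazed radiance is then recovered as `J = (I - A) / max(t, t0) + A`, after which registration and fusion proceed on the dehazed visible image.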

  3. Clumps in the F Ring

    NASA Image and Video Library

    2004-03-12

    Scientists have only a rough idea of the lifetime of clumps in Saturn's rings - a mystery that Cassini may help answer. The latest images taken by the Cassini-Huygens spacecraft show clumps seemingly embedded within Saturn's narrow, outermost F ring. The narrow angle camera took the images on Feb. 23, 2004, from a distance of 62.9 million kilometers (39 million miles). The two images, taken nearly two hours apart, show these clumps as they revolve about the planet. The small dot at center right in the second image is one of Saturn's small moons, Janus, which is 181 kilometers (112 miles) across. Like all particles in Saturn's ring system, these clump features orbit the planet in the same direction in which the planet rotates. This direction is clockwise as seen from Cassini's southern vantage point below the ring plane. Two clumps in particular, one of them extended, are visible in the upper part of the F ring in the image on the left, and in the lower part of the ring in the image on the right. Other knot-like irregularities in the ring's brightness are visible in the image on the right. The core of the F ring is about 50 kilometers (31 miles) wide, and from Cassini's current distance, is not fully visible. The imaging team enhanced the contrast of the images and magnified them to aid visibility of the F ring and the clump features. The camera took the images with the green filter, which is centered at 568 nanometers. The image scale is 377 kilometers (234 miles) per pixel. NASA's two Voyager spacecraft that flew past Saturn in 1980 and 1981 were the first to see these clumps. The Voyager data suggest that the clumps change very little and can be tracked as they orbit for 30 days or more. No clump survived from the time of the first Voyager flyby to the Voyager 2 flyby nine months later. Scientists are not certain of the cause of these features. Among the theories proposed are meteoroid bombardments and inter-particle collisions in the F ring.
http://photojournal.jpl.nasa.gov/catalog/PIA05382

  4. Estimation of cloud optical thickness by processing SEVIRI images and implementing a semi analytical cloud property retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Pandey, P.; De Ridder, K.; van Lipzig, N.

    2009-04-01

    Clouds play a very important role in the Earth's climate system, as they form an intermediate layer between the Sun and the Earth. Satellite remote sensing systems are the only means to provide information about clouds on large scales. The geostationary satellite Meteosat Second Generation (MSG) carries an imaging radiometer, the Spinning Enhanced Visible and Infrared Imager (SEVIRI). SEVIRI is a 12-channel imager, with 11 channels observing the Earth's full disk at a temporal resolution of 15 min and a spatial resolution of 3 km at nadir, plus a high-resolution visible (HRV) channel. The visible channels (0.6 µm and 0.81 µm) and near-infrared channel (1.6 µm) of SEVIRI are used to retrieve the cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N-70°N and 10°W-30°E. SEVIRI level 1.5 images over this domain are acquired from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) archive. Processing this imagery involves a number of steps before estimating the COT. The pre-processing steps are as follows. First, the digital count number is extracted from the imagery. Image geo-coding is performed to relate pixel positions to the corresponding longitude and latitude. The solar zenith angle is determined as a function of latitude and time. Radiometric conversion is performed using the offset and slope values of each band. The radiance values obtained are then used to calculate the reflectance for channels in the visible spectrum using the solar zenith angle. The COT is then estimated from the observed radiances. A semi-analytical algorithm [Kokhanovsky et al., 2003] is implemented for the estimation of cloud optical thickness from the visible-spectrum intensity of light reflected from clouds.
    The asymptotic solution of the radiative transfer equation for clouds with large optical thickness is the basis of this algorithm. The two visible channels of SEVIRI are used to find the COT, and the near-infrared channel is used to estimate the effective droplet radius. The aim of this work is to estimate COT using a semi-analytical scheme that does not involve the conventional look-up-table approach; subsequently, the vertically integrated liquid water (w) or ice water content will be retrieved. The estimated COT and w will be compared with values obtained from other approaches and validated against in situ measurements. Corresponding author address: Praveen Pandey, VITO - Flemish Institute for Technological Research, Boeretang 200, B 2400, Mol, Belgium. E-mail: praveen.pandey@vito.be
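The pre-processing chain described above (digital counts to radiance to reflectance) can be sketched as follows. The slope and offset are per-band calibration constants taken from the SEVIRI level 1.5 header, and the reflectance formula is the standard reflectance-factor conversion rather than EUMETSAT's exact recipe; the function names are illustrative:

```python
import numpy as np

def radiance_from_counts(counts, slope, offset):
    # Radiometric conversion for one band: L = slope * DC + offset.
    return slope * counts + offset

def reflectance_factor(radiance, solar_irradiance, sza_deg, esd=1.0):
    # Standard bidirectional reflectance factor for a visible channel:
    #   r = pi * L * d^2 / (E0 * cos(theta_s))
    # where d is the Earth-Sun distance in AU, E0 the band solar
    # irradiance, and theta_s the solar zenith angle.
    mu0 = np.cos(np.radians(sza_deg))
    return np.pi * radiance * esd**2 / (solar_irradiance * mu0)
```

The resulting reflectances in the 0.6 µm and 0.81 µm channels are the inputs to the semi-analytical COT retrieval.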

  5. Venus - Comparison of Venera and Magellan Resolutions

    NASA Image and Video Library

    1996-09-26

    These radar images show an identical area on Venus (centered at 110 degrees longitude and 64 degrees north latitude) as imaged by the U.S. NASA Magellan spacecraft in 1991 (left) and the U.S.S.R. Venera 15/16 spacecraft in the early 1980's (right). Illumination is from the left (or west) in the Magellan image (left) and from the right (or east) in the Venera image (right). Differences in apparent shading in the images are due to differences in the two radar imaging systems. Prior to Magellan, the Venera 15/16 data was the best available for scientists studying Venus. Much greater detail is visible in the Magellan image owing to the greater resolution of the Magellan radar system. In the area seen here, approximately 200 small volcanoes, ranging in diameter from 2 to 12 kilometers (1.2 to 7.4 miles) can be identified. These volcanoes were first identified as small hills in Venera 15/16 images and were predicted to be shield-type volcanoes constructed mainly from eruptions of fluid lava flows similar to those that produce the Hawaiian Islands and sea floor volcanoes - a prediction that was confirmed by Magellan. These small shield-type volcanoes are the most abundant geologic feature on the surface of Venus, believed to number in the hundreds of thousands, perhaps millions, and are important evidence in understanding the geologic evolution of the planet. The only other planet in our Solar System with this large number of volcanoes is Earth. Clearly visible in the Magellan image are details of volcano morphology, such as variation in slope, the occurrence and size range of summit craters, and geologic age relationships between adjacent volcanoes, as well as additional volcanoes that were not identifiable in the Venera image. http://photojournal.jpl.nasa.gov/catalog/PIA00465

  6. Investigation of imaging and flight guidance concepts for rotorcraft zero visibility approach and landing

    NASA Technical Reports Server (NTRS)

    Mckeown, W. L.

    1984-01-01

    A simulation experiment exploring the use of an augmented pictorial display to approach and land a helicopter in zero-visibility conditions was conducted in a fixed-base simulator. A literature search was also conducted to identify related work. A display was developed and pilot-in-the-loop evaluations were conducted. The pictorial display was a simulated, high-resolution radar image, augmented with various parameters to improve distance and motion cues. Approaches and landings were accomplished, but with higher workloads and less accuracy than necessary for a practical system. Recommendations are provided for display improvements and a follow-on simulation study in a moving-base simulator.

  7. Method and apparatus for calibrating a tiled display

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Jen (Inventor); Johnson, Michael J. (Inventor); Chandrasekhar, Rajesh (Inventor)

    2001-01-01

    A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras are provided to capture an image of the display screen. The resulting captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal that is provided to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.
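As a toy illustration of the pre-warp idea for the luminance-non-uniformity case only (the patent's transformation also handles spatial and color non-uniformity and seam artifacts; the helper names here are hypothetical):

```python
import numpy as np

def build_luminance_correction(captured, target_level):
    """From a camera capture of a flat-field test image, derive a
    per-pixel gain map that flattens luminance non-uniformity."""
    eps = 1e-6
    return target_level / np.maximum(captured, eps)

def pre_warp(frame, gain, max_val=255.0):
    # Apply the transformation to the input video frame so that,
    # after the display's non-uniformity acts on it, the result
    # appears uniform to the viewer.
    return np.clip(frame * gain, 0.0, max_val)
```

In the patented system the camera capture, artifact identification, and transformation update run as a loop, so the tiled display can be re-calibrated with minimal manual intervention.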

  8. Making Visible the Invisible

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Duncan Technologies, Inc., (DTI) developed an infrared imaging system for detection of hydrogen flames in the Space Shuttle Main Engines. The product is the result of a NASA Small Business Innovation Research (SBIR) award from the Stennis Space Center.

  9. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
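The square-then-demultiplex idea can be illustrated numerically with Sylvester-ordered Walsh codes, for which the elementwise product of rows i and j equals row i XOR j, so the product of two codes is indeed a third orthogonal code. This is a toy two-element sketch, not the authors' hardware model:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
H = hadamard(N)
c1, c2 = H[1], H[2]      # orthogonal codes applied in the two front-ends
c3 = c1 * c2             # equals H[3]: a third, unique orthogonal code
x1, x2 = 0.7, -1.3       # toy antenna signals, constant over one code period
s = c1 * x1 + c2 * x2    # code-modulated signals combined in one shared path
# Square the combined signal, then correlate with c3 to demultiplex
# the desired correlation (visibility) x1 * x2 directly.
visibility = (s**2 * c3).sum() / (2 * N)
```

Squaring `s` produces the cross term `2*c1*c2*x1*x2`; the constant terms `x1**2 + x2**2` vanish when correlated against the zero-mean code `c3`, leaving the visibility.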

  10. Visible light high-resolution imaging system for large aperture telescope by liquid crystal adaptive optics with phase diversity technique.

    PubMed

    Xu, Zihao; Yang, Chengliang; Zhang, Peiguang; Zhang, Xingyun; Cao, Zhaoliang; Mu, Quanquan; Sun, Qiang; Xuan, Li

    2017-08-30

    To date, more than eight large-aperture telescopes (larger than eight meters) worldwide are equipped with adaptive optics systems. Due to limitations such as the difficulty of increasing the actuator count of deformable mirrors, most of them work in the infrared waveband. A novel two-step high-resolution optical imaging approach is proposed that applies the phase diversity (PD) technique to an open-loop liquid crystal adaptive optics system (LC AOS) for high-resolution adaptive imaging in visible light. Because traditional PD is not suitable for an LC AOS, a new PD strategy is proposed that reduces the wavefront estimation error caused by non-modulated light from the liquid crystal spatial light modulator (LC SLM) and shrinks the residual distortions after open-loop correction. Moreover, the LC SLM can introduce an arbitrary aberration, allowing free selection of the phase diversity. Estimation errors are greatly reduced in both simulations and experiments, and the resolution of the reconstructed image improves markedly in both subjective visual quality and the highest discernible spatial resolution. This technique can be widely used on large-aperture telescopes for astronomical observations of targets such as terrestrial planets and quasars, as well as in other applications related to wavefront correction.

  11. Optical system design for a Lunar Optical Interferometer

    NASA Technical Reports Server (NTRS)

    Colavita, M. M.; Shao, M.; Hines, B. E.; Levine, B. M.; Gershman, R.

    1991-01-01

    The Moon offers particular advantages for interferometry, including a vacuum environment, a large stable base on which to assemble multi-kilometer baselines, and a cold nighttime temperature to allow for passive cooling of optics for high IR sensitivity. A baseline design for a Lunar Optical Interferometer (LOI) which exploits these features is presented. The instrument operates in the visible to mid-IR region, and is designed for both astrometry and synthesis imaging. The design uses a Y-shaped array of 12 siderostats, with maximum arm lengths of about 1 km. The inner siderostats are monitored in three dimensions from a central laser metrology structure to allow for high precision astrometry. The outer siderostats, used primarily for synthesis imaging, exploit the availability of bright reference stars in order to determine the instrument geometry. The path delay function is partitioned into coarse and fine components, the former accomplished with switched banks of range mirrors monitored with an absolute laser metrology system, and the latter with a short cat's eye delay line. The back end of the instrument is modular, allowing for beam combiners for astrometry, visible and IR synthesis imaging, and direct planet detection. With 1 m apertures, the instrument will have a point-source imaging sensitivity of about 29 mag; with the laser metrology system, astrometry at the microarcsecond level will be possible.

  12. Design of large zoom for visible and infrared optical system in hemisphere space

    NASA Astrophysics Data System (ADS)

    Xing, Yang-guang; Li, Lin; Zhang, Juan

    2018-01-01

    In the field of space optics, the application of advanced optical instruments for target detection and identification has become an advanced technology in modern optics. To accomplish both wide-field search and narrow-field detailed investigation, a zoom system structure is needed to achieve better observation of important targets. The innovation of this paper lies in applying a zoom optical system to space detection, which for the first time meets military needs for target search in a large field of view and target recognition in a small field of view. This paper also presents, for the first time, the design of a variable-focus optical detection system covering hemispherical space. The zoom optical system works across the visible and infrared wavebands, its viewing angle reaches 360°, and the zoom ratio of the visible system is up to 15. The visible system has a zoom range of 60-900 mm, a detection band of 0.48-0.70 µm, and an F-number of 2.0 to 5.0. The infrared system has a zoom range of 150-900 mm, a detection band of 8-12 µm, and an F-number of 1.2 to 3.0. The MTF of the visible zoom system is above 0.4 at a spatial frequency of 45 lp/mm, and that of the infrared zoom system is above 0.4 at 11 lp/mm. The design results show that the system has good image quality.

  13. Near-IR and CP-OCT Imaging of Suspected Occlusal Caries Lesions

    PubMed Central

    Simon, Jacob C.; Kang, Hobin; Staninec, Michal; Jang, Andrew T.; Chan, Kenneth H.; Darling, Cynthia L.; Lee, Robert C.; Fried, Daniel

    2017-01-01

    Introduction Radiographic methods have poor sensitivity for occlusal lesions, and by the time the lesions are radiolucent they have typically progressed deep into the dentin. New, more sensitive imaging methods are needed to detect occlusal lesions. In this study, cross-polarization optical coherence tomography (CP-OCT) and near-IR imaging were used to image questionable occlusal lesions (QOCs) that were not visible on radiographs but had been scheduled for restoration on 30 test subjects. Methods Near-IR reflectance and transillumination probes incorporating a high definition InGaAs camera and near-IR broadband light sources were used to acquire images of the lesions before restoration. The reflectance probe utilized cross-polarization and operated at wavelengths from 1500–1700-nm where there is an increase in water absorption for higher contrast. The transillumination probe was operated at 1300-nm where the transparency of enamel is highest. Tomographic images (6×6×7 mm3) of the lesions were acquired using a high-speed swept-source CP-OCT system operating at 1300-nm before and after removal of the suspected lesion. Results Near-IR reflectance imaging at 1500–1700-nm yielded significantly higher contrast (p<0.05) of the demineralization in the occlusal grooves compared with visible reflectance imaging. Stains in the occlusal grooves greatly reduced the lesion contrast in the visible range, yielding negative values. Only half of the 26 lesions analyzed showed the characteristic surface demineralization and increased reflectivity below the dentinal-enamel junction (DEJ) in 3D OCT images indicative of penetration of the lesion into the dentin. Conclusion This study demonstrates that near-IR imaging methods have great potential for improving the early diagnosis of occlusal lesions. PMID:28339115

  14. Moon - North Pole Mosaic

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This view of the Moon's north pole is a mosaic assembled from 18 images taken by Galileo's imaging system through a green filter as the spacecraft flew by on December 7, 1992. The left part of the Moon is visible from Earth; this region includes the dark, lava-filled Mare Imbrium (upper left); Mare Serenitatis (middle left); Mare Tranquillitatis (lower left), and Mare Crisium, the dark circular feature toward the bottom of the mosaic. Also visible in this view are the dark lava plains of the Marginis and Smythii Basins at the lower right. The Humboldtianum Basin, a 650-kilometer (400-mile) impact structure partly filled with dark volcanic deposits, is seen at the center of the image. The Moon's north pole is located just inside the shadow zone, about a third of the way from the top left of the illuminated region.

  15. Fractured Craters on Ganymede

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Two highly fractured craters are visible in this high resolution image of Jupiter's moon, Ganymede. NASA's Galileo spacecraft imaged this region as it passed Ganymede during its second orbit through the Jovian system. North is to the top of the picture and the sun illuminates the surface from the southeast. The two craters in the center of the image lie in the ancient dark terrain of Marius Regio, at 40 degrees latitude and 201 degrees longitude, at the border of a region of bright grooved terrain known as Byblus Sulcus (the eastern portion of which is visible on the left of this image). Pervasive fracturing has occurred in this area that has completely disrupted these craters and destroyed their southern and western walls. Such intense fracturing has occurred over much of Ganymede's surface and has commonly destroyed older features. The image covers an area approximately 26 kilometers (16 miles) by 18 kilometers (11 miles) across at a resolution of 86 meters (287 feet) per picture element. The image was taken on September 6, 1996 by the solid state imaging (CCD) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  16. Dental magnetic resonance imaging: making the invisible visible.

    PubMed

    Idiyatullin, Djaudat; Corum, Curt; Moeller, Steen; Prasad, Hari S; Garwood, Michael; Nixdorf, Donald R

    2011-06-01

    Clinical dentistry is in need of noninvasive and accurate diagnostic methods to better evaluate dental pathosis. The purpose of this work was to assess the feasibility of a recently developed magnetic resonance imaging (MRI) technique, called SWeep Imaging with Fourier Transform (SWIFT), to visualize dental tissues. Three in vitro teeth, representing a limited range of clinical conditions of interest, were imaged using a 9.4T system with scanning times ranging from 100 seconds to 25 minutes. In vivo imaging of a subject was performed using a 4T system with a 10-minute scanning time. SWIFT images were compared with traditional two-dimensional radiographs, three-dimensional cone-beam computed tomography (CBCT) scanning, the gradient-echo MRI technique, and histological sections. A resolution of 100 μm was obtained from in vitro teeth. SWIFT also identified the presence and extent of dental caries and fine structures of the teeth, including cracks and accessory canals, which are not visible with existing clinical radiography techniques. Intraoral positioning of the radiofrequency coil produced initial images of multiple adjacent teeth at a resolution of 400 μm. SWIFT MRI offers simultaneous three-dimensional hard- and soft-tissue imaging of teeth without the use of ionizing radiation. Furthermore, it has the potential to image minute dental structures within clinically relevant scanning times. This technology has implications for endodontists because it offers a potential method to longitudinally evaluate teeth where pulp and root structures have been regenerated. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  17. Survey view of EXPRESS Rack 4 in the JPM during Expedition 22

    NASA Image and Video Library

    2009-12-30

    iss022e015852 (12/30/2009) --- The image shows a front view of the EXpedite the PRocessing of Experiments to Space Station (EXPRESS) Rack 4 (Rack 4, JPM/1F5) in the Japanese Experiment Module (JEM) Japanese Pressurized Module (JPM). Equipment visible in the EXPRESS Rack includes the Biotechnology Specimen Temperature Controller (BSTC) and the Gas Supply Module (GSM) support hardware for the Cellular Biotechnology Operations Support Systems (CBOSS) investigations, and the Device for the Study of Critical Liquids and Crystallization (DECLIC). Also visible is the Space Acceleration Measurement System (SAMS) II.

  18. Development of a quantitative assessment method of pigmentary skin disease using ultraviolet optical imaging.

    PubMed

    Lee, Onseok; Park, Sunup; Kim, Jaeyoung; Oh, Chilhwan

    2017-11-01

    The visual scoring method has been used as a subjective evaluation of pigmentary skin disorders. Severity of pigmentary skin disease, especially melasma, is evaluated using a visual scoring method, the MASI (melasma area severity index). This study differentiates between epidermal and dermal pigmented disease. The study was undertaken to determine methods to quantitatively measure the severity of pigmentary skin disorders under ultraviolet illumination. The optical imaging system consists of illumination (white LED, UV-A lamp) and image acquisition (DSLR camera, air-cooled CMOS CCD camera). Each camera is equipped with a polarizing filter to remove glare. For analysis, the visible- and UV-light images of melasma patients are divided into frontal, cheek, and chin regions, and each image undergoes image processing. To reduce the curvature error in facial contours, a gradient mask is used. The new method of segmenting frontal and lateral facial images is more objective for face-area measurement than the MASI score. Image analysis of darkness and homogeneity is adequate to quantify the conventional MASI score. Under visible light, active lesion margins appear in both epidermal and dermal melanin, whereas melanin is found in the epidermis under UV light. This study objectively analyzes the severity of melasma and attempts to develop new methods of image analysis with ultraviolet optical imaging equipment. Based on the results of this study, our optical imaging system could be used as a valuable tool to assess the severity of pigmentary skin disease. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. A High-resolution Multi-wavelength Simultaneous Imaging System with Solar Adaptive Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Changhui; Zhu, Lei; Gu, Naiting

    A high-resolution multi-wavelength simultaneous imaging system covering visible to near-infrared bands with a solar adaptive optics system, in which seven imaging channels, including the G band (430.5 nm), the Na i line (589 nm), the H α line (656.3 nm), the TiO band (705.7 nm), the Ca ii IR line (854.2 nm), the He i line (1083 nm), and the Fe i line (1565.3 nm), are chosen, is developed to image the solar atmosphere from the photosphere layer to the chromosphere layer. To our knowledge, this is the solar high-resolution imaging system with the widest spectral coverage. This system was demonstrated at the 1 m New Vacuum Solar Telescope, and on-sky high-resolution observational results were acquired. In this paper, we illustrate the design and performance of the imaging system. The calibration and the data reduction of the system are also presented.

  20. Digital video system for on-line portal verification

    NASA Astrophysics Data System (ADS)

    Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott

    1990-07-01

    A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
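The enhancement stage can be approximated with a median filter for noise suppression followed by percentile contrast stretching. This is a generic stand-in, since the "advanced digital techniques" are not specified in the abstract:

```python
import numpy as np

def enhance_portal(img, lo_pct=1, hi_pct=99):
    """Suppress noise with a 3x3 median filter, then stretch
    contrast between the given percentiles (scipy-free sketch)."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    # Stack the nine 3x3-neighborhood shifts and take their median.
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    smooth = np.median(stack, axis=0)
    # Percentile stretch to [0, 1] for display on the CRT monitor.
    lo, hi = np.percentile(smooth, [lo_pct, hi_pct])
    return np.clip((smooth - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
```

Portal images are inherently low-contrast because megavoltage beams interact weakly with tissue, which is why both the noise suppression and the contrast stretch matter for on-line verification.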

  1. Design of visible and IR infrared dual-band common-path telescope system

    NASA Astrophysics Data System (ADS)

    Guo, YuLin; Yu, Xun; Tao, Yu; Jiang, Xu

    2018-01-01

    The use of a visible and infrared (IR) dual-band combination can effectively improve the performance of a photoelectric detection system. The TV and IR channels were designed to share a common path through a common reflective optical system, yielding a TV/IR common-aperture, common-path system that can provide long-range, all-day information. For a 640×512 cooled focal-plane array, a mid-wave infrared system was designed with a focal length of 600 mm, an F-number of 4, and a field of view (FOV) of 0.38°×0.43°. The system uses an optically passive athermal design, has a compact structure, achieves 100% cold-shield efficiency, and meets the design requirements of light weight and athermalization. For a 1920×1080-pixel CCD, a visible (TV) system with a 500 mm focal length and an F-number of 4 was completed. The final optical design and its modulation transfer function are presented, showing excellent imaging performance in both bands over the temperature range of -40° to 60°.

  2. Fluorescence-guided tumor visualization using a custom designed NIR attachment to a surgical microscope for high sensitivity imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.

    2016-03-01

    Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.

  3. A multimodal 3D framework for fire characteristics estimation

    NASA Astrophysics Data System (ADS)

    Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.

    2018-02-01

    In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision management system in firefighting.

  4. Improved Cloud Detection Utilizing Defense Meteorological Satellite Program near Infrared Measurements

    DTIC Science & Technology

    1982-01-27

    Visible 3.3 Earth Location, Colocation, and Normalization 4. IMAGE ANALYSIS 4.1 Interactive Capabilities 4.2 Examples 5. AUTOMATED CLOUD...computer Interactive Data Access System (McIDAS) before image analysis and algorithm development were done. Earth-location is an automated procedure to...the factor l/s in (SSE) toward the gain settings given in Table 5. 4. IMAGE ANALYSIS 4.1 Interactive Capabilities The development of automated

  5. Design of direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging.

    PubMed

    Wang, Lei; Shao, Zhengzheng; Tang, Wusheng; Liu, Jiying; Nie, Qianwen; Jia, Hui; Dai, Suian; Zhu, Jubo; Li, Xiujian

    2017-10-20

    A direct-vision Amici prism is a valuable dispersion element in spectrometers and spectral imaging systems. In this paper, we focus on designing a direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging systems. We illustrate a designed structure, E48R/N-SF4/E48R, from which we obtain 13 deg of dispersion across the visible spectrum, equivalent to a 700 line-pairs/mm grating. We construct a simulated spectral imaging system with the designed direct-vision cyclo-olefin-polymer double Amici prism in optical design software and compare its imaging performance to a glass double Amici prism in the same system. The RMS spot-size results demonstrate that the plastic prism can serve as well as its glass competitors while offering better spectral resolution.

  6. A Charge Coupled Device Imaging System For Ophthalmology

    NASA Astrophysics Data System (ADS)

    Rowe, R. Wanda; Packer, Samuel; Rosen, James; Bizais, Yves

    1984-06-01

    A digital camera system has been constructed for obtaining reflectance images of the fundus of the eye with monochromatic light. Images at wavelengths in the visible and near infrared regions of the spectrum are recorded by a charge-coupled device array and transferred to a computer. A variety of image processing operations are performed to restore the pictures, correct for distortions in the image formation process, and extract new and diagnostically useful information. The steps involved in calibrating the system to permit quantitative measurement of fundus reflectance are discussed. Three clinically important applications of such a quantitative system are addressed: the characterization of changes in the optic nerve arising from glaucoma, the diagnosis of choroidal melanoma through spectral signatures, and the early detection and improved management of diabetic retinopathy by measurement of retinal tissue oxygen saturation.

  7. Visible continuum pulses based on enhanced dispersive wave generation for endogenous fluorescence imaging.

    PubMed

    Cui, Quan; Chen, Zhongyun; Liu, Qian; Zhang, Zhihong; Luo, Qingming; Fu, Ling

    2017-09-01

    In this study, we demonstrate endogenous fluorescence imaging using visible continuum pulses based on a 100-fs Ti:sapphire oscillator and a nonlinear photonic crystal fiber. Broadband (500-700 nm) and high-power (150 mW) continuum pulses are generated through enhanced dispersive wave generation by pumping femtosecond pulses in the anomalous dispersion region near the zero-dispersion wavelength of highly nonlinear photonic crystal fibers. We also minimize the continuum pulse width by choosing the proper fiber length. The visible-wavelength two-photon microscopy produces NADH and tryptophan images of mouse tissues simultaneously. Our 500-700 nm continuum pulses support extending nonlinear microscopy to a visible wavelength range that is inaccessible to 100-fs Ti:sapphire oscillators, as well as other applications requiring visible laser pulses.

  8. Comparison of spatial variability in visible and near-infrared spectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1992-01-01

    The visible and near-infrared bands of the Landsat Thematic Mapper (TM) and the Satellite Pour l'Observation de la Terre (SPOT) were analyzed to determine which band contained more spatial variability. It is important for applications that require spatial information, such as those dealing with mapping linear features and automatic image-to-image correlation, to know which spectral band image should be used. Statistical and visual analyses were used in the project. The amount of variance in an 11 by 11 pixel spatial filter and in the first difference at the six spacings of 1, 5, 11, 23, 47, and 95 pixels was computed for the visible and near-infrared bands. The results indicate that the near-infrared band has more spatial variability than the visible band, especially in images covering densely vegetated areas. -Author
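The two statistics described (variance within an 11 by 11 pixel window, and the variance of first differences at the six pixel spacings) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the function names are invented:

```python
import numpy as np

def local_variance(band, win=11):
    """Variance of pixel values inside each win x win sliding window."""
    h, w = band.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = band[i:i + win, j:j + win].var()
    return out

def first_difference_variance(band, spacings=(1, 5, 11, 23, 47, 95)):
    """Variance of horizontal first differences at each pixel spacing."""
    return {s: np.var(band[:, s:].astype(float) - band[:, :-s])
            for s in spacings}
```

Comparing these statistics between a visible and a near-infrared band of the same scene indicates which band carries more spatial variability.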

  9. Opto-mechanical system design of test system for near-infrared and visible target

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhu, Guodong; Wang, Yuchao

    2014-12-01

    Guidance precision is a key index of guided-weapon accuracy. The factors affecting guidance precision include information-processing precision, control-system accuracy, laser-irradiation accuracy, and so on; laser-irradiation precision is an important one. This paper addresses the need for precision testing of laser irradiators and describes the laser precision test system developed for that purpose. The system consists of a modified Cassegrain telescope, a wide-range CCD camera, a tracking turntable, and an industrial PC, and images visible-light and near-infrared targets simultaneously using a near-IR camera. Analysis of the design results shows that, for a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.

  10. A Data Exchange Standard for Optical (Visible/IR) Interferometry

    NASA Astrophysics Data System (ADS)

    Pauls, T. A.; Young, J. S.; Cotton, W. D.; Monnier, J. D.

    2005-11-01

    This paper describes the OI (Optical Interferometry) Exchange Format, a standard for exchanging calibrated data from optical (visible/infrared) stellar interferometers. The standard is based on the Flexible Image Transport System (FITS) and supports the storage of optical interferometric observables, including squared visibility and closure phase, data products not included in radio interferometry standards such as UV-FITS. The format has already gained the support of most currently operating optical interferometer projects, including COAST, NPOI, IOTA, CHARA, VLTI, PTI, and the Keck Interferometer, and is endorsed by the IAU Working Group on Optical Interferometry. Software is available for reading, writing, and merging OI Exchange Format files.

  11. Invisible Base Electrode Coordinates Approximation for Simultaneous SPECT and EEG Data Visualization

    NASA Astrophysics Data System (ADS)

    Kowalczyk, L.; Goszczynska, H.; Zalewska, E.; Bajera, A.; Krolicki, L.

    2014-04-01

    This work was performed as part of larger research concerning the feasibility of improving the localization of epileptic foci, as compared to the standard SPECT examination, by applying the technique of EEG mapping. The presented study extends our previous work on the development of a method for superposition of SPECT images and 3D EEG maps when these two examinations are performed simultaneously. Due to the lack of anatomical data in SPECT images, this is a much more difficult task than in the case of an MRI/EEG study, where the electrodes are visible in the morphological images. Using an appropriate dose of radioisotope, we mark five base electrodes to make them visible in the SPECT image and then approximate the coordinates of the remaining electrodes using properties of the 10-20 electrode placement system and the proposed nine-ellipses model. This allows computing a sequence of 3D EEG maps spanning all electrodes. It happens, however, that not all five base electrodes can be reliably identified in the SPECT data. The aim of the current study was to develop a method for determining the coordinates of base electrodes missing from the SPECT image. An algorithm for coordinate approximation was developed and tested on data collected from three subjects with all electrodes visible. To increase the accuracy of the approximation we used head surface models: the freely available model from Oostenveld's research, based on data from the SPM package, and our own model, based on data from our EEG/SPECT studies. For data collected in four cases with one electrode not visible, we compared the approximated coordinates of the invisible base electrode under the Oostenveld model and our own model. The results vary depending on the placement of the missing electrode, but applying a realistic head model significantly increases the accuracy of the approximation.

  12. Test Equipment and Method to Characterize a SWIR Digital Imaging System

    DTIC Science & Technology

    2014-06-01

    based on Gallium Arsenide (GaAs) detectors are sensitive in the visible and near infrared (NIR) bands, and used only at night. They produce images from...current from the silicon sensor located on the sphere. The irradiance responsivity, Rn, is the ratio of the silicon detector current and the absolute...silicon detector currents, in accordance with equation 1.

  13. Study on ice cloud optical thickness retrieval with MODIS IR spectral bands

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Li, Jun

    2005-01-01

    The operational Moderate-Resolution Imaging Spectroradiometer (MODIS) products for cloud properties such as cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), cloud optical thickness (COT), and cloud phase (CP) have been available to users globally. An approach to retrieving COT is investigated using the MODIS infrared (IR) window spectral bands (8.5 μm, 11 μm, and 12 μm). COT retrieval from the MODIS IR bands has the potential to provide microphysical properties with high spatial resolution at night. The results are compared with those from operational MODIS products derived from the visible (VIS) and near-infrared (NIR) bands during the day. The sensitivity of COT to MODIS spectral brightness temperature (BT) and BT difference (BTD) values is studied. A look-up table is created from a cloudy radiative transfer model accounting for cloud absorption and scattering for the cloud microphysical property retrieval. Potential applications and limitations are also discussed. The algorithm can be applied to future imager systems such as the Visible/Infrared Imager/Radiometer Suite (VIIRS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Advanced Baseline Imager (ABI) on the Geostationary Operational Environmental Satellite (GOES)-R.
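The look-up-table step can be illustrated as a nearest-neighbour match between observed brightness temperatures and model-simulated ones. The feature choice (the 11 μm BT plus the two band differences) and all names below are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def retrieve_cot(obs_bt, cot_grid, lut_bt):
    """Pick the COT whose simulated BTs best match the observation.
    obs_bt   : observed BTs (K) at [8.5, 11, 12] micron, shape (3,)
    cot_grid : candidate cloud optical thickness values, shape (n,)
    lut_bt   : model-simulated BTs for each candidate COT, shape (n, 3)
    """
    def feats(bt):
        # 11-micron BT plus the 8.5-11 and 11-12 micron BT differences
        return np.stack([bt[..., 1],
                         bt[..., 0] - bt[..., 1],
                         bt[..., 1] - bt[..., 2]], axis=-1)
    d = np.linalg.norm(feats(lut_bt) - feats(np.asarray(obs_bt)), axis=1)
    return cot_grid[np.argmin(d)]
```

In practice the LUT would come from the cloudy radiative-transfer model mentioned above, with one row per candidate COT.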

  14. Global Ultraviolet Imaging Processing for the GGS Polar Visible Imaging System (VIS)

    NASA Technical Reports Server (NTRS)

    Frank, L. A.

    1997-01-01

    The Visible Imaging System (VIS) on Polar spacecraft of the NASA Goddard Space Flight Center was launched into orbit about Earth on February 24, 1996. Since shortly after launch, the Earth Camera subsystem of the VIS has been operated nearly continuously to acquire far ultraviolet, global images of Earth and its northern and southern auroral ovals. The only exceptions to this continuous imaging occurred for approximately 10 days at the times of the Polar spacecraft re-orientation maneuvers in October, 1996 and April, 1997. Since launch, approximately 525,000 images have been acquired with the VIS Earth Camera. The VIS instrument operational health continues to be excellent. Since launch, all systems have operated nominally with all voltages, currents, and temperatures remaining at nominal values. In addition, the sensitivity of the Earth Camera to ultraviolet light has remained constant throughout the operation period. Revised flight software was uploaded to the VIS in order to compensate for the spacecraft wobble. This is accomplished by electronic shuttering of the sensor in synchronization with the 6-second period of the wobble, thus recovering the original spatial resolution obtainable with the VIS Earth Camera. In addition, software patches were uploaded to make the VIS immune to signal dropouts that occur in the sliprings of the despun platform mechanism. These changes have worked very well. The VIS and in particular the VIS Earth Camera is fully operational and will continue to acquire global auroral images as the sun progresses toward solar maximum conditions after the turn of the century.

  15. Discrete Walsh Hadamard transform based visible watermarking technique for digital color images

    NASA Astrophysics Data System (ADS)

    Santhi, V.; Thangavelu, Arunkumar

    2011-10-01

    As the size of the Internet grows enormously, illegal manipulation of digital multimedia data has become very easy with advances in technology tools. To protect such multimedia data from unauthorized access, digital watermarking systems are used. In this paper a new Discrete Walsh-Hadamard Transform based visible watermarking system is proposed. As the watermark is embedded in the transform domain, the system is robust to many signal-processing attacks. Moreover, in the proposed method the watermark is embedded in a tiled manner across the full range of frequencies to make it robust to compression and cropping attacks. The robustness of the algorithm is tested against noise addition, cropping, compression, histogram equalization, and resizing attacks. The experimental results show that the algorithm is robust to common signal-processing attacks, and the observed peak signal-to-noise ratio (PSNR) of the watermarked image varies from 20 to 30 dB depending on the size of the watermark.
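A minimal sketch of transform-domain visible embedding with tiling, assuming a square power-of-two watermark block, Sylvester-constructed Hadamard matrices, and invented blend weights alpha/beta (the paper's exact embedding rule is not given in the abstract):

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix by Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def embed_visible(host, mark, alpha=0.9, beta=0.1):
    """Blend the watermark into every non-overlapping block (tiling)
    of the host image in the Walsh-Hadamard domain."""
    n = mark.shape[0]
    H = hadamard(n)
    wht = lambda A: H @ A @ H / n        # self-inverse 2D transform
    Wm = wht(mark.astype(float))
    out = host.astype(float).copy()
    for i in range(0, host.shape[0] - n + 1, n):
        for j in range(0, host.shape[1] - n + 1, n):
            B = wht(out[i:i + n, j:j + n])
            out[i:i + n, j:j + n] = wht(alpha * B + beta * Wm)
    return np.clip(out, 0, 255)
```

Computing the PSNR between the host and the output would then quantify watermark visibility, which is where a 20-30 dB figure such as the one reported could be measured.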

  16. A polarization sensitive hyperspectral imaging system for detection of differences in tissue properties

    NASA Astrophysics Data System (ADS)

    Peller, Joseph A.; Ceja, Nancy K.; Wawak, Amanda J.; Trammell, Susan R.

    2018-02-01

    Polarized light imaging and optical spectroscopy can be used to distinguish between healthy and diseased tissue. In this study, the design and testing of a single-pixel hyperspectral imaging system that uses differences in the polarization of light reflected from tissue to differentiate between healthy and thermally damaged tissue is discussed. Thermal lesions were created in porcine skin samples (n = 8) using an IR laser. The damaged regions were clearly visible in the polarized light hyperspectral images. Reflectance hyperspectral and white light images were also obtained for all tissue samples. The sizes of the thermally damaged regions as measured via polarized light hyperspectral imaging are compared to the sizes of these regions as measured in the reflectance hyperspectral images and white light images. Good agreement between the sizes measured by all three imaging modalities was found. Hyperspectral polarized light imaging can differentiate between healthy and damaged tissue. Possible applications of this imaging system include determination of tumor margins during cancer surgery or pre-surgical biopsy.

  17. Development of Flight Slit-Jaw Optics for Chromospheric Lyman-Alpha SpectroPolarimeter

    NASA Technical Reports Server (NTRS)

    Kubo, Masahito; Suematsu, Yoshinori; Kano, Ryohei; Bando, Takamasa; Hara, Hirohisa; Narukage, Noriyuki; Katsukawa, Yukio; Ishikawa, Ryoko; Ishikawa, Shin-nosuke; Kobiki, Toshihiko; hide

    2015-01-01

    In the CLASP sounding rocket experiment, a mirror-finished slit is placed near the focal point of the telescope. Light reflected by the mirror surface surrounding the slit is imaged by the slit-jaw optical system to form a secondary Lyman-alpha image. This image is used not only for real-time selection of the rocket pointing direction during flight, but also as scientific data showing the spatial structure of the Lyman-alpha line intensity distribution and of the solar chromosphere around the area observed by the spectropolarimeter. The slit-jaw optical system consists of a mirror unit containing two off-axis mirrors (a parabolic mirror and a folding mirror), a Lyman-alpha transmission filter, and a camera, forming a unit-magnification (1x) optical system. The camera was supplied by the United States; all other fabrication and testing was carried out on the Japanese side. Because the slit-jaw optical system is installed in a location with little clearance and is difficult to access, the optical elements that influence the optical performance and require fine adjustment are gathered into the mirror unit. On the other hand, because of the alignment of the solar sensor at the US launch site, the Lyman-alpha transmission filter holder, including the filter, must be removable once installed and is therefore a separate part from the mirror unit. To keep the structure simple, the stray-light countermeasures are concentrated around the Lyman-alpha transmission filter. To overcome the difficulty of performing optical alignment at the Lyman-alpha wavelength, which is absorbed by the atmosphere, the following four steps were planned to reduce the alignment time: 1. Measure in advance the refractive index of the Lyman-alpha transmission filter at the Lyman-alpha wavelength (121.567 nm), and prepare a visible-light filter having the same optical path length at a visible wavelength (630 nm).
2. Before mounting the mirror unit on the CLASP structure, set a dummy slit and camera at their prescribed positions in an alignment frame and complete the internal alignment of the unit. 3. Attach the mirror unit to the CLASP structure with the visible-light filter installed, and adjust the focus position of the flight camera in visible light. 4. Replace the visible-light filter with the Lyman-alpha transmission filter and confirm, at the Lyman-alpha wavelength (under vacuum), that the required optical performance is achieved. At present, steps 1 through 3 are complete, and it has been confirmed in visible light that the optical performance satisfies the required values with sufficient margin. In addition, by feeding sunlight into the slit-jaw optical system through the CLASP telescope, it has been confirmed that there is no vignetting in the field of view and that the stray-light rejection meets the requirement.

  19. Mustang Complex Fires in Idaho

    NASA Image and Video Library

    2017-12-08

    On August 29, 2012, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured this nighttime view of wildfires burning in Idaho and Montana. The image was captured by the VIIRS "day-night band," which detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe signals such as gas flares, auroras, wildfires, city lights, and reflected moonlight. When the image was acquired, the moon was in its waxing gibbous phase, meaning it was more than half-lit, but less than full. Numerous hot spots from the Mustang Complex Fire are visible in northern Idaho. A plume of thick, billowing smoke streams west from the brightest fires near the Idaho-Montana border. The Halstead and Trinity Ridge fires are visible to the south. In addition to the fires, city lights from Boise and other smaller cities appear throughout the image. A bank of clouds is located west of the Mustang Complex, over southeastern Washington and northeastern Oregon. The Operational Linescan System (OLS), an earlier generation of night-viewing sensors on the U.S. Defense Meteorological Satellite Program (DMSP) satellites, was also capable of detecting fires at night, but the VIIRS "day-night band" is far better than OLS at resolving them: each pixel of a VIIRS image covers roughly 740 meters (0.46 miles), compared to the 3-kilometer (1.86-mile) footprint of the OLS. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Adam Voiland. Instrument: Suomi NPP - VIIRS Credit: NASA Earth Observatory

  20. Microelectromechanical systems-based visible-near infrared Fabry-Perot tunable filters using quartz substrate

    NASA Astrophysics Data System (ADS)

    Gupta, Neelam; Tan, Songsheng; Zander, Dennis R.

    2012-07-01

    There is a need to develop miniature optical tunable filters for small hyperspectral imagers. We plan to develop a number of miniature Fabry-Perot tunable filters (FPTFs) using microelectromechanical systems (MEMS) technology, each operating over a different wavelength region, to cover spectral regions from the visible to the longwave infrared (IR). Use of a MEMS-based FPTF as a dispersive element will reduce the size, weight, and power requirements of hyperspectral imagers and make them less expensive. A key requirement for such a filter is a large optical aperture. Recently, we succeeded in fabricating FPTFs with a 6 mm optical aperture operating in the visible to near IR spectral region (400 to 800 nm) using commercially available thin quartz wafers as the substrate. The FPTF design contains one fixed silver (Ag) mirror and one electrostatically movable Ag mirror, each grown on a quartz substrate with a low total thickness variation. Gold (Au) bumps are used to control the initial air gap distance between the two mirrors, and Au-Au bonding is used to bond the device. We describe material selection, device design, modeling, fabrication, interferometric, and spectral characterizations.
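The tuning principle of such a filter can be illustrated with the ideal Airy transmission of a two-mirror cavity; this is a simplification (real silver mirrors absorb, so actual peak transmission is lower), and the reflectance R below is an assumed value, not a measured property of the device:

```python
import numpy as np

def fp_transmission(wavelength_nm, gap_nm, R=0.9):
    """Ideal Fabry-Perot transmission at normal incidence for an
    air gap between two lossless mirrors of reflectance R."""
    delta = 4.0 * np.pi * gap_nm / wavelength_nm   # round-trip phase
    F = 4.0 * R / (1.0 - R) ** 2                   # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2.0) ** 2)

# Moving the electrostatic mirror changes the gap and shifts the passband:
# a 300 nm gap transmits 600 nm in first order (2 * gap = m * wavelength).
```

Sweeping the gap over a few hundred nanometers walks the transmission peak across the 400 to 800 nm operating band described above.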

  1. NASA Sees Major Winter Storm Headed for Eastern U.S.

    NASA Image and Video Library

    2017-12-08

    On Jan. 20 at 2:30 p.m. EST the VIIRS instrument aboard NASA-NOAA's Suomi NPP captured this image of the winter storm moving through the central U.S. Credits: NASA Goddard Rapid Response The low pressure area from the Eastern Pacific Ocean moved into the western U.S. and tracked across the four corners region into Texas where NASA-NOAA's Suomi NPP satellite observed the clouds associated with the storm. The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP satellite captured the visible image on January 20, 2016 at 19:30 UTC (2:30 p.m. EST) when the storm was over the central U.S. In the image, snow cover is visible in the Rockies and southern Great Lakes states. VIIRS collects visible and infrared imagery and global observations of land, atmosphere, cryosphere and oceans. That low pressure system located over the south central United States on Jan. 21 is expected to track east across the Tennessee Valley and will give way to a deepening coastal low pressure area. The National Weather Service said "This latter feature takes over and becomes a dominant force in setting up heavy snow bands over the Mid-Atlantic and very gusty winds." The storm system is expected to bring an increased risk of severe weather from far southeastern Texas across southern Louisiana/Mississippi, and into the far western Florida Panhandle on Thursday, Jan. 21. That threat for severe weather will move east as the low pressure area continues heading in that direction. The National Weather Service Weather Prediction Center in College Park, Maryland said "A potentially crippling winter storm is anticipated for portions of the mid-Atlantic Friday into early Saturday. Snowfall may approach two feet for some locations, including the Baltimore and Washington, D.C. metro areas. Farther north, there is uncertainty in snowfall for the New York City-to-Boston corridor. Farther south, significant icing is likely for portions of Kentucky and North Carolina." 

  2. Walk through screening with multistatic mmW technology

    NASA Astrophysics Data System (ADS)

    Gumbmann, Frank; Ahmed, Sherif Sayed

    2016-10-01

    Active imaging systems for security screening at airports and other checkpoints have proven to offer good results. Present systems require a specific position and posture, or a specific movement, of the passenger in front of the imaging system. Walk-through systems (WTS), which screen passengers while they pass the imaging system or a screening hallway, would be more pleasant for the passenger and would greatly improve throughput. Furthermore, detection performance could be enhanced, since possible threats are visible from different perspectives and can be tracked across different frames; the combination of all frames is equivalent to a full illumination of the passenger. This paper presents the concept of a WTS based on a multistatic imaging system in the mmW range. The benefit is that the technology of existing portals can be reused and updated to a WTS. First results are demonstrated with an experimental system.

  3. Early On-Orbit Performance of the Visible Infrared Imaging Radiometer Suite Onboard the Suomi National Polar-Orbiting Partnership (S-NPP) Satellite

    NASA Technical Reports Server (NTRS)

    Cao, Changyong; DeLuccia, Frank J.; Xiong, Xiaoxiong; Wolfe, Robert; Weng, Fuzhong

    2014-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) is one of the key environmental remote-sensing instruments onboard the Suomi National Polar-Orbiting Partnership spacecraft, which was successfully launched on October 28, 2011 from the Vandenberg Air Force Base, California. Following a series of spacecraft and sensor activation operations, the VIIRS nadir door was opened on November 21, 2011. The first VIIRS image acquired signifies a new generation of operational moderate-resolution imaging capabilities, following the legacy of the Advanced Very High Resolution Radiometer series on NOAA satellites and the Terra and Aqua Moderate-Resolution Imaging Spectroradiometer of NASA's Earth Observing System. VIIRS provides significant enhancements to operational environmental monitoring and numerical weather forecasting, with 22 imaging and radiometric bands covering wavelengths from 0.41 to 12.5 microns, providing the sensor data records for 23 environmental data records including aerosol, cloud properties, fire, albedo, snow and ice, vegetation, sea surface temperature, ocean color, and night-time visible-light-related applications. Preliminary results from the on-orbit verification in the postlaunch check-out and intensive calibration and validation have shown that VIIRS is performing well and producing high-quality images. This paper provides an overview of the on-orbit performance of VIIRS and the calibration/validation (cal/val) activities and methodologies used. It presents an assessment of the sensor's initial on-orbit calibration and performance based on the efforts of the VIIRS-SDR team. Known anomalies, issues, and future calibration efforts, including long-term monitoring and intercalibration, are also discussed.

  4. Development of high energy micro-tomography system at SPring-8

    NASA Astrophysics Data System (ADS)

    Uesugi, Kentaro; Hoshino, Masato

    2017-09-01

A high-energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available energy range is between 20 keV and 113 keV with a Si (511) double-crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of a visible-light conversion system and an sCMOS camera. The effective pixel size is variable in discrete steps between 6.5 μm/pixel and 25.5 μm/pixel by changing a tandem lens. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, an alkaline battery and a nodule from Bolivia were imaged. Details of the internal structure of the battery and a female mold of a trilobite were successfully imaged without breaking the fossils.

  5. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, a pixel pitch of 38 µm, a focal length of 1.8 m, and a FOV of 5.4x5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  6. Lander, Airbags, & Martian terrain

    NASA Technical Reports Server (NTRS)

    1997-01-01

Several objects have been imaged by the Imager for Mars Pathfinder (IMP) during the spacecraft's first day on Mars. Portions of the deflated airbags, part of one of the lander's petals, soil, and several rocks are visible. The furrows in the soil were artificially produced by the retraction of the airbags after landing, which occurred at 10:07 a.m. PDT.

The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.

  7. Optical Design of COATLI: A Diffraction-Limited Visible Imager with Fast Guiding and Active Optics Correction

    NASA Astrophysics Data System (ADS)

    Fuentes-Fernández, J.; Cuevas, S.; Watson, A. M.

    2018-04-01

We present the optical design of COATLI, a two-channel visible imager for a commercial 50 cm robotic telescope. COATLI will deliver diffraction-limited images (approximately 0.3 arcsec FWHM) in the riz bands, inside a 4.2 arcmin field, and seeing-limited images (approximately 0.6 arcsec FWHM) in the B and g bands, inside a 5 arcmin field, by means of a tip-tilt mirror for fast guiding and a deformable mirror for active optics, both located on two optically transferred pupil planes. The optical design is based on two collimator-camera systems plus a pupil transfer relay, using achromatic doublets of CaF2 and S-FTM16 and one triplet of N-BK7 and CaF2. We discuss the efficiency, tolerancing, thermal behavior and ghosts. COATLI will be installed at the Observatorio Astronómico Nacional in Sierra San Pedro Mártir, Baja California, Mexico, in 2018.

  8. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
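The enhancement and fusion stages described in this abstract can be sketched in a few lines. The following is a generic single-scale Retinex and fixed-weight fusion, assuming registered grayscale inputs; it is an illustration of the idea, not the real-time DSP implementation flown on the ARIES 757.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=30.0):
    """Single-scale Retinex sketch: log of the image minus the log of its
    Gaussian-blurred surround.  sigma is an illustrative assumption."""
    img = img.astype(np.float64) + 1.0          # offset avoids log(0)
    surround = gaussian_filter(img, sigma)
    return np.log(img) - np.log(surround)

def weighted_sum_fusion(a, b, w=0.5):
    """Fuse two registered images with a fixed weighted sum."""
    return w * a + (1.0 - w) * b
```

In practice the two camera streams would each be enhanced, registered by an affine transform, and then combined with `weighted_sum_fusion`.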

  9. Passive millimeter wave simulation in blender

    NASA Astrophysics Data System (ADS)

    Murakowski, Maciej

    Imaging in the millimeter wave (mmW) frequency range is being explored for applications where visible or infrared (IR) imaging fails, such as through atmospheric obscurants. However, mmW imaging is still in its infancy and imager systems are still bulky, expensive, and fragile, so experiments on imaging in real-world scenarios are difficult or impossible to perform. Therefore, a simulation system capable of predicting mmW phenomenology would be valuable in determining the requirements (e.g. resolution or noise floor) of an imaging system for a particular scenario and aid in the design of such an imager. Producing simulation software for this purpose is the objective of the work described in this thesis. The 3D software package Blender was modified to simulate the images produced by a passive mmW imager, based on a Geometrical Optics approach. Simulated imagery was validated against experimental data and the software was applied to novel imaging scenarios. Additionally, a database of material properties for use in the simulation was collected.

  10. Candidate cave entrances on Mars

    USGS Publications Warehouse

    Cushing, Glen E.

    2012-01-01

    This paper presents newly discovered candidate cave entrances into Martian near-surface lava tubes, volcano-tectonic fracture systems, and pit craters and describes their characteristics and exploration possibilities. These candidates are all collapse features that occur either intermittently along laterally continuous trench-like depressions or in the floors of sheer-walled atypical pit craters. As viewed from orbit, locations of most candidates are visibly consistent with known terrestrial features such as tube-fed lava flows, volcano-tectonic fractures, and pit craters, each of which forms by mechanisms that can produce caves. Although we cannot determine subsurface extents of the Martian features discussed here, some may continue unimpeded for many kilometers if terrestrial examples are indeed analogous. The features presented here were identified in images acquired by the Mars Odyssey's Thermal Emission Imaging System visible-wavelength camera, and by the Mars Reconnaissance Orbiter's Context Camera. Select candidates have since been targeted by the High-Resolution Imaging Science Experiment. Martian caves are promising potential sites for future human habitation and astrobiology investigations; understanding their characteristics is critical for long-term mission planning and for developing the necessary exploration technologies.

  11. Visible and infrared imaging radiometers for ocean observations

    NASA Technical Reports Server (NTRS)

    Barnes, W. L.

    1977-01-01

The current status of visible and infrared sensors designed for the remote monitoring of the oceans is reviewed. Emphasis is placed on multichannel scanning radiometers that are either operational or under development. Present design practices and parameter constraints are discussed. Airborne sensor systems examined include the ocean color scanner and the ocean temperature scanner. The coastal zone color scanner and advanced very high resolution radiometer are reviewed with emphasis on design specifications. Recent technological advances and their impact on sensor design are examined.

  12. Tile survey seen during EVA 3

    NASA Image and Video Library

    2005-08-03

S114-E-6388 (3 August 2005) --- A close-up view of a portion of the thermal protection tiles on Space Shuttle Discovery’s underside is featured in this image photographed by astronaut Stephen K. Robinson, STS-114 mission specialist, during the mission’s third session of extravehicular activities (EVA). Robinson’s shadow is visible on the thermal protection tiles; a portion of the Canadian-built remote manipulator system (RMS) robotic arm and the Nile River are visible at bottom.

  13. Improved discrimination among similar agricultural plots using red-and-green-based pseudo-colour imaging

    NASA Astrophysics Data System (ADS)

    Doi, Ryoichi

    2016-04-01

The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and that provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB), and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 10^18-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.
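The core idea of multiplying one colour image into several derivative grayscale channels can be illustrated with a CMYK decomposition. The function name and the [0, 1] input convention below are assumptions for this sketch; it is not the authors' processing chain, which also derives L*, a*, and b* channels.

```python
import numpy as np

def cmyk_channels(rgb):
    """Derive C, M, Y, K grayscale channels from an RGB image in [0, 1].
    Each returned array is one extra 'view' of the same scene that a
    classifier can use as an additional feature band."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    k = 1.0 - np.max(rgb, axis=-1)
    denom = np.where(k < 1.0, 1.0 - k, 1.0)     # avoid divide-by-zero on black
    c = (1.0 - r - k) / denom
    m = (1.0 - g - k) / denom
    y = (1.0 - b - k) / denom
    return c, m, y, k
```

Stacking such channels alongside the original bands is what increases the dimensionality of the data available for plot discrimination.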

  14. Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †

    PubMed Central

    Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi

    2016-01-01

During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781

  15. A model-based approach for detection of runways and other objects in image sequences acquired using an on-board camera

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang

    1994-01-01

    This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.

  16. Archeological Surveys

    NASA Technical Reports Server (NTRS)

    1978-01-01

NASA remote sensing technology is being employed in archeological studies of the Anasazi Indians, who lived in New Mexico one thousand years ago. Under contract with the National Park Service, NASA's Technology Applications Center at the University of New Mexico is interpreting multispectral scanner data and demonstrating how aerospace scanning techniques can uncover features of prehistoric ruins not visible in conventional aerial photographs. The Center's initial study focused on Chaco Canyon, a pre-Columbian Anasazi site in northwestern New Mexico. Chaco Canyon is a national monument and it has been well explored on the ground and by aerial photography. But the National Park Service was interested in the potential of multispectral scanning for producing evidence of prehistoric roads, field patterns and dwelling areas not discernible in aerial photographs. The multispectral scanner produces imaging data in the invisible as well as the visible portions of the spectrum. This data is converted to pictures which bring out features not visible to the naked eye or to cameras. The Technology Applications Center joined forces with Bendix Aerospace Systems Division, Ann Arbor, Michigan, which provided a scanner-equipped airplane for mapping the Chaco Canyon area. The NASA group processed the scanner images and employed computerized image enhancement techniques to bring out additional detail.

  17. Florida Everglades

    NASA Image and Video Library

    2017-12-08

A "river of grass" extending south of Lake Okeechobee shows how the area was modified by man with visible areas of dense agriculture, urban sprawl and water conservation areas delineated by a series of waterways that crisscross Southern Florida. The image was created March 18-24, 2013 from the Visible-Infrared Imager/Radiometer Suite (VIIRS) instrument aboard the Suomi National Polar-orbiting Partnership or Suomi NPP satellite, a partnership between NASA and the National Oceanic and Atmospheric Administration, or NOAA. Credit: NASA/NOAA. To read more go to: www.nasa.gov/mission_pages/NPP/news/vegetation.html

  18. Munsell color analysis of Landsat color-ratio-composite images of limonitic areas in southwest New Mexico

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1985-01-01

The causes of color variations in the green areas on Landsat 4/5-4/6-6/7 (red-blue-green) color-ratio-composite (CRC) images, defined as limonitic areas, were investigated by analyzing the CRC images of the Lordsburg, New Mexico area. The red-blue-green additive color system was mathematically transformed into the cylindrical Munsell color coordinates (hue, saturation, and value), and selected areas were digitally analyzed for color variation. The obtained precise color characteristics were then correlated with properties of the surface material. The amount of limonite (L) visible to the sensor was found to be the primary cause of the observed color differences. The visible L is, in turn, affected by the amount of L on the material's surface and by within-pixel mixing of limonitic and nonlimonitic materials. The secondary cause of variation was vegetation density, which shifted CRC hues towards yellow-green, decreased saturation, and increased value.
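The transformation from the additive red-blue-green system to cylindrical hue/saturation/value coordinates can be illustrated with the standard library's HSV conversion, used here as a simplified stand-in for a true Munsell renotation; the pixel values are invented.

```python
import colorsys

# A greenish CRC pixel (made-up band-ratio values, scaled to 0-1 floats).
r, g, b = 0.2, 0.6, 0.3

# Cylindrical coordinates: hue in [0, 1), saturation and value in [0, 1].
h, s, v = colorsys.rgb_to_hsv(r, g, b)
```

Analyzing hue, saturation, and value separately is what lets shifts toward yellow-green (hue), desaturation, and brightening be attributed to distinct surface causes.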

  19. Typhoon Neoguri Approaching Japan

    NASA Image and Video Library

    2014-07-09

NASA's Terra satellite captured this visible image on July 9 at 02:30 UTC (July 8 at 10:30 p.m. EDT) as Typhoon Neoguri was approaching Kyushu, Japan. The visible image revealed that Neoguri's eye had disappeared and the center had become somewhat elongated as the storm weakened into a tropical storm. The Joint Typhoon Warning Center (JTWC) noted that an upper-level analysis revealed that Neoguri is now in a harsher environment, as northerly vertical wind shear increased to as much as 30 knots. Credit: NASA/GSFC/Jeff Schmaltz/MODIS Land Rapid Response

  20. Geologic remote sensing - New technology, new information

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1992-01-01

Results of geologic studies using data collected by the NASA/JPL Thermal Infrared Imaging Spectrometer (TIMS), Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), and the Airborne Synthetic Aperture Radar (AIRSAR) are discussed. These instruments represent prototypes for the Earth Observing System (EOS) satellite instruments ASTER, High Resolution Imaging Spectrometer (HIRIS), and EOS SAR. Integrated analysis of this data type is one of the keys to successful geologic research using EOS. TIMS links the physical properties of surface materials in the 8-12 µm region to their composition. Calibrated aircraft data make direct lithological mapping possible. AVIRIS, an analog for HIRIS, provides quantitative information about the surface composition of materials based on their detailed visible and infrared spectral signatures (0.4-2.45 µm). Calibrated AVIRIS data make direct identification of minerals possible. The AIRSAR provides additional complementary information about the surface morphology of rocks and soils.

  1. Combined optical coherence tomography and hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Attendu, Xavier; Guay-Lord, Robin; Strupler, Mathias; Godbout, Nicolas; Boudoux, Caroline

    2017-02-01

In this proceeding we demonstrate a system combining optical coherence tomography (OCT) and hyper-spectral imaging (HSI) into a single dual-clad fiber (DCF). Combining these modalities gives access to the sample morphology through OCT and to its molecular content through HSI. Both modalities have their illumination delivered through the fiber core. The OCT signal is then collected through the core, while the HSI signal is collected through the inner cladding of the DCF. A double-clad fiber coupler (DCFC) is used to address both channels separately. A scanning spectral filter was developed to successively inject narrow spectral bands of visible light into the fiber core and sweep across the entire visible spectrum. This allows for rapid HSI acquisition and high miniaturization potential.

  2. Geometric and radiometric preprocessing of airborne visible/infrared imaging spectrometer (AVIRIS) data in rugged terrain for quantitative data analysis

    NASA Technical Reports Server (NTRS)

    Meyer, Peter; Green, Robert O.; Staenz, Karl; Itten, Klaus I.

    1994-01-01

A geocoding procedure for remotely sensed data of airborne systems in rugged terrain is affected by several factors: buffeting of the aircraft by turbulence, variations in ground speed, changes in altitude, attitude variations, and surface topography. The current investigation was carried out with an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene of central Switzerland (Rigi) from NASA's Multi Aircraft Campaign (MAC) in Europe (1991). The parametric approach reconstructs for every pixel the observation geometry based on the flight line, aircraft attitude, and surface topography. To utilize the data for analysis of materials on the surface, the AVIRIS data are corrected to apparent reflectance using algorithms based on MODTRAN (the moderate-resolution atmospheric transmission code).

  3. High-Efficiency, Near-Diffraction Limited, Dielectric Metasurface Lenses Based on Crystalline Titanium Dioxide at Visible Wavelengths.

    PubMed

    Liang, Yaoyao; Liu, Hongzhan; Wang, Faqiang; Meng, Hongyun; Guo, Jianping; Li, Jinfeng; Wei, Zhongchao

    2018-04-28

Metasurfaces are planar optical elements that hold promise for overcoming the limitations of refractive and conventional diffractive optics. Previous metasurfaces have been limited to transparency windows at infrared wavelengths because of significant optical absorption and loss at visible wavelengths. Here we report a polarization-insensitive, high-contrast transmissive metasurface composed of crystalline titanium dioxide pillars in the form of a metalens at the wavelength of 633 nm. The focal spots are as small as 0.54 λd, which is very close to the optical diffraction limit of 0.5 λd. The simulated focusing efficiency is up to 88.5%. A rigorous method for metalens design, the phase realization mechanism, and the trade-off between high efficiency and small spot size (or large numerical aperture) are discussed. In addition, the metalenses work well with an imaging point source up to ±15° off axis. The proposed design is relatively systematic and can be applied to various applications such as visible imaging, ranging and sensing systems.

  4. An efficient approach for site-specific scenery prediction in surveillance imaging near Earth's surface

    NASA Astrophysics Data System (ADS)

    Jylhä, Juha; Marjanen, Kalle; Rantala, Mikko; Metsäpuro, Petri; Visa, Ari

    2006-09-01

Surveillance camera automation and camera network development are growing areas of interest. This paper proposes a competent approach to enhance camera surveillance with Geographic Information Systems (GIS) when the camera is located at a height of 10-1000 m. A digital elevation model (DEM), a terrain class model, and a flight obstacle register comprise the exploited auxiliary information. The approach takes into account the spherical shape of the Earth and realistic terrain slopes. Accordingly, also considering forests, it determines visible and shadow regions. The efficiency arises from reduced dimensionality in the visibility computation. Image processing is aided by predicting certain advance features of the visible terrain. The features include distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background is well-fitting, and the potential as a knowledge aid for various purposes becomes apparent.
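The visibility determination can be reduced to a one-dimensional sketch: walk outward along a terrain profile and keep the maximum elevation angle seen so far. This omits Earth curvature, forests, and the full 2-D DEM handling of the paper, and the observer height is an arbitrary assumption.

```python
import numpy as np

def visible_along_profile(elev, observer_height=10.0):
    """Mark which samples of a 1-D terrain elevation profile are visible
    from an observer above the first sample.  A sample is visible if its
    elevation angle is at least the maximum angle seen so far."""
    eye = elev[0] + observer_height
    visible = np.zeros(len(elev), dtype=bool)
    visible[0] = True
    max_tan = -np.inf
    for i in range(1, len(elev)):
        tan_angle = (elev[i] - eye) / i     # distance measured in sample units
        if tan_angle >= max_tan:
            visible[i] = True
            max_tan = tan_angle
    return visible
```

Running such scans along radial profiles from the camera location is one common way to build the visible/shadow regions of a viewshed.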

  5. Multispectral optical telescope alignment testing for a cryogenic space environment

    NASA Astrophysics Data System (ADS)

    Newswander, Trent; Hooser, Preston; Champagne, James

    2016-09-01

Multispectral space telescopes with visible to long-wave infrared spectral bands present difficult alignment challenges. The visible channels require precision in alignment and stability to provide good image quality at short wavelengths. This is most often accomplished by choosing near-zero-thermal-expansion glass or ceramic mirrors metered with carbon fiber reinforced polymer (CFRP) designed to have a matching thermal expansion. The IR channels are less sensitive to alignment, but they often require cryogenic cooling for improved sensitivity with the reduced radiometric background. Finding efficient solutions to this difficult problem of maintaining good visible image quality at cryogenic temperatures has been explored through the building and testing of a telescope simulator. The telescope simulator is an on-axis set of optics with a ZERODUR® mirror and CFRP metering. Testing has been completed to accurately measure telescope optical element alignment and mirror figure changes in a cryogenic space-simulated environment. Measured alignment error and mirror figure error test results are reported with a discussion of their impact on system optical performance.

  6. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. The focus of human attention is directed towards salient targets, which carry the most important information in the image. For the given registered infrared and visible images, first, visual features are extracted to obtain the input hypercomplex matrix. Second, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the filtered amplitude spectrum, at a scale selected by minimizing saliency map entropy. Third, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures the thermal target information at different scales of the infrared image.
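The frequency-domain saliency step can be sketched for a single grayscale channel (the paper operates on a hypercomplex multi-feature matrix, and selects the scale by minimizing saliency-map entropy; here the smoothing scale is an arbitrary assumption).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(img, scale=3.0):
    """Single-channel sketch of frequency-domain saliency: smooth the
    amplitude spectrum with a low-pass Gaussian kernel, reconstruct with
    the original phase, square, and smooth the result."""
    f = np.fft.fft2(img)
    amplitude = np.abs(f)
    phase = np.angle(f)
    smoothed = gaussian_filter(amplitude, scale)
    recon = np.fft.ifft2(smoothed * np.exp(1j * phase))
    return gaussian_filter(np.abs(recon) ** 2, scale)

# Demo: a small bright patch on a dark background.
demo = np.zeros((32, 32))
demo[10:14, 10:14] = 1.0
saliency = spectral_saliency(demo)
```

Thresholding such a map is one way to split an image into the salient and non-salient regions that the two fusion rules then handle separately.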

  7. SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography

    PubMed Central

    Holloway, Jason; Wu, Yicheng; Sharma, Manoj K.; Cossairt, Oliver; Veeraraghavan, Ashok

    2017-01-01

    Synthetic aperture radar is a well-known technique for improving resolution in radio imaging. Extending these synthetic aperture techniques to the visible light domain is not straightforward because optical receivers cannot measure phase information. We propose to use macroscopic Fourier ptychography (FP) as a practical means of creating a synthetic aperture for visible imaging to achieve subdiffraction-limited resolution. We demonstrate the first working prototype for macroscopic FP in a reflection imaging geometry that is capable of imaging optically rough objects. In addition, a novel image space denoising regularization is introduced during phase retrieval to reduce the effects of speckle and improve perceptual quality of the recovered high-resolution image. Our approach is validated experimentally where the resolution of various diffuse objects is improved sixfold. PMID:28439550

  8. Image motion compensation on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Tarbell, T. D.; Duncan, D. W.; Finch, M. L.; Spence, G.

    1981-01-01

    The SOUP experiment on Spacelab 2 includes a 30 cm visible light telescope and focal plane package mounted on the Instrument Pointing System (IPS). Scientific goals of the experiment dictate pointing stability requirements of less than 0.05 arcsecond jitter over periods of 5-20 seconds. Quantitative derivations of these requirements from two different aspects are presented: (1) avoidance of motion blurring of diffraction-limited images; (2) precise coalignment of consecutive frames to allow measurement of small image differences. To achieve this stability, a fine guider system capable of removing residual jitter of the IPS and image motions generated on the IPS cruciform instrument support structure has been constructed. This system uses solar limb detectors in the prime focal plane to derive an error signal. Image motion due to pointing errors is compensated by the agile secondary mirror mounted on piezoelectric transducers, controlled by a closed-loop servo system.

  9. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems II. Extension to the thermal infrared: equations and methods

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Lomheim, Terrence S.; Florio, Christopher J.; Harbold, Jeffrey M.; Muto, B. Michael; Schoolar, Richard B.; Wintz, Daniel T.; Keller, Robert A.

    2011-10-01

    In a previous paper in this series, we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) tool may be used to model space and airborne imaging systems operating in the visible to near-infrared (VISNIR). PICASSO is a systems-level tool, representative of a class of such tools used throughout the remote sensing community. It is capable of modeling systems over a wide range of fidelity, anywhere from conceptual design level (where it can serve as an integral part of the systems engineering process) to as-built hardware (where it can serve as part of the verification process). In the present paper, we extend the discussion of PICASSO to the modeling of Thermal Infrared (TIR) remote sensing systems, presenting the equations and methods necessary to modeling in that regime.

  10. Adaptive optical filter

    DOEpatents

    Whittemore, Stephen Richard

    2013-09-10

Imaging systems include a detector and a spatial light modulator (SLM) that is coupled so as to control image intensity at the detector based on predetermined detector limits. By iteratively adjusting SLM element values, image intensity at one or all detector elements or portions of an imaging detector can be controlled to be within limits. The SLM can be secured to the detector at a spacing such that the SLM is effectively at an image focal plane. In some applications, the SLM can be adjusted to impart visible or hidden watermarks to images or to reduce image intensity at one or a selected set of detector elements so as to reduce detector blooming.
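The iterative SLM adjustment can be sketched as a simple feedback loop. The multiplicative update rule, gain, and iteration count below are illustrative assumptions, not the patented control law.

```python
import numpy as np

def adjust_slm(scene, slm, limit, gain=0.5, iters=20):
    """Iteratively reduce SLM transmission wherever the simulated detector
    reading (scene intensity times SLM transmission) exceeds its limit."""
    for _ in range(iters):
        detected = scene * slm
        over = detected > limit
        # Pull the offending elements' transmission toward limit/detected.
        slm[over] *= 1.0 - gain * (1.0 - limit / detected[over])
    return slm
```

Elements already under the limit are left untouched, so only the saturating (blooming-prone) detector elements are attenuated.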

  11. A fast fusion scheme for infrared and visible light images in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhao, Chunhui; Guo, Yunting; Wang, Yulei

    2015-09-01

Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background detail provided by the visible light image and the hidden target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial components of infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. Besides, a fast realization of the non-subsampled contourlet transform is also proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves more effective results with much less time consumed and performs well in both subjective evaluation and objective indicators.

  12. Morning Frost on Martian Surface

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A thin layer of water frost is visible on the ground around NASA's Phoenix Mars Lander in this image taken by the Surface Stereo Imager at 6 a.m. on Sol 79 (August 14, 2008), the 79th Martian day after landing. The frost begins to disappear shortly after 6 a.m. as the sun rises on the Phoenix landing site.

    The sun was about 22 degrees above the horizon when the image was taken, enhancing the detail of the polygons, troughs and rocks around the landing site.

    This view is looking east southeast with the lander's eastern solar panel visible in the bottom lefthand corner of the image. The rock in the foreground is informally named 'Quadlings' and the rock near center is informally called 'Winkies.'

    This false color image has been enhanced to show color variations.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  13. Use of the variable gain settings on SPOT

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    Often the brightness or digital number (DN) range of satellite image data is less than optimal and uses only a portion of the available values (0 to 255) because the range of reflectance values is small. Most imaging systems have been designed with only two gain settings, normal and high. The SPOT High Resolution Visible (HRV) imaging system has the capability to collect image data using one of eight different gain settings. With the proper procedure this allows the brightness or reflectance resolution, which is directly related to the range of DN values recorded, to be optimized for any given site as compared to using a single set of gain settings everywhere. -from Author
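    The gain-selection logic described above can be sketched as follows. The linear radiance-to-DN model and the numeric gain values are illustrative assumptions, not SPOT HRV's actual calibration.

```python
def best_gain(peak_radiance, gains, full_scale=255, coeff=1.0):
    """Pick the highest gain whose predicted peak DN stays on scale,
    so the recorded 0-255 range is used as fully as possible."""
    usable = [g for g in gains if coeff * peak_radiance * g <= full_scale]
    return max(usable) if usable else min(gains)  # fall back to lowest gain

gains = [1, 2, 3, 4, 5, 6, 7, 8]   # stand-ins for the eight HRV settings
print(best_gain(40, gains))        # -> 6 (40*6 = 240 <= 255, 40*7 = 280 > 255)
```

    Choosing the gain per site rather than globally is what stretches the recorded DN range over the available 0 to 255 values for low-reflectance scenes.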

  14. Counter Unmanned Aerial Systems Testing: Evaluation of VIS, SWIR, MWIR, and LWIR passive imagers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birch, Gabriel Carlisle; Woo, Bryana Lynn

    This report contains analysis of unmanned aerial systems as imaged by visible, short-wave infrared, mid-wave infrared, and long-wave infrared passive devices. Testing was conducted at the Nevada National Security Site (NNSS) during the week of August 15, 2016. Target images in all spectral bands are shown and contrast versus background is reported. Calculations are performed to determine estimated pixels-on-target for detection and assessment levels, and the number of pixels needed to cover a hemisphere for detection or assessment at defined distances. Background clutter challenges are qualitatively discussed for different spectral bands, and low contrast scenarios are highlighted for long-wave infrared imagers.
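    The pixels-on-target estimate mentioned above follows from simple small-angle geometry; the function and parameter values below are illustrative assumptions, not the report's actual sensor parameters or methodology.

```python
def pixels_on_target(target_size_m, distance_m, focal_mm, pitch_um):
    """Linear pixels subtended by a target: target size divided by the
    ground sample distance (range times per-pixel IFOV)."""
    ifov = (pitch_um * 1e-6) / (focal_mm * 1e-3)  # radians per pixel
    return target_size_m / (distance_m * ifov)

# e.g. a 0.3 m UAS at 1 km with a 100 mm lens and 15 um pixels
print(round(pixels_on_target(0.3, 1000.0, 100.0, 15.0), 3))  # -> 2.0
```

    Detection-versus-assessment thresholds (in the Johnson-criteria sense) are then a matter of comparing this count against the pixels required for each task.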

  15. Secure Image Transmission over DFT-precoded OFDM-VLC systems based on Chebyshev Chaos scrambling

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Qiu, Weiwei

    2017-08-01

    This paper proposes a physical layer image secure transmission scheme for discrete Fourier transform (DFT) precoded OFDM-based visible light communication systems using Chebyshev chaos maps. In the proposed scheme, 256 subcarriers and QPSK modulation are employed. The transmitted digital signal of the image is encrypted with a Chebyshev chaos sequence. The encrypted signal is then transformed by a DFT precoding matrix to reduce the PAPR of the OFDM signal. After that, the encrypted and DFT-precoded OFDM signal is transmitted over a VLC channel. The simulation results show that the proposed secure image transmission scheme can not only protect the DFT-precoded OFDM-based VLC system from eavesdroppers but also improve BER performance.
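    The keystream idea behind the chaos scrambling can be sketched as follows. The Chebyshev map recurrence is standard; the binarization rule, seed, map order, and bit-level XOR scrambling are illustrative assumptions (the paper operates on the modulated signal, and the QPSK/DFT-precoding stages are omitted here).

```python
import math

def chebyshev_sequence(x0, k, n):
    """Chebyshev chaos map x_{n+1} = cos(k * arccos(x_n)), x in [-1, 1]."""
    seq, x = [], x0
    for _ in range(n):
        x = math.cos(k * math.acos(x))
        seq.append(x)
    return seq

def scramble(bits, x0=0.3, k=4):
    """XOR a bit stream with a chaos-derived keystream; the same seed
    and map order recover the original (XOR is its own inverse)."""
    key = [1 if x > 0 else 0 for x in chebyshev_sequence(x0, k, len(bits))]
    return [b ^ s for b, s in zip(bits, key)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
enc = scramble(bits)
dec = scramble(enc)   # legitimate receiver shares (x0, k)
print(dec == bits)    # -> True
```

    Security rests on the keystream's sensitivity to the initial value x0, which acts as the shared secret between transmitter and receiver.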

  16. [Research on Spectral Polarization Imaging System Based on Static Modulation].

    PubMed

    Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng

    2015-04-01

    The main disadvantages of a traditional spectral polarization imaging system are its complex structure, moving parts, and low throughput. A novel spectral polarization imaging method is discussed, based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain real-time spectral information together with the four Stokes polarization parameters. Compared with conventional methods, the advantages of the imaging system are its compactness, low mass, and high throughput, with no moving parts, no electrical control, and no slit. The system structure and the basic theory are introduced. The experimental system, established in the laboratory, consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collecting and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a plane toy were imaged using the experimental system, verifying its ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of polarization degree detection is less than 5%. The validity and feasibility of the basic principle are proved by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification and remote sensing detection.
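    The polarization-degree error quoted above is defined over the Stokes parameters. A minimal sketch of that quantity, using the standard definition rather than the paper's calibration procedure:

```python
import math

def degree_of_polarization(s0, s1, s2, s3):
    """DoP = sqrt(S1^2 + S2^2 + S3^2) / S0 for a Stokes vector (S0, S1, S2, S3)."""
    return math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0

print(round(degree_of_polarization(1.0, 0.6, 0.0, 0.8), 6))  # fully polarized light
print(degree_of_polarization(2.0, 0.0, 0.0, 0.0))            # unpolarized light
```

    A sub-5% statistical error on this quantity means the recovered Stokes components stay close enough to the true ones that the ratio above is off by less than 0.05.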

  17. Partly Cloudy on Pluto?

    NASA Image and Video Library

    2016-10-18

    Pluto's present, hazy atmosphere is almost entirely free of clouds, though scientists from NASA's New Horizons mission have identified some cloud candidates after examining images taken by the New Horizons Long Range Reconnaissance Imager and Multispectral Visible Imaging Camera, during the spacecraft's July 2015 flight through the Pluto system. All are low-lying, isolated small features -- no broad cloud decks or fields -- and while none of the features can be confirmed with stereo imaging, scientists say they are suggestive of possible, rare condensation clouds. http://photojournal.jpl.nasa.gov/catalog/PIA21127

  18. Fluorometric Biosniffer Camera "Sniff-Cam" for Direct Imaging of Gaseous Ethanol in Breath and Transdermal Vapor.

    PubMed

    Arakawa, Takahiro; Sato, Toshiyuki; Iitani, Kenta; Toma, Koji; Mitsubayashi, Kohji

    2017-04-18

    Various volatile organic compounds can be found in human transpiration, breath and body odor. In this paper, a novel two-dimensional fluorometric imaging system, known as a "sniffer-cam," for ethanol vapor released from human breath and palm skin was constructed and validated. This imaging system measures ethanol vapor concentrations as intensities of fluorescence through an enzymatic reaction induced by alcohol dehydrogenase (ADH). The system consists of ultraviolet light-emitting diode (UV-LED) excitation sheets, an ADH enzyme-immobilized mesh substrate, and a highly sensitive CCD camera. It uses ADH for the recognition of ethanol vapor, measuring the fluorescence of nicotinamide adenine dinucleotide (NADH), which is produced by the enzymatic reaction on the mesh. This NADH fluorometric imaging system achieved two-dimensional real-time imaging of ethanol vapor distribution (0.5-200 ppm). The system showed rapid and accurate responses and a visible measurement, which could lead to real-time analysis of metabolic function in the near future.

  19. Spherical grating based x-ray Talbot interferometry.

    PubMed

    Cong, Wenxiang; Xi, Yan; Wang, Ge

    2015-11-01

    Grating interferometry is a state-of-the-art x-ray imaging approach, which can acquire information on x-ray attenuation, phase shift, and small-angle scattering simultaneously. Phase-contrast imaging and dark-field imaging are very sensitive to microstructural variation and offer superior contrast resolution for biological soft tissues. However, a common x-ray tube is a point-like source. As a result, the popular planar grating imaging configuration seriously restricts the flux of photons and decreases the visibility of signals, yielding a limited field of view. The purpose of this study is to extend planar x-ray grating imaging theory and methods to a spherical grating scheme for a wider range of preclinical and clinical applications. A spherical grating matches the wave front of a point x-ray source very well, allowing the perpendicular incidence of x-rays on the grating to achieve a higher visibility over a larger field of view than the planar grating counterpart. A theoretical analysis of the Talbot effect for spherical grating imaging is proposed to establish a basic foundation for x-ray spherical grating interferometry. An efficient method of spherical grating imaging is also presented to extract attenuation, differential phase, and dark-field images in the x-ray spherical grating interferometer. Talbot self-imaging with spherical gratings is analyzed based on the Rayleigh-Sommerfeld diffraction formula, featuring a periodic angular distribution in a polar coordinate system. The Talbot distance is derived to reveal the Talbot self-imaging pattern. Numerical simulation results show the self-imaging phenomenon of a spherical grating interferometer, in agreement with the theoretical prediction. X-ray Talbot interferometry with spherical gratings holds significant practical promise: relative to planar grating imaging, it offers a larger field of view and improves both signal visibility and dose utilization for preclinical and clinical applications.
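    For orientation, the classical planar Talbot condition that this work generalizes is the standard result below; the spherical-grating analysis in the paper replaces the linear period with a periodic angular distribution in polar coordinates.

```latex
% Standard planar Talbot distance for a grating of period p
% illuminated by a plane wave of wavelength \lambda:
z_T = \frac{2 p^2}{\lambda}
% Self-images of the grating recur at integer multiples n z_T downstream,
% with fractional Talbot images at rational fractions of z_T.
```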

  20. Spherical grating based x-ray Talbot interferometry

    PubMed Central

    Cong, Wenxiang; Xi, Yan; Wang, Ge

    2015-01-01

    Purpose: Grating interferometry is a state-of-the-art x-ray imaging approach, which can acquire information on x-ray attenuation, phase shift, and small-angle scattering simultaneously. Phase-contrast imaging and dark-field imaging are very sensitive to microstructural variation and offer superior contrast resolution for biological soft tissues. However, a common x-ray tube is a point-like source. As a result, the popular planar grating imaging configuration seriously restricts the flux of photons and decreases the visibility of signals, yielding a limited field of view. The purpose of this study is to extend planar x-ray grating imaging theory and methods to a spherical grating scheme for a wider range of preclinical and clinical applications. Methods: A spherical grating matches the wave front of a point x-ray source very well, allowing the perpendicular incidence of x-rays on the grating to achieve a higher visibility over a larger field of view than the planar grating counterpart. A theoretical analysis of the Talbot effect for spherical grating imaging is proposed to establish a basic foundation for x-ray spherical grating interferometry. An efficient method of spherical grating imaging is also presented to extract attenuation, differential phase, and dark-field images in the x-ray spherical grating interferometer. Results: Talbot self-imaging with spherical gratings is analyzed based on the Rayleigh–Sommerfeld diffraction formula, featuring a periodic angular distribution in a polar coordinate system. The Talbot distance is derived to reveal the Talbot self-imaging pattern. Numerical simulation results show the self-imaging phenomenon of a spherical grating interferometer, in agreement with the theoretical prediction. Conclusions: X-ray Talbot interferometry with spherical gratings holds significant practical promise: relative to planar grating imaging, it offers a larger field of view and improves both signal visibility and dose utilization for preclinical and clinical applications. PMID:26520741

  1. Spherical grating based x-ray Talbot interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cong, Wenxiang, E-mail: congw@rpi.edu; Xi, Yan, E-mail: xiy2@rpi.edu; Wang, Ge, E-mail: wangg6@rpi.edu

    2015-11-15

    Purpose: Grating interferometry is a state-of-the-art x-ray imaging approach, which can acquire information on x-ray attenuation, phase shift, and small-angle scattering simultaneously. Phase-contrast imaging and dark-field imaging are very sensitive to microstructural variation and offer superior contrast resolution for biological soft tissues. However, a common x-ray tube is a point-like source. As a result, the popular planar grating imaging configuration seriously restricts the flux of photons and decreases the visibility of signals, yielding a limited field of view. The purpose of this study is to extend planar x-ray grating imaging theory and methods to a spherical grating scheme for a wider range of preclinical and clinical applications. Methods: A spherical grating matches the wave front of a point x-ray source very well, allowing the perpendicular incidence of x-rays on the grating to achieve a higher visibility over a larger field of view than the planar grating counterpart. A theoretical analysis of the Talbot effect for spherical grating imaging is proposed to establish a basic foundation for x-ray spherical grating interferometry. An efficient method of spherical grating imaging is also presented to extract attenuation, differential phase, and dark-field images in the x-ray spherical grating interferometer. Results: Talbot self-imaging with spherical gratings is analyzed based on the Rayleigh–Sommerfeld diffraction formula, featuring a periodic angular distribution in a polar coordinate system. The Talbot distance is derived to reveal the Talbot self-imaging pattern. Numerical simulation results show the self-imaging phenomenon of a spherical grating interferometer, in agreement with the theoretical prediction. Conclusions: X-ray Talbot interferometry with spherical gratings holds significant practical promise: relative to planar grating imaging, it offers a larger field of view and improves both signal visibility and dose utilization for preclinical and clinical applications.

  2. Hyperspectral reflectance and fluorescence line-scan imaging system for online detection of fecal contamination on apples

    NASA Astrophysics Data System (ADS)

    Kim, Moon S.; Cho, Byoung-Kwan; Yang, Chun-Chieh; Chao, Kaunglin; Lefcourt, Alan M.; Chen, Yud-Ren

    2006-10-01

    We have developed nondestructive opto-electronic imaging techniques for rapid assessment of safety and wholesomeness of foods. A recently developed fast hyperspectral line-scan imaging system integrated with a commercial apple-sorting machine was evaluated for rapid detection of animal fecal matter on apples. Apples obtained from a local orchard were artificially contaminated with cow feces. For the online trial, hyperspectral images with 60 spectral channels, reflectance in the visible to near infrared regions and fluorescence emissions with UV-A excitation, were acquired from apples moving at a processing sorting-line speed of three apples per second. Reflectance and fluorescence imaging required a passive light source, and each method used independent continuous wave (CW) light sources. In this paper, integration of the hyperspectral imaging system with the commercial apple-sorting machine and preliminary results for detection of fecal contamination on apples, mainly based on the fluorescence method, are presented.

  3. Jupiter's Southern Hemisphere in the Near-Infrared (Time Set 2)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mosaic of Jupiter's southern hemisphere between -25 and -80 degrees (south) latitude. In time sequence two, taken nine hours after sequence one, the limb is visible near the bottom right part of the mosaic. The curved border near the bottom left indicates the location of Jupiter's day/night terminator.

    Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the brightness and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including two large vortices, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two vortices in the center of the mosaic is about 3500 kilometers. The right oval is rotating counterclockwise, like other anticyclonic bright vortices in Jupiter's atmosphere. The left vortex is a cyclonic (clockwise) vortex. The differences between them (their brightness, their symmetry, and their behavior) are clues to how Jupiter's atmosphere works. The cloud features visible at 756 nanometers (near-infrared light) are at an atmospheric pressure level of about 1 bar.

    North is at the top. The images are projected onto a sphere, with features being foreshortened towards the south and east. The smallest resolved features are tens of kilometers in size. These images were taken on May 7, 1997, at a range of 1.5 million kilometers by the Solid State Imaging system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  4. Jupiter's Southern Hemisphere in the Near-Infrared (Time Set 3)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mosaic of Jupiter's southern hemisphere between -25 and -80 degrees (south) latitude. In time sequence three, taken 10 hours after sequence one, the limb is visible near the bottom right part of the mosaic.

    Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the brightness and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including two large vortices, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two vortices in the center of the mosaic is about 3500 kilometers. The right oval is rotating counterclockwise, like other anticyclonic bright vortices in Jupiter's atmosphere. The left vortex is a cyclonic (clockwise) vortex. The differences between them (their brightness, their symmetry, and their behavior) are clues to how Jupiter's atmosphere works. The cloud features visible at 756 nanometers (near-infrared light) are at an atmospheric pressure level of about 1 bar.

    North is at the top. The images are projected onto a sphere, with features being foreshortened towards the south and east. The smallest resolved features are tens of kilometers in size. These images were taken on May 7, 1997, at a range of 1.5 million kilometers by the Solid State Imaging system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  5. Hubble Provides Infrared View of Jupiter's Moon, Ring, and Clouds

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Probing Jupiter's atmosphere for the first time, the Hubble Space Telescope's new Near Infrared Camera and Multi-Object Spectrometer (NICMOS) provides a sharp glimpse of the planet's ring, moon, and high-altitude clouds.

    The presence of methane in Jupiter's hydrogen- and helium-rich atmosphere has allowed NICMOS to plumb Jupiter's atmosphere, revealing bands of high-altitude clouds. Visible light observations cannot provide a clear view of these high clouds because the underlying clouds reflect so much visible light that the higher level clouds are indistinguishable from the lower layer. The methane gas between the main cloud deck and the high clouds absorbs the reflected infrared light, allowing those clouds that are above most of the atmosphere to appear bright. Scientists will use NICMOS to study the high-altitude portion of Jupiter's atmosphere and the clouds at lower levels. They will then analyze those images along with visible light information to compile a clearer picture of the planet's weather. Clouds at different levels tell unique stories. On Earth, for example, ice crystal (cirrus) clouds are found at high altitudes while water (cumulus) clouds are at lower levels.

    Besides showing details of the planet's high-altitude clouds, NICMOS also provides a clear view of the ring and the moon, Metis. Jupiter's ring plane, seen nearly edge-on, is visible as a faint line on the upper right portion of the NICMOS image. Metis can be seen in the ring plane (the bright circle on the ring's outer edge). The moon is 25 miles wide and about 80,000 miles from Jupiter.

    Because of the near-infrared camera's narrow field of view, this image is a mosaic constructed from three individual images taken Sept. 17, 1997. The color intensity was adjusted to accentuate the high-altitude clouds. The dark circle on the disk of Jupiter (center of image) is an artifact of the imaging system.

    This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/

  6. MSTI-3 sensor package optical design

    NASA Astrophysics Data System (ADS)

    Horton, Richard F.; Baker, William G.; Griggs, Michael; Nguyen, Van; Baker, H. Vernon

    1995-06-01

    The MSTI-3 sensor package is a three-band imaging telescope for military and dual-use sensing missions. The MSTI-3 mission is one of the Air Force Phillips Laboratory's Pegasus-launched space missions, the third in a series of state-of-the-art lightweight sensors on low-cost satellites. The satellite is planned for launch into a 425 km orbit in late 1995. The MSTI-3 satellite is configured with a down-looking two-axis gimbal and gimbal mirror. The gimbal mirror is approximately 13 cm by 29 cm, allowing a field of regard of approximately 100 degrees by 180 degrees. The optical train uses several novel optical features to allow for compactness and light weight. A 105 mm Ritchey-Chretien Cassegrain imaging system with a CaF2 dome astigmatism corrector is followed by a CaF2 beamsplitter cube assembly at the system's first focus. The dichroic beamsplitter cube assembly separates the light into a visible channel and two IR channels covering approximately 2.5 to 3.3 micron (SWIR) and 3.5 to 4.5 micron (MWIR) wavelength bands. The two IR imaging channels each consist of a unity-power re-imaging lens cluster, a cooled seven-position filter wheel, a cooled Lyot stop, and an Amber 256 x 256 InSb array camera. The visible channel uses a unity-power re-imaging system ahead of a linear variable filter with a Sony CCD array, which allows for multispectral imaging capability in the 0.5 to 0.8 micron region. The telescope field of view is 1.4 degrees square.

  7. Fiber optic-based optical coherence tomography (OCT) for dental applications

    NASA Astrophysics Data System (ADS)

    Everett, Matthew J.; Colston, Bill W., Jr.; Da Silva, Luiz B.; Otis, Linda L.

    1998-09-01

    We have developed a hand-held, fiber optic-based optical coherence tomography (OCT) system for scanning of the oral cavity. Using this scanning device, we have produced in vivo cross-sectional images of hard and soft dental tissues in human volunteers. Clinically relevant anatomical structures, including the gingival margin, periodontal sulcus, and dento-enamel junction, were visible in all the images. The cemento-enamel junction and the alveolar bone were identified in approximately two thirds of the images. These images represent, to our knowledge, the first in vivo OCT images of human dental tissue.

  8. Looking at Earth from Space: Teacher's Guide with Activities for Earth and Space Science

    NASA Technical Reports Server (NTRS)

    Steele, Colleen (Editor); Ryan, William F.

    1995-01-01

    The Maryland Pilot Earth Science and Technology Education Network (MAPS-NET) project was sponsored by the National Aeronautics and Space Administration (NASA) to enrich teacher preparation and classroom learning in the area of Earth system science. This publication includes a teacher's guide that replicates material taught during a graduate-level course of the project and activities developed by the teachers. The publication was developed to provide teachers with a comprehensive approach to using satellite imagery to enhance science education. The teacher's guide is divided into topical chapters and enables teachers to expand their knowledge of the atmosphere, common weather patterns, and remote sensing. Topics include: weather systems and satellite imagery including mid-latitude weather systems; wave motion and the general circulation; cyclonic disturbances and baroclinic instability; clouds; additional common weather patterns; satellite images and the internet; environmental satellites; orbits; and ground station set-up. Activities are listed by suggested grade level and include the following topics: using weather symbols; forecasting the weather; cloud families and identification; classification of cloud types through infrared Automatic Picture Transmission (APT) imagery; comparison of visible and infrared imagery; cold fronts; to ski or not to ski (imagery as a decision making tool), infrared and visible satellite images; thunderstorms; looping satellite images; hurricanes; intertropical convergence zone; and using weather satellite images to enhance a study of the Chesapeake Bay. A list of resources is also included.

  9. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging

    PubMed Central

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-01-01

    Skinning injury on potato tubers is a superficial wound generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, calculations of BA using varied numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Squares Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capturing and processing efficiency can be sped up in biospeckle imaging, with the captured 512 frames reduced to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging at different stages: visible imaging is suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging. PMID:27763555
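    A region-wise activity measure over a speckle-frame stack can be sketched as follows. Per-pixel temporal contrast (std/mean) is an assumed stand-in for the paper's BA calculation, which is not specified here; the tiny 2-pixel, 3-frame stack is purely illustrative.

```python
def biospeckle_activity(frames):
    """Per-pixel temporal std/mean over a stack of speckle frames:
    fluctuating (biologically active) pixels score high, static ones low."""
    n = len(frames)
    out = []
    for px in zip(*frames):                       # iterate pixels across frames
        mean = sum(px) / n
        var = sum((v - mean) ** 2 for v in px) / n
        out.append((var ** 0.5) / mean if mean else 0.0)
    return out

# pixel 0 fluctuates between frames, pixel 1 is static
frames = [[10, 50], [60, 50], [20, 50]]
ba = biospeckle_activity(frames)
print(ba[0] > ba[1])  # -> True: the fluctuating pixel shows higher activity
```

    Reducing the stack from 512 to 125 frames, as the paper reports, trades statistical stability of this kind of temporal measure for capture and processing speed.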

  10. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging.

    PubMed

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-10-18

    Skinning injury on potato tubers is a superficial wound generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, calculations of BA using varied numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Squares Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capturing and processing efficiency can be sped up in biospeckle imaging, with the captured 512 frames reduced to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging at different stages: visible imaging is suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging.

  11. A Stellar Ripple

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This false-color composite image shows the Cartwheel galaxy as seen by the Galaxy Evolution Explorer's far ultraviolet detector (blue); the Hubble Space Telescope's wide field and planetary camera 2 in B-band visible light (green); the Spitzer Space Telescope's infrared array camera at 8 microns (red); and the Chandra X-ray Observatory's advanced CCD imaging spectrometer-S array instrument (purple).

    Approximately 100 million years ago, a smaller galaxy plunged through the heart of the Cartwheel galaxy, creating ripples of brief star formation. In this image, the first ripple appears as an ultraviolet-bright blue outer ring. The blue outer ring is so powerful in the Galaxy Evolution Explorer observations that it indicates the Cartwheel is one of the most powerful UV-emitting galaxies in the nearby universe. The blue color reveals to astronomers that associations of stars 5 to 20 times as massive as our sun are forming in this region. The clumps of pink along the outer blue ring are regions where both X-rays and ultraviolet radiation are superimposed in the image. These X-ray point sources are very likely collections of binary star systems containing a black hole (called massive X-ray binary systems). The X-ray sources seem to cluster around optical/ultraviolet-bright supermassive star clusters.

    The yellow-orange inner ring and nucleus at the center of the galaxy result from the combination of visible and infrared light, which is stronger towards the center. This region of the galaxy represents the second ripple, or ring wave, created in the collision, but has much less star formation activity than the first (outer) ring wave. The wisps of red spread throughout the interior of the galaxy are organic molecules that have been illuminated by nearby low-level star formation. Meanwhile, the tints of green are less massive, older visible-light stars.

    Although astronomers have not identified exactly which galaxy collided with the Cartwheel, two of three candidate galaxies can be seen in this image to the bottom left of the ring, one as a neon blob and the other as a green spiral.

    Previously, scientists believed the ring marked the outermost edge of the galaxy, but the latest GALEX observations detect a faint disk, not visible in this image, that extends to twice the diameter of the ring.

  12. Method for imaging a concealed object

    DOEpatents

    Davidson, James R [Idaho Falls, ID; Partin, Judy K [Idaho Falls, ID; Sawyers, Robert J [Idaho Falls, ID

    2007-07-03

    A method for imaging a concealed object is described which includes the steps of: providing a heat radiating body, wherein an object to be detected is concealed on the heat radiating body; imaging the heat radiating body to provide a visibly discernible infrared image of the heat radiating body; and determining whether the visibly discernible infrared image of the heat radiating body is masked by the presence of the concealed object.

  13. The visible human project®: From body to bits.

    PubMed

    Ackerman, Michael J

    2016-08-01

    In the mid-1990s the U.S. National Library of Medicine sponsored the acquisition and development of the Visible Human Project® database. This image database contains anatomical cross-sectional images which allow the reconstruction of three-dimensional male and female anatomy to an accuracy of less than 1.0 mm. The male anatomy is contained in a 15 gigabyte database, the female in a 39 gigabyte database. This talk will describe why and how this project was accomplished and demonstrate some of the products which the Visible Human dataset has made possible. I will conclude by describing how the Visible Human Project, completed over 20 years ago, has led the National Library of Medicine to a series of image research projects, including an open source image processing toolkit which is included in several commercial products.

  14. Fluorescence optical imaging in anticancer drug delivery.

    PubMed

    Etrych, Tomáš; Lucas, Henrike; Janoušková, Olga; Chytil, Petr; Mueller, Thomas; Mäder, Karsten

    2016-03-28

    In the past several decades, nanosized drug delivery systems with various targeting functions and controlled drug release capabilities inside targeted tissues or cells have been intensively studied. Understanding their pharmacokinetic properties is crucial for the successful transition of this research into clinical practice. Among others, fluorescence imaging has become one of the most commonly used imaging tools in pre-clinical research. The development of increasing numbers of suitable fluorescent dyes excitable in the visible to near-infrared wavelengths of the spectrum has significantly expanded the applicability of fluorescence imaging. This paper focuses on the potential applications and limitations of non-invasive imaging techniques in the field of drug delivery, especially in anticancer therapy. Fluorescent imaging at both the cellular and systemic levels is discussed in detail. Additionally, we explore the possibility for simultaneous treatment and imaging using theranostics and combinations of different imaging techniques, e.g., fluorescence imaging with computed tomography.

  15. EOID Evaluation and Automated Target Recognition

    DTIC Science & Technology

    2002-09-30

    Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects (MLOs) that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist

  16. EOID Evaluation and Automated Target Recognition

    DTIC Science & Technology

    2001-09-30

    Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist the

  17. Energy and Technology Review

    NASA Astrophysics Data System (ADS)

    Poggio, Andrew J.

    1988-10-01

    This issue of Energy and Technology Review contains: Neutron Penumbral Imaging of Laser-Fusion Targets--using our new penumbral-imaging diagnostic, we have obtained the first images that can be used to measure directly the deuterium-tritium burn region in laser-driven fusion targets; Computed Tomography for Nondestructive Evaluation--various computed tomography systems and computational techniques are used in nondestructive evaluation; Three-Dimensional Image Analysis for Studying Nuclear Chromatin Structure--we have developed an optic-electronic system for acquiring cross-sectional views of cell nuclei, and computer codes to analyze these images and reconstruct the three-dimensional structures they represent; Imaging in the Nuclear Test Program--advanced techniques produce images of unprecedented detail and resolution from Nevada Test Site data; and Computational X-Ray Holography--visible-light experiments and numerically simulated holograms test our ideas about an X-ray microscope for biological research.

  18. An infra-red imaging system for the analysis of tropisms in Arabidopsis thaliana seedlings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orbovic, V.; Poff, K.L.

    1990-05-01

    Since blue and green light will induce phototropism and red light is absorbed by phytochrome, no wavelength of visible radiation should be considered safe for any study of tropisms in etiolated seedlings. For this reason, we have developed an infra-red imaging system with a video camera with which we can monitor seedlings using radiation at wavelengths longer than 800 nm. The image of the seedlings can be observed in real time, recorded on a VCR and subsequently analyzed using the Java image analysis system. The time courses for curvature of seedlings differ in shape, amplitude, and lag time. This variability accounts for much of the noise in the measurement of curvature for a population of seedlings.

  19. Image overlay solution based on threshold detection for a compact near infrared fluorescence goggle system

    NASA Astrophysics Data System (ADS)

    Gao, Shengkui; Mondal, Suman B.; Zhu, Nan; Liang, RongGuang; Achilefu, Samuel; Gruev, Viktor

    2015-01-01

    Near infrared (NIR) fluorescence imaging has shown great potential for various clinical procedures, including intraoperative image guidance. However, existing NIR fluorescence imaging systems either have a large footprint or are handheld, which limits their usage in intraoperative applications. We present a compact NIR fluorescence imaging system (NFIS) with an image overlay solution based on threshold detection, which can be easily integrated with a goggle display system for intraoperative guidance. The proposed NFIS achieves compactness, light weight, hands-free operation, high-precision superimposition, and a real-time frame rate. In addition, the miniature and ultra-lightweight light-emitting diode tracking pod is easy to incorporate with NIR fluorescence imaging. Based on experimental evaluation, the proposed NFIS solution has a lower detection limit of 25 nM of indocyanine green at 27 fps and realizes a highly precise image overlay of NIR and visible images of mice in vivo. The overlay error is limited within a 2-mm scale at a 65-cm working distance, which is highly reliable for clinical study and surgical use.
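
    The threshold-detection overlay described above can be illustrated with a toy sketch: pixels whose NIR fluorescence signal exceeds a threshold are blended into the visible image as a pseudocolor. The function name, pseudocolor, and blending factor are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def overlay_nir(visible_rgb, nir, threshold, color=(0, 255, 0), alpha=0.5):
    """Toy threshold-based overlay: blend a pseudocolor into the visible
    image wherever the NIR fluorescence signal exceeds `threshold`.

    visible_rgb : (H, W, 3) uint8 visible-light image
    nir         : (H, W) float NIR fluorescence intensity
    """
    out = visible_rgb.astype(float)
    mask = nir > threshold                    # binary fluorescence mask
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(color, dtype=float)
    return out.astype(np.uint8), mask
```

    In a real goggle system the two images would first have to be registered, which is exactly what the tracking-pod hardware and the reported 2-mm overlay error are about.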

  20. Winds Near Jupiter's Belt-Zone Boundary

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Time Sequence of a belt-zone boundary near Jupiter's equator. These mosaics show Jupiter's appearance at 757 nanometers (near-infrared) and were taken nine hours apart. Images at 757 nanometers show features of Jupiter's primary visible cloud deck.

    Jupiter's atmospheric circulation is dominated by alternating jets of east/west (zonal) winds. The bands have different widths and wind speeds but have remained constant as long as telescopes and spacecraft have measured them. A strong eastward jet is made visible as it stretches the clouds just below the center of this mosaic. The maximum wind speed of this jet is 128 meters per second (286 miles per hour). Features on this jet move about one quarter of the width of the mosaic. All the features visible in these mosaics are moving eastward (right).

    North is at the top. The mosaic covers latitudes -13 to +3 degrees and is centered at longitude 282 degrees West. The smallest resolved features are tens of kilometers in size. These images were taken on November 5th, 1996, at a range of 1.2 million kilometers by the Solid State Imaging system aboard NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  1. Full-view 3D imaging system for functional and anatomical screening of the breast

    NASA Astrophysics Data System (ADS)

    Oraevsky, Alexander; Su, Richard; Nguyen, Ha; Moore, James; Lou, Yang; Bhadra, Sayantan; Forte, Luca; Anastasio, Mark; Yang, Wei

    2018-04-01

    The Laser Optoacoustic Ultrasonic Imaging System Assembly (LOUISA-3D) was developed in response to demand from diagnostic radiologists for an advanced breast screening system that improves on the low sensitivity of x-ray based modalities (mammography and tomosynthesis) in the dense, heterogeneous breast and the low specificity of magnetic resonance imaging. It is our working hypothesis that co-registration of quantitatively accurate functional images of the breast vasculature and microvasculature with anatomical images of breast morphological structures will provide a clinically viable solution for breast cancer care. Functional imaging in LOUISA-3D is enabled by full-view 3D optoacoustic images acquired at two rapidly toggling laser wavelengths in the near-infrared spectral range. 3D images of the breast anatomical background are provided by a sequence of B-mode ultrasound slices acquired with a transducer array rotating around the breast. This makes it possible to visualize distributions of total hemoglobin and blood oxygen saturation within specific morphological structures, such as tumor angiogenesis microvasculature and larger vasculature in proximity to the tumor. The system has four major components: (i) a pulsed dual-wavelength laser with a fiberoptic light delivery system, (ii) an imaging module with two arc-shaped probes (optoacoustic and ultrasonic) placed in a transparent bowl that rotates around the breast, (iii) a multichannel electronic system with analog preamplifiers and digital data acquisition boards, and (iv) a computer for system control, data processing, and image reconstruction. The most important advancement of this latest design over previously reported systems is full breast illumination at each rotational step of the optoacoustic transducer array, accomplished with a fiberoptic illuminator that rotates around the breast independently of the detector probe.
We report here pilot case studies on one healthy volunteer and on a patient with a suspicious small lesion in the breast. LOUISA-3D visualized deoxygenated veins and oxygenated arteries of the healthy volunteer, indicative of its capability to visualize hypoxic microvasculature in cancerous tumors. A small lesion detected on the optoacoustic image of the patient was not visible on ultrasound, potentially indicating high sensitivity of the optoacoustic subsystem to small but aggressively growing cancerous lesions with high-density angiogenesis microvasculature. The main breast vasculature (0.5-1 mm) was visible at depths of up to 40 mm with 0.3-mm resolution. The results of this pilot clinical validation demonstrate the system's readiness for a statistically significant clinical feasibility study.

  2. Predicting Visibility of Aircraft

    PubMed Central

    Watson, Andrew; Ramirez, Cesar V.; Salud, Ellen

    2009-01-01

    Visual detection of aircraft by human observers is an important element of aviation safety. To assess and ensure safety, it would be useful to be able to predict the visibility, to a human observer, of an aircraft of specified size, shape, distance, and coloration. Examples include assuring safe separation among aircraft and between aircraft and unmanned vehicles, design of airport control towers, and efforts to enhance or suppress the visibility of military and rescue vehicles. We have recently developed a simple metric of pattern visibility, the Spatial Standard Observer (SSO). In this report we examine whether the SSO can predict visibility of simulated aircraft images. We constructed a set of aircraft images from three-dimensional computer graphic models, and measured the luminance contrast threshold for each image from three human observers. The data were well predicted by the SSO. Finally, we show how to use the SSO to predict visibility range for aircraft of arbitrary size, shape, distance, and coloration. PMID:19462007
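
    The SSO itself is not reproduced here, but the final step, extrapolating a visibility range from a threshold measured at a reference distance, can be sketched with a deliberately crude scaling assumption: for a compact target, the detection threshold grows roughly as the square of distance because the target's angular area shrinks as 1/d². Both the function name and the scaling law are illustrative assumptions, not the SSO:

```python
import math

def visibility_range(target_contrast, threshold_at_ref, ref_distance):
    """Distance at which the target's (distance-independent) luminance
    contrast equals the rising detection threshold, under the toy
    assumption threshold(d) = threshold_at_ref * (d / ref_distance)**2.
    """
    return ref_distance * math.sqrt(target_contrast / threshold_at_ref)
```

    For example, under this scaling a target with contrast 0.04 against a threshold of 0.01 measured at 1 km would be predicted visible out to about 2 km.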

  3. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

    Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of a scene. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that the SGWT is used for image fusion. On the one hand, the source images are decomposed by the SGWT in its transform domain; the proposed approach not only effectively preserves the details of the different source images, but also represents their irregular areas well. On the other hand, a novel weighted-average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
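
    The general multiscale recipe (decompose, fuse subbands, reconstruct) can be sketched with stand-ins: a box blur replaces the SGWT, and plain averaging plus max-magnitude selection replace the paper's bilateral-filter weighting. Everything here is an illustrative simplification, not the proposed algorithm:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur (stand-in for a real multiscale low-pass)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(ir, vis):
    """Toy one-level multiscale fusion:
      low band  -> average of the two low-pass images
      high band -> coefficient with the larger magnitude
    """
    ir, vis = np.asarray(ir, float), np.asarray(vis, float)
    low_ir, low_vis = box_blur(ir), box_blur(vis)
    high_ir, high_vis = ir - low_ir, vis - low_vis
    low = 0.5 * (low_ir + low_vis)
    high = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
    return low + high
```

    Real methods differ mainly in the decomposition (here SGWT) and in how the subband weights are computed (here a bilateral filter exploiting spatial consistency).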

  4. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain.

    PubMed

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-04-11

    Fusing infrared and visible images typically involves many hand-tuned parameters. To overcome the loss of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The NSST decomposes the images into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail, and the low-frequency bands are fused under a saliency constraint, so targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective at preserving details and highlighting targets than other state-of-the-art methods.
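
    The two fusion rules can be illustrated with toy surrogates. The NSST decomposition itself is omitted, and both the gradient and saliency measures below are simplified assumptions rather than the authors' constraints:

```python
import numpy as np

def fuse_bands(high_a, high_b, low_a, low_b):
    """Toy versions of the two fusion rules:
      high bands : keep the coefficient whose local gradient is larger
      low bands  : weight each image by a crude saliency map,
                   |intensity - global mean|
    """
    ga = np.abs(np.gradient(high_a)[0]) + np.abs(np.gradient(high_a)[1])
    gb = np.abs(np.gradient(high_b)[0]) + np.abs(np.gradient(high_b)[1])
    high = np.where(ga >= gb, high_a, high_b)

    sa = np.abs(low_a - low_a.mean())
    sb = np.abs(low_b - low_b.mean())
    w = sa / (sa + sb + 1e-12)                # saliency-based weight
    low = w * low_a + (1 - w) * low_b
    return high, low
```

    The fused bands would then go through the inverse transform; the Nash-equilibrium coefficient update described above has no counterpart in this sketch.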

  5. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain

    PubMed Central

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-01-01

    Fusing infrared and visible images typically involves many hand-tuned parameters. To overcome the loss of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The NSST decomposes the images into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail, and the low-frequency bands are fused under a saliency constraint, so targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective at preserving details and highlighting targets than other state-of-the-art methods. PMID:29641505

  6. Measurements of SWIR backgrounds using the swux unit of measure

    NASA Astrophysics Data System (ADS)

    Richards, A.; Hübner, M.; Vollmer, M.

    2018-04-01

    The SWIR waveband between 0.8 μm and 1.8 μm is increasingly exploited by imaging systems in a variety of applications, including persistent imaging for security and surveillance of high-value assets, handheld tactical imagers, range-gated imaging systems, and imaging LADAR for driverless vehicles. The vast majority of these applications use lattice-matched InGaAs detectors in their imaging sensors, and these sensors are rapidly falling in price, leading to their widening adoption. As these sensors are used in novel applications and locations, it is important that ambient SWIR backgrounds be understood and characterized for a variety of field conditions, primarily for system performance modeling of SNR and range metrics. SWIR irradiance backgrounds do not reliably track visible-light illumination. There is currently little information of this type in the open literature, particularly measurements of SWIR backgrounds in urban areas, natural areas, or indoors. This paper presents field measurements made with an InGaAs detector calibrated in the swux unit of InGaAs-band-specific irradiance proposed by two of the authors in 2017. Simultaneous measurements of illuminance levels (in lux) at these sites are presented, as well as visible and InGaAs camera images of the scenery at some of the measurement sites. The swux and lux measurement hardware is described, along with the methods used to calibrate it. Finally, the swux levels during the partial and total phases of the 2017 total solar eclipse are presented, along with curves fitted to the data from a theoretical model based on obscuration of the sun by the moon. The apparent differences between photometric and swux measurements are discussed.

  7. Space infrared telescope pointing control system. Infrared telescope tracking in the presence of target motion

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Schneider, J. B.

    1986-01-01

    The use of charge-coupled devices, or CCDs, has been documented by a number of sources as an effective means of measuring spacecraft attitude with respect to the stars. A method exists of defocusing and interpolating the resulting shape of a star image over a small subsection of a large CCD array. This yields an increase in the accuracy of the device of better than an order of magnitude over the case in which the star image is focused on a single CCD pixel. This research examines the effect that image motion has upon the overall precision of this star sensor when applied to an orbiting infrared observatory. While CCDs collect energy within the visible spectrum of light, the targets of scientific interest may well have no appreciable visible emissions. Image motion has the effect of smearing the image of the star in the direction of motion during a particular sampling interval. The presence of image motion is incorporated into a Kalman filter for the system, and it is shown that the addition of a gyro command term is adequate to compensate for the effect of image motion in the measurement. The updated gyro model is included in this analysis, but has natural frequencies faster than the projected star tracker sample rate for dim stars. The system state equations are reduced by modeling gyro drift as a white noise process. There exists a tradeoff in the selected star tracker sample time between the CCD, which has improved noise characteristics as sample time increases, and the gyro, which will potentially drift further between long attitude updates. A sample time which minimizes pointing estimation error exists for the random drift gyro model as well as for a random walk gyro model.
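
    The idea behind defocusing, spreading a star over several pixels so its position can be interpolated to subpixel accuracy, can be sketched with a simple intensity-weighted centroid. This is a toy stand-in for the interpolation scheme described above, and the simulated spot parameters are arbitrary:

```python
import numpy as np

def star_centroid(subarray):
    """Subpixel centroid of a defocused star image on a CCD subarray."""
    img = np.asarray(subarray, dtype=float)
    img = img - img.min()                     # crude background removal
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (ys * img).sum() / total, (xs * img).sum() / total

# Simulate a defocused star: a Gaussian spot centered at (3.3, 4.7)
# on a 9x9 pixel subarray, with a ~1.5-pixel blur radius.
ys, xs = np.mgrid[0:9, 0:9]
spot = np.exp(-((ys - 3.3) ** 2 + (xs - 4.7) ** 2) / (2 * 1.5 ** 2))
cy, cx = star_centroid(spot)
```

    A focused star confined to one pixel could only be located to the nearest pixel; the blurred spot recovers the fractional position, which is the order-of-magnitude accuracy gain the abstract refers to.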

  8. High-Resolution Large-Field-of-View Three-Dimensional Hologram Display System and Method Thereof

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Mintz, Frederick W. (Inventor); Tsou, Peter (Inventor); Bryant, Nevin A. (Inventor)

    2001-01-01

    A real-time, dynamic, free-space virtual-reality 3-D image display system is enabled by using a unique form of Aerogel as the primary display medium. A preferred embodiment of this system comprises a 3-D mosaic topographic map which is displayed by fusing four projected hologram images. In this embodiment, four holographic images are projected from four separate holograms. Each holographic image subtends one quadrant of the 4π solid angle. By fusing these four holographic images, a static 3-D image such as a featured terrain map would be visible for 360 deg in the horizontal plane and 180 deg in the vertical plane. An input, either acquired by a 3-D image sensor or generated by computer animation, is first converted into a 2-D computer-generated hologram (CGH). This CGH is then downloaded into a large liquid crystal (LC) panel. A laser projector illuminates the CGH-filled LC panel and generates and displays a real 3-D image in the Aerogel matrix.

  9. Multimodal imaging system for dental caries detection

    NASA Astrophysics Data System (ADS)

    Liang, Rongguang; Wong, Victor; Marcus, Michael; Burns, Peter; McLaughlin, Paul

    2007-02-01

    Dental caries is a disease in which minerals of the tooth are dissolved by surrounding bacterial plaques. A caries process present for some time may result in a caries lesion. However, if it is detected early enough, the dentist and dental professionals can implement measures to reverse and control caries. Several optical, nonionizing methods have been investigated and used to detect dental caries in its early stages. However, no single method detects the caries process with both high sensitivity and high specificity. In this paper, we present a multimodal imaging system that combines visible reflectance, fluorescence, and Optical Coherence Tomography (OCT) imaging. This imaging system is designed to obtain one or more two-dimensional images of the tooth (reflectance and fluorescence images) and a three-dimensional OCT image providing depth and size information for the caries. The combination of two- and three-dimensional images of the tooth has the potential for highly sensitive and specific detection of dental caries.

  10. A hardware investigation of robotic SPECT for functional and molecular imaging onboard radiation therapy systems

    PubMed Central

    Yan, Susu; Bowsher, James; Tough, MengHeng; Cheng, Lin; Yin, Fang-Fang

    2014-01-01

    Purpose: To construct a robotic SPECT system and to demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch, as a step toward onboard functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150 L110 robot). An imaging study was performed with a phantom (PET CT PhantomTM), which includes five spheres of 10, 13, 17, 22, and 28 mm diameters. The phantom was placed on a flat-top couch. SPECT projections were acquired either with a parallel-hole collimator or a single-pinhole collimator, both without background in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of parallel-hole and pinhole collimated detectors spanned 180° and 228°, respectively. The pinhole detector viewed an off-centered spherical common volume which encompassed the 28 and 22 mm spheres. The common volume for parallel-hole system was centered at the phantom which encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the phantom and flat-top table while avoiding collision and maintaining the closest possible proximity to the common volume. The robot base and tool coordinates were used for image reconstruction. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing impact of the flat-top couch on detector radius of rotation. Without background, all five spheres were visible in the reconstructed parallel-hole image, while four spheres, all except the smallest one, were visible in the reconstructed pinhole image. 
With background, three spheres of 17, 22, and 28 mm diameters were readily observed with the parallel-hole imaging, and the targeted spheres (22 and 28 mm diameters) were readily observed in the pinhole region-of-interest imaging. Conclusions: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction. PMID:25370663

  11. A hardware investigation of robotic SPECT for functional and molecular imaging onboard radiation therapy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu, E-mail: susu.yan@duke.edu; Tough, MengHeng; Bowsher, James

    Purpose: To construct a robotic SPECT system and to demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch, as a step toward onboard functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150 L110 robot). An imaging study was performed with a phantom (PET CT Phantom™), which includes five spheres of 10, 13, 17, 22, and 28 mm diameters. The phantom was placed on a flat-top couch. SPECT projections were acquired either with a parallel-hole collimator or a single-pinhole collimator, both without background in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of parallel-hole and pinhole collimated detectors spanned 180° and 228°, respectively. The pinhole detector viewed an off-centered spherical common volume which encompassed the 28 and 22 mm spheres. The common volume for parallel-hole system was centered at the phantom which encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the phantom and flat-top table while avoiding collision and maintaining the closest possible proximity to the common volume. The robot base and tool coordinates were used for image reconstruction. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing impact of the flat-top couch on detector radius of rotation. Without background, all five spheres were visible in the reconstructed parallel-hole image, while four spheres, all except the smallest one, were visible in the reconstructed pinhole image. 
With background, three spheres of 17, 22, and 28 mm diameters were readily observed with the parallel-hole imaging, and the targeted spheres (22 and 28 mm diameters) were readily observed in the pinhole region-of-interest imaging. Conclusions: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction.

  12. Lunar Resources Using Moderate Spectral Resolution Visible and Near-infrared Spectroscopy: Al/Si and Soil Maturity

    NASA Technical Reports Server (NTRS)

    Fischer, Erich M.; Pieters, Carle M.; Head, James W.

    1992-01-01

    Modern visible and near-infrared detectors are critically important for the accurate identification and relative abundance measurement of lunar minerals; however, even a very small number of well-placed visible and near-infrared bandpass channels provide a significant amount of general information about crucial lunar resources. The Galileo Solid State Imaging system (SSI) multispectral data are an important example of this. Al/Si and soil maturity are discussed as examples of significant general lunar resource information that can be gleaned from moderate-spectral-resolution visible and near-infrared data with relative ease. Because quantitative albedo data are necessary for these kinds of analyses, data such as those obtained by Galileo SSI are critical. SSI obtained synoptic digital multispectral image data for both the nearside and farside of the Moon during the first Galileo Earth-Moon encounter in December 1990. The data consist of images through seven filters with bandpasses ranging from 0.40 microns in the ultraviolet to 0.99 microns in the near-infrared. Although these data are of moderate spectral resolution, they still provide information on the following lunar resources: (1) titanium content of mature mare soils, based upon the 0.40/0.56-micron (UV/VIS) ratio; (2) mafic mineral abundance, based upon the 0.76/0.99-micron ratio; and (3) the maturity or exposure age of the soils, based upon the 0.56-0.76-micron continuum and the 0.76/0.99-micron ratio. Within constraints, these moderate-spectral-resolution visible and near-infrared reflectance data can also provide elemental information such as Al/Si for mature highland soils.
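
    The band-ratio maps named above are straightforward to compute once the channels are registered and calibrated. A minimal sketch follows; the dictionary keying by wavelength is an assumed convention, not the SSI data format:

```python
import numpy as np

def band_ratios(bands):
    """Toy band-ratio maps from a dict of registered, calibrated
    reflectance images keyed by wavelength in microns.
      uv_vis : 0.40/0.56 ratio, rises with Ti content of mature mare soils
      mafic  : 0.76/0.99 ratio, tracks the 1-micron mafic absorption
    """
    eps = 1e-12                               # guard against divide-by-zero
    uv_vis = bands[0.40] / (bands[0.56] + eps)
    mafic = bands[0.76] / (bands[0.99] + eps)
    return uv_vis, mafic
```

    The soil-maturity indicator would additionally use the 0.56-0.76-micron continuum slope together with the 0.76/0.99 ratio, as described above.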

  13. Automatic Detection of Diseased Tomato Plants Using Thermal and Stereo Visible Light Images

    PubMed Central

    Raza, Shan-e-Ahmed; Prince, Gillian; Clarkson, John P.; Rajpoot, Nasir M.

    2015-01-01

    Accurate and timely detection of plant diseases can help mitigate the worldwide losses experienced by the horticulture and agriculture industries each year. Thermal imaging provides a fast and non-destructive way of scanning plants for diseased regions and has been used by various researchers to study the effect of disease on the thermal profile of a plant. However, the thermal image of a diseased plant is also affected by environmental conditions, including leaf angles and the depth of the canopy areas accessible to the thermal imaging camera. In this paper, we combine thermal and visible light image data with depth information and develop a machine learning system to remotely detect plants infected with the tomato powdery mildew fungus Oidium neolycopersici. We extract a novel feature set from the image data using local and global statistics and show that combining these with the depth information considerably improves the accuracy of detection of the diseased plants. In addition, we show that our novel feature set is capable of identifying plants which were not originally inoculated with the fungus at the start of the experiment but which subsequently developed disease through natural transmission. PMID:25861025
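
    A feature vector in the spirit of "local and global statistics plus depth" might look like the sketch below. The specific statistics, the normalization, and the function signature are illustrative assumptions, not the authors' feature set:

```python
import numpy as np

def patch_features(patch_t, plant_t, patch_d):
    """Toy per-patch feature vector:
      [0] local mean temperature of the patch
      [1] local temperature std of the patch
      [2] local mean relative to the whole-plant (global) statistics
      [3] mean depth of the patch (canopy-depth term)
    """
    patch_t = np.asarray(patch_t, float)
    plant_t = np.asarray(plant_t, float)
    local_mean, local_std = patch_t.mean(), patch_t.std()
    g_mean, g_std = plant_t.mean(), plant_t.std()
    return np.array([
        local_mean,
        local_std,
        (local_mean - g_mean) / (g_std + 1e-12),  # local-vs-global contrast
        np.asarray(patch_d, float).mean(),        # depth from stereo
    ])
```

    Feature [2] illustrates why the global statistics matter: a patch that is cool relative to its own plant can be flagged even when ambient conditions shift the absolute temperatures of the whole canopy.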

  14. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.

  15. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    NASA Astrophysics Data System (ADS)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high frequency subbands and one low frequency subband. To improve the fusion performance, we designed two new activity measures for fusion of the lowpass and highpass subbands. These measures are based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
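Since the NSCT requires a dedicated library, the decompose-fuse-reconstruct pattern described above can be illustrated with a single-level stand-in: a simple lowpass/highpass split, averaging for the low-frequency subband and a max-absolute activity measure for the high-frequency subband. This is a common baseline rule, not the authors' low-level-feature measures:

```python
import numpy as np

def box_lowpass(img, k=3):
    # simple moving-average lowpass, standing in for the NSCT lowpass subband
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(ir, vis):
    # one-level decompose / fuse / reconstruct
    low_ir, low_vis = box_lowpass(ir), box_lowpass(vis)
    high_ir, high_vis = ir - low_ir, vis - low_vis
    fused_low = 0.5 * (low_ir + low_vis)           # average the lowpass subbands
    mask = np.abs(high_ir) >= np.abs(high_vis)     # max-absolute activity measure
    fused_high = np.where(mask, high_ir, high_vis)
    return fused_low + fused_high                  # inverse of the split
```

A real MST fusion repeats this selection across many scales and directions; the activity measure is the main design choice, which is exactly what the paper varies.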

  16. Confocal non-line-of-sight imaging based on the light-cone transform

    NASA Astrophysics Data System (ADS)

    O’Toole, Matthew; Lindell, David B.; Wetzstein, Gordon

    2018-03-01

    How to image objects that are hidden from a camera’s view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.

  17. Confocal non-line-of-sight imaging based on the light-cone transform.

    PubMed

    O'Toole, Matthew; Lindell, David B; Wetzstein, Gordon

    2018-03-15

    How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.

  18. Measurement accuracy and perceived quality of imaging systems for the evaluation of periodontal structures.

    PubMed

    Baksi, B Güniz

    2008-07-01

    The aim of this study was to compare the subjective diagnostic quality of F-speed film images and original and enhanced storage phosphor plate (SPP) digital images for the visualization of periodontal ligament space (PLS) and periapical (PB) and alveolar crestal bone (CB) and to assess the accuracy of these image modalities for the measurement of alveolar bone levels. Standardized images of six dried mandibles were obtained with film and Digora SPPs. Six evaluators rated the visibility of anatomical structures using a three-point scale. Alveolar bone levels were measured from the coronal-most tip of the marginal bone to a reference point. Results were compared by using Friedman and Wilcoxon signed-ranks tests. The kappa (kappa) statistic was used to measure agreement among observers. The measurements were compared using repeated measures analysis of variance and Bonferroni tests (P = 0.05). A paired t test was used for comparison with true bone levels (P = 0.05). Enhanced SPP images were rated superior, followed by film and then the original SPP images, for the evaluation of anatomical structures. The value of kappa rose from fair to substantial after the enhancement of the SPP images. Film and enhanced SPP images provided alveolar bone lengths close to the true bone lengths. Enhancement of digital images provided better visibility and resulted in comparable accuracy to film images for the evaluation of periodontal structures.
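The kappa statistic used above to measure inter-observer agreement corrects observed agreement for agreement expected by chance. A generic two-rater Cohen's kappa (an illustrative sketch, unrelated to the study's actual rating data):

```python
def cohens_kappa(ratings_a, ratings_b):
    # Cohen's kappa for two raters over the same items:
    # (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(ratings_a)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Values near 0.2-0.4 are conventionally read as "fair" and 0.6-0.8 as "substantial", matching the improvement the abstract reports after image enhancement.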

  19. Sparse aperture 3D passive image sensing and recognition

    NASA Astrophysics Data System (ADS)

    Daneshpanah, Mehdi

    The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued in both academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines including medical imaging and diagnostics, industrial metrology, entertainment, robotics as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems in sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons in normal imaging conditions. At this level, the quantum-limited behavior of light will dominate and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing at quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible spectrum detectors. Thermal objects as cold as 250 K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible band, image forming optics and detector arrays.
The results suggest that one might not need to venture into exotic and expensive detector arrays and associated optics for sensing room-temperature thermal objects in complete darkness.
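A likelihood-based recognizer of the kind described, operating on Poisson-distributed photon counts, can be sketched as follows (the class names and expected rates are hypothetical; in the dissertation's setting, dark noise would be folded into the expected rates):

```python
import math

def poisson_loglik(counts, rates):
    # log P(counts | rates) for independent Poisson-distributed pixels;
    # dark noise would be folded into the expected rates.
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
               for k, lam in zip(counts, rates))

def classify(counts, class_rates):
    # pick the object class whose expected photon rates best explain the counts
    return max(class_rates, key=lambda c: poisson_loglik(counts, class_rates[c]))
```

With only tens of photons per object, this maximum-likelihood rule replaces intensity-based matching, which breaks down when shot noise dominates the signal.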

  20. Remote sensing of snow and ice

    NASA Technical Reports Server (NTRS)

    Rango, A.

    1979-01-01

    This paper reviews remote sensing of snow and ice, techniques for improved monitoring, and incorporation of the new data into forecasting and management systems. The snowcover interpretation of visible and infrared data from satellites, automated digital methods, radiative transfer modeling to calculate the solar reflectance of snow, and models using snowcover input data and elevation zones for calculating snowmelt are discussed. The use of visible and near infrared techniques for inferring snow properties, microwave monitoring of snowpack characteristics, use of Landsat images for collecting glacier data, monitoring of river ice with visible imagery from NOAA satellites, use of sequential imagery for tracking ice flow movement, and microwave studies of sea ice are described. Applications of snow and ice research to commercial use are examined, and it is concluded that a major problem to be solved is characterization of snow and ice in nature, since assigning of the correct properties to a real system to be modeled has been difficult.

  1. Application of point-diffraction interferometry to testing infrared imaging systems

    NASA Astrophysics Data System (ADS)

    Smartt, Raymond N.; Paez, Gonzalo

    2004-11-01

    Point-diffraction interferometry has found wide applications spanning much of the electromagnetic spectrum, including both near- and far-infrared wavelengths. Any telescopic, spectroscopic or other imaging system that converts an incident plane or spherical wavefront into an accessible point-like image can be tested at an intermediate image plane or at the principal image plane, in situ. Angular field performance can be similarly tested with inclined incident wavefronts. Any spatially coherent source can be used, but because of the available flux, it is most convenient to use a laser source. The simplicity of the test setup can allow testing of even large and complex fully-assembled systems. While purely reflective IR systems can be conveniently tested at visible wavelengths (apart from filters), catadioptric systems could be evaluated using an appropriate source and an IRPDI, with an imaging and recording system. PDI operating principles are briefly reviewed, and some more recent developments and interesting applications briefly discussed. Alternative approaches and recommended procedures for testing IR imaging systems, including the thermal IR, are suggested. An example of applying point-diffraction interferometry to testing a relatively low angular-resolution, optically complex IR telescopic system is presented.

  2. Development of 3D ultrasound needle guidance for high-dose-rate interstitial brachytherapy of gynaecological cancers

    NASA Astrophysics Data System (ADS)

    Rodgers, J.; Tessier, D.; D'Souza, D.; Leung, E.; Hajdok, G.; Fenster, A.

    2016-04-01

    High-dose-rate (HDR) interstitial brachytherapy is often included in standard-of-care for gynaecological cancers. Needles are currently inserted through a perineal template without any standard real-time imaging modality to assist needle guidance, causing physicians to rely on pre-operative imaging, clinical examination, and experience. While two-dimensional (2D) ultrasound (US) is sometimes used for real-time guidance, visualization of needle placement and depth is difficult and subject to variability and inaccuracy in 2D images. The close proximity to critical organs, in particular the rectum and bladder, can lead to serious complications. We have developed a three-dimensional (3D) transrectal US system and are investigating its use for intra-operative visualization of needle positions used in HDR gynaecological brachytherapy. As a proof-of-concept, four patients were imaged with post-insertion 3D US and x-ray CT. Using software developed in our laboratory, manual rigid registration of the two modalities was performed based on the perineal template's vaginal cylinder. The needle tip and a second point along the needle path were identified for each needle visible in US. The difference between modalities in the needle trajectory and needle tip position was calculated for each identified needle. For the 60 needles placed, the mean trajectory difference was 3.23 +/- 1.65° across the 53 visible needle paths and the mean difference in needle tip position was 3.89 +/- 1.92 mm across the 48 visible needle tips. Based on the preliminary results, 3D transrectal US shows potential for the development of a 3D US-based needle guidance system for interstitial gynaecological brachytherapy.
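The reported trajectory difference reduces to the angle between needle direction vectors (tip minus a second point along the path, one vector per imaging modality). A minimal sketch with hypothetical coordinates:

```python
import math

def trajectory_angle_deg(v1, v2):
    # angle between two needle direction vectors, e.g. the US-derived and
    # CT-derived directions of the same needle after rigid registration
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for rounding safety
    return math.degrees(math.acos(cosang))
```

Averaging this angle over all needles visible in both modalities yields the mean trajectory difference quoted in the abstract.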

  3. The use of near-infrared photography to image fired bullets and cartridge cases.

    PubMed

    Stein, Darrell; Yu, Jorn Chi Chung

    2013-09-01

    An imaging technique that is capable of reducing glare, reflection, and shadows can greatly assist the process of toolmarks comparison. In this work, a camera with near-infrared (near-IR) photographic capabilities was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. With visual comparisons, near-IR images and visible light images were comparable. The use of near-IR photography did not reveal more details and could not effectively eliminate reflections and glare associated with visible light photography. Near-IR photography showed little advantages in manual examination of fired evidence when it was compared with visible light (regular) photography. © 2013 American Academy of Forensic Sciences.

  4. Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.

    PubMed

    Barter, James D; Thompson, Harold R; Richardson, Christine L

    2003-03-20

    A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.

  5. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL language and the synchronous design method are utilized to perform a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that favorable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
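The three fusion rules named in the abstract are pixel-wise operations; in software form (a NumPy sketch for clarity, not the VHDL modules) they amount to:

```python
import numpy as np

def fuse_weighted(vis, ir, w=0.5):
    # gray-scale weighted averaging of the two registered frames
    return w * vis + (1 - w) * ir

def fuse_max(vis, ir):
    # maximum-selection rule: keep the brighter pixel of the pair
    return np.maximum(vis, ir)

def fuse_min(vis, ir):
    # minimum-selection rule: keep the darker pixel of the pair
    return np.minimum(vis, ir)
```

All three map naturally onto per-pixel FPGA datapaths, which is why they suit an RTL implementation: each output pixel depends only on the two input pixels at the same location.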

  6. Sea surface velocities from visible and infrared multispectral atmospheric mapping sensor imagery

    NASA Technical Reports Server (NTRS)

    Pope, P. A.; Emery, W. J.; Radebaugh, M.

    1992-01-01

    High resolution (100 m), sequential Multispectral Atmospheric Mapping Sensor (MAMS) images were used in a study to calculate advective surface velocities using the Maximum Cross Correlation (MCC) technique. Radiance and brightness temperature gradient magnitude images were formed from visible (0.48 microns) and infrared (11.12 microns) image pairs, respectively, of Chandeleur Sound, which is a shallow body of water northeast of the Mississippi delta, at 145546 GMT and 170701 GMT on 30 Mar. 1989. The gradient magnitude images enhanced the surface water feature boundaries, and a lower cutoff on the gradient magnitudes calculated allowed the undesirable sunglare and backscatter gradients in the visible images, and the water vapor absorption gradients in the infrared images, to be reduced in strength. Requiring high (greater than 0.4) maximum cross correlation coefficients and spatial coherence of the vector field aided in the selection of an optimal template size of 10 x 10 pixels (first image) and search limit of 20 pixels (second image) to use in the MCC technique. Use of these optimum input parameters to the MCC algorithm, and high correlation and spatial coherence filtering of the resulting velocity field from the MCC calculation yielded a clustered velocity distribution over the visible and infrared gradient images. The velocity field calculated from the visible gradient image pair agreed well with a subjective analysis of the motion, but the velocity field from the infrared gradient image pair did not. This was attributed to the changing shapes of the gradient features, their nonuniqueness, and large displacements relative to the mean distance between them. These problems implied a lower repeat time for the imagery was needed in order to improve the velocity field derived from gradient imagery. Suggestions are given for optimizing the repeat time of sequential imagery when using the MCC method for motion studies. 
Applying the MCC method to the infrared brightness temperature imagery yielded a velocity field which did agree with the subjective analysis of the motion and that derived from the visible gradient imagery. Differences between the visible and infrared derived velocities were 14.9 cm/s in speed and 56.7 degrees in direction. Both of these velocity fields also agreed well with the motion expected from considerations of the ocean bottom topography and wind and tidal forcing in the study area during the 2.175 hour time interval.
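The MCC technique itself can be sketched as an exhaustive normalized cross-correlation search: a template from the first image is slid over a search window in the second, and the displacement with the highest correlation coefficient is taken as the motion vector. This is an illustrative implementation (the abstract's 10 x 10 template and 20-pixel search limit are followed only loosely), not the authors' code:

```python
import numpy as np

def mcc_displacement(img1, img2, y, x, tmpl=10, search=20):
    # Maximum Cross Correlation: find the (dy, dx) shift of a template from
    # img1 that best matches img2, returning the shift and peak correlation.
    t = img1[y:y + tmpl, x:x + tmpl].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)
    best, best_dy, best_dx = -2.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + tmpl > img2.shape[0] or xx + tmpl > img2.shape[1]:
                continue
            c = img2[yy:yy + tmpl, xx:xx + tmpl].astype(float)
            c = (c - c.mean()) / (c.std() + 1e-12)
            r = float((t * c).mean())  # Pearson correlation of the two patches
            if r > best:
                best, best_dy, best_dx = r, dy, dx
    return best_dy, best_dx, best
```

Dividing the best displacement by the time between images gives the advective velocity at that template location; the abstract's correlation threshold (greater than 0.4) and spatial-coherence filtering then discard unreliable vectors.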

  7. Sensor performance and weather effects modeling for intelligent transportation systems (ITS) applications

    NASA Astrophysics Data System (ADS)

    Everson, Jeffrey H.; Kopala, Edward W.; Lazofson, Laurence E.; Choe, Howard C.; Pomerleau, Dean A.

    1995-01-01

    Optical sensors are used for several ITS applications, including lateral control of vehicles, traffic sign recognition, car following, autonomous vehicle navigation, and obstacle detection. This paper treats the performance assessment of a sensor/image processor used as part of an on-board countermeasure system to prevent single vehicle roadway departure crashes. Sufficient image contrast between objects of interest and backgrounds is an essential factor influencing overall system performance. Contrast is determined by material properties affecting reflected/radiated intensities, as well as weather and visibility conditions. This paper discusses the modeling of these parameters and characterizes the contrast performance effects due to reduced visibility. The analysis process first involves generation of inherent road/off-road contrasts, followed by weather effects as a contrast modification. The sensor is modeled as a charge coupled device (CCD), with variable parameters. The results of the sensor/weather modeling are used to predict the performance of an in-vehicle warning system under various levels of adverse weather. Software employed in this effort was previously developed for the U.S. Air Force Wright Laboratory to determine target/background detection and recognition ranges for different sensor systems operating under various mission scenarios.

  8. Endoscopic near-infrared dental imaging with indocyanine green: a pilot study.

    PubMed

    Li, Zhongqiang; Yao, Shaomian; Xu, Jian; Wu, Ye; Li, Chunhong; He, Ziying

    2018-06-01

    Current dental diagnosis, especially tooth abnormalities, relies largely on X-ray-based imaging, a technique that requires specialized skills and suffers from ionizing radiation. Here, we present a pilot study in rats of an efficient, ionizing-radiation-free and easy-to-use alternative for dental imaging. Postnatal rats at different ages were injected with indocyanine green and molars were imaged by a laboratory-designed endoscopic near-infrared (NIR) dental imaging system. The results indicate that the endoscopic NIR dental imaging can be used to observe the morphology of postnatal rat molars, especially at early postnatal stages when morphology of the molar is indistinguishable under visible conditions. A small abnormal cusp was observed and distinguished from the normal cusps by the NIR dental imaging system. Dental structures, such as unerupted molars, can be imaged as soon as 10 min after the injection of indocyanine green; imaging after 24 h shows improved imaging contrast. Overall, the endoscopic NIR fluorescence dental imaging system described here may be useful in dental research; this technique may serve as a safe, real-time imaging tool for dental diagnosis and treatment beyond experimental systems in the future. © 2018 New York Academy of Sciences.

  9. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded aperture processing is, for the first time, independent of the point spread function of the image diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system was overcome. Based on this theoretical study, a simulation of penumbral imaging and image reconstruction was carried out and provided fairly good results. In a visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was formed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  10. Optical imaging of hemoglobin oxygen saturation using a small number of spectral images for endoscopic application.

    PubMed

    Saito, Takaaki; Yamaguchi, Hiroshi

    2015-01-01

    Tissue hypoxia is associated with tumor and inflammatory diseases, and detection of hypoxia is potentially useful for their detailed diagnosis. An endoscope system that can optically observe hemoglobin oxygen saturation (StO2) would enable minimally invasive, real-time detection of lesion hypoxia in vivo. Currently, point measurement of tissue StO2 via endoscopy is possible using the commercial fiber-optic oximeter T-Stat, which is based on visible light spectroscopy at many wavelengths. For clinical use, however, imaging of StO2 is desirable to assess the distribution of tissue oxygenation around a lesion. Here, we describe our StO2 imaging technique based on a small number of wavelength ranges in the visible range. By assuming a homogeneous tissue, we demonstrated that tissue StO2 can be obtained independently of the scattering property and blood concentration of tissue using four spectral bands. We developed a prototype endoscope system and used it to observe tissue-simulating phantoms. The StO2 (%) values obtained using our technique agreed with those from the T-Stat within 10%. We also showed that tissue StO2 can be derived using three spectral bands if the scattering property is fixed at preliminarily measured values.
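The underlying idea of inverting band measurements for oxy- and deoxyhemoglobin concentrations can be illustrated with a two-wavelength Beer-Lambert sketch. The extinction coefficients below are invented placeholders (real values come from published hemoglobin absorption tables), and this is far simpler than the authors' four-band method, which also accounts for scattering:

```python
import numpy as np

# Hypothetical extinction coefficients [eps_HbO2, eps_Hb] at two visible
# wavelengths -- invented placeholders, not physiological values.
E = np.array([[2.0, 4.0],
              [3.0, 1.0]])

def sto2_percent(absorbances):
    # Beer-Lambert inversion: solve E @ [c_HbO2, c_Hb] = absorbances for the
    # two hemoglobin concentrations, then report the oxygenated fraction.
    c_hbo2, c_hb = np.linalg.solve(E, np.asarray(absorbances, dtype=float))
    return 100.0 * c_hbo2 / (c_hbo2 + c_hb)
```

Because StO2 is a ratio of concentrations, a common path-length factor cancels, which is part of why saturation can be estimated more robustly than absolute blood volume.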

  11. Simultaneous three wavelength imaging with a scanning laser ophthalmoscope.

    PubMed

    Reinholz, F; Ashman, R A; Eikelboom, R H

    1999-11-01

    Various imaging properties of scanning laser ophthalmoscopes (SLO), such as contrast or depth discrimination, are superior to those of the traditional photographic fundus camera. However, most SLOs are monochromatic, whereas photographic systems produce colour images, which inherently contain information over a broad wavelength range. An SLO system has been modified to allow simultaneous three-channel imaging. Laser light sources in the visible and infrared spectrum were concurrently launched into the system. Using different wavelength triads, digital fundus images were acquired at high frame rates. Favourable wavelength combinations were established and high contrast, true (red, green, blue) or false (red, green, infrared) colour images of the retina were recorded. The monochromatic frames which form the colour image exhibit improved distinctness of different retinal structures such as the nerve fibre layer, the blood vessels, and the choroid. A multi-channel SLO combines the advantageous imaging properties of a tunable, monochrome SLO with the benefits and convenience of colour ophthalmoscopy. The options to modify parameters such as wavelength, intensity, gain, beam profile, and aperture sizes independently for every channel lend a high degree of versatility to the system. Copyright 1999 Wiley-Liss, Inc.

  12. SU-E-I-23: Design and Clinical Application of External Marking Body in Multi- Mode Medical Images Registration and Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z; Gong, G

    2014-06-01

    Purpose: To design an external marking body (EMB) that could be visible on computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET) and single-photon emission computed tomography (SPECT) images and to investigate the use of the EMB for multiple medical images registration and fusion in the clinic. Methods: We generated a solution containing paramagnetic metal ions and iodide ions (a CT/MR dual-visible solution) that could be viewed on CT and MR images; a multi-mode image visible solution (MIVS) was obtained by mixing in radioactive nuclear material. The EMB was produced by filling a globular plastic theca (diameter: 3–6 mm) with the MIVS; the theca protects the enclosed solution. The EMBs were fixed on the patient surface and CT, MR, PET and SPECT scans were obtained. The feasibility of clinical application and the display and registration errors of the EMB among different image modalities were investigated. Results: The dual-visible solution was highly dense on CT images (HU>700). A high signal was also found in all MR scanning (T1, T2, STIR and FLAIR) images, and the signal was higher than subcutaneous fat. EMBs with radioactive nuclear material produced a radionuclide concentration area on PET and SPECT images, and the signal of the EMB was similar to or higher than tumor signals. The theca with MIVS was clearly visible on all the images without artifact, and its shape was round or oval with a sharp edge. The maximum diameter display error was 0.3 ± 0.2 mm on CT and MRI images, and 1.0 ± 0.3 mm on PET and SPECT images. In addition, the registration accuracy of the theca center among multi-mode images was less than 1 mm. Conclusion: The application of an EMB with MIVS improves the registration and fusion accuracy of multi-mode medical images. Furthermore, it has the potential to ameliorate disease diagnosis and treatment outcome.

  13. [Object-oriented aquatic vegetation extraction approach based on visible vegetation indices].

    PubMed

    Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning

    2016-05-01

    Using the estimation of scale parameters (ESP) image segmentation tool to determine the ideal image segmentation scale, the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built up a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of image classification using supervised classification was 53.7%, while the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of extracting the aquatic vegetation. The Kappa value of supervised classification was 0.4, while the Kappa value of the OBIA-based classification was 0.9. The experimental results demonstrated that the approach developed in this study, using visible vegetation indices derived from mini-UAV data together with the OBIA method, was feasible for extracting aquatic vegetation and could be applied in other physically similar areas.
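Visible vegetation indices of the kind used here are computed directly from the RGB channels. As a generic example (the Excess Green index, a widely used visible-band index; the paper's actual selected indices are not specified here), together with a simple threshold rule of the sort a decision tree might encode:

```python
import numpy as np

def excess_green(rgb):
    # ExG = 2g - r - b on chromaticity-normalised channels; a standard
    # visible-band vegetation index computable without near-infrared data.
    total = rgb.sum(axis=-1, keepdims=True)
    norm = rgb / np.where(total == 0, 1, total)
    return 2 * norm[..., 1] - norm[..., 0] - norm[..., 2]

def vegetation_mask(rgb, threshold=0.1):
    # simple per-pixel decision rule; OBIA would apply such rules to
    # segmented objects rather than individual pixels
    return excess_green(rgb) > threshold
```

In the OBIA workflow described, such index values are aggregated per segmented object and fed into the decision-tree and membership-function classification rather than thresholded pixel by pixel.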

  14. Mirage: a visible signature evaluation tool

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.; Meehan, Alaster J.; Shao, Q. T.; Richards, Noel

    2017-10-01

    This paper presents the Mirage visible signature evaluation tool, designed to provide a visible signature evaluation capability that appropriately reflects the effect of scene content on the detectability of targets, providing a capability to assess visible signatures in the context of the environment. Mirage is based on a parametric evaluation of input images, assessing the value of a range of image metrics and combining them using the boosted decision tree machine learning method to produce target detectability estimates. It has been developed using experimental data from photosimulation experiments, in which human observers search for vehicle targets in a variety of digital images. The images used for tool development are synthetic (computer generated) images, showing vehicles in many different scenes and exhibiting a wide variation in scene content. A preliminary validation has been performed using k-fold cross validation, where 90% of the image data set was used for training and 10% was used for testing. The results of the k-fold validation from 200 independent tests show a correlation between Mirage's predicted detection probability and the observed probability of detection of r(262) = 0.63, p < 0.0001 (Pearson correlation), with a mean absolute error (MAE) of 0.21.
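    The evaluation protocol above can be sketched without the detectability model itself: the following plain-Python outline shows the k-fold split and the two quoted metrics (Pearson r and mean absolute error); the boosted-decision-tree predictor is deliberately left out, so any numbers produced are not the paper's.

```python
# Sketch: k-fold evaluation scaffolding and the quoted metrics.
# The detectability model itself is not reproduced here.
import math, random

def k_fold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and yield (train, test) index lists for each fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def mae(y_true, y_pred):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

    Seeding `random.Random` makes a single split reproducible; in the paper's setting each of the 200 independent tests would redraw the split.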

  15. A Probability-Based Algorithm Using Image Sensors to Track the LED in a Vehicle Visible Light Communication System.

    PubMed

    Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik

    2017-02-10

    This paper proposes a probability-based algorithm to track the LED in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LEDs or extract the information embedded in those frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted considering incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
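    A minimal sketch of this kind of probabilistic fusion follows, assuming Gaussian likelihoods for pixel intensity and for distance from the optical-flow-predicted position, combined naive-Bayes style; the likelihood shapes, prior, and constants are illustrative assumptions, not the paper's exact model.

```python
# Sketch: fusing intensity and motion evidence for "pixel belongs to an
# LED". All constants here are illustrative assumptions.
import math

def intensity_likelihood(pixel, mean_led=230.0, sigma=20.0):
    """Gaussian likelihood that a pixel value comes from an LED."""
    return math.exp(-0.5 * ((pixel - mean_led) / sigma) ** 2)

def motion_likelihood(pos, predicted, sigma=5.0):
    """Likelihood based on distance from the flow-predicted position."""
    d2 = (pos[0] - predicted[0]) ** 2 + (pos[1] - predicted[1]) ** 2
    return math.exp(-0.5 * d2 / sigma ** 2)

def led_probability(pixel, pos, predicted, prior=0.01):
    """Two-hypothesis posterior P(LED | intensity, position) with a flat
    (assumed) background likelihood."""
    p_led = prior * intensity_likelihood(pixel) * motion_likelihood(pos, predicted)
    p_bg = (1 - prior) * 0.05  # flat background likelihood (assumed)
    return p_led / (p_led + p_bg)
```

    With these constants, a bright pixel at the predicted position scores far higher than a dim or displaced one; the absolute posterior values depend entirely on the assumed constants.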

  16. Multiple Aspects of the Southern California Wildfires as Seen by NASA's AVIRIS

    NASA Image and Video Library

    2017-12-15

    NASA's Airborne Visible Infrared Imaging Spectrometer instrument (AVIRIS), flying aboard a NASA Armstrong Flight Research Center high-altitude ER-2 aircraft, observed wildfires burning in Southern California on Dec. 5-7, 2017. AVIRIS is an imaging spectrometer that observes light in visible and infrared wavelengths, measuring the full spectrum of radiated energy. Unlike regular cameras with three colors, AVIRIS has 224 spectral channels, measuring contiguously from the visible through the shortwave infrared. Data from these flights, compared against measurements acquired earlier in the year, show many ways this one instrument can improve both our understanding of fire risk and the response to fires in progress. The top row in this image compilation shows pre-fire data acquired from June 2017. At top left is a visible-wavelength image similar to what our own eyes would see. The top middle image is a map of surface composition based on analyzing the full electromagnetic spectrum, revealing green vegetated areas and non-photosynthetic vegetation that is potential fuel as well as non-vegetated surfaces that may slow an advancing fire. The image at top right is a remote measurement of the water in tree canopies, a proxy for how much moisture is in the vegetation. The bottom row in the compilation shows data acquired from the Thomas fire in progress in December 2017. At bottom left is a visible wavelength image. The bottom middle image is an infrared image, with red at 2,250 nanometers showing fire energy, green at 1,650 nanometers showing the surface through the smoke, and blue at 1,000 nanometers showing the smoke itself. The image at bottom right is a fire temperature map using spectroscopic analysis to measure fire thermal emission recorded in the AVIRIS spectra. https://photojournal.jpl.nasa.gov/catalog/PIA22194

  17. Anatomic Pathways of Peripancreatic Fluid Draining to Mediastinum in Recurrent Acute Pancreatitis: Visible Human Project and CT Study

    PubMed Central

    Xu, Haotong; Zhang, Xiaoming; Christe, Andreas; Ebner, Lukas; Zhang, Shaoxiang; Luo, Zhulin; Wu, Yi; Li, Yin; Tian, Fuzhou

    2013-01-01

    Background In past reports, researchers have seldom attached importance to achievements in transforming digital anatomy to radiological diagnosis. However, investigators have been able to illustrate communication relationships in the retroperitoneal space by drawing potential routes in computerized tomography (CT) images or a virtual anatomical atlas. We established a new imaging anatomy research method for comparisons of the communication relationships of the retroperitoneal space in combination with the Visible Human Project and CT images. Specifically, the anatomic pathways of peripancreatic fluid extension to the mediastinum that may potentially transform into fistulas were studied. Methods We explored potential pathways to the mediastinum based on American and Chinese Visible Human Project datasets. These drainage pathways to the mediastinum were confirmed or corrected in CT images of 51 patients with recurrent acute pancreatitis in 2011. We also investigated whether additional routes to the mediastinum were displayed in CT images that were not in Visible Human Project images. Principal Findings All hypothesized routes to the mediastinum displayed in Visible Human Project images, except for routes from the retromesenteric plane to the bilateral retrorenal plane across the bilateral fascial trifurcation and further to the retrocrural space via the aortic hiatus, were confirmed in CT images. In addition, route 13 via the narrow space between the left costal and crural diaphragm into the retrocrural space was demonstrated for the first time in CT images. Conclusion This type of exploration model related to imaging anatomy may be used to support research on the communication relationships of abdominal spaces, mediastinal spaces, cervical fascial spaces and other areas of the body. PMID:23614005

  18. Making Heat Visible: Promoting Energy Conservation Behaviors Through Thermal Imaging.

    PubMed

    Goodhew, Julie; Pahl, Sabine; Auburn, Tim; Goodhew, Steve

    2015-12-01

    Householders play a role in energy conservation through the decisions they make about purchases and installations such as insulation, and through their habitual behavior. The present U.K. study investigated the effect of thermal imaging technology on energy conservation, by measuring the behavioral effect after householders viewed images of heat escaping from or cold air entering their homes. In Study 1 ( n = 43), householders who received a thermal image reduced their energy use at a 1-year follow-up, whereas householders who received a carbon footprint audit and a non-intervention control demonstrated no change. In Study 2 ( n = 87), householders were nearly 5 times more likely to install draught proofing measures after seeing a thermal image. The effect was especially pronounced for actions that addressed an issue visible in the images. Findings indicate that using thermal imaging to make heat loss visible can promote energy conservation.

  19. Real-time 3-D X-ray and gamma-ray viewer

    NASA Technical Reports Server (NTRS)

    Yin, L. I. (Inventor)

    1983-01-01

    A multi-pinhole aperture lead screen forms an equal plurality of invisible mini-images having dissimilar perspectives of an X-ray and gamma-ray emitting object (ABC) onto a nearby phosphor layer. This layer provides visible light mini-images directly into a visible light image intensifier. A viewing screen having an equal number of dissimilar perspective apertures distributed across its face in a geometric pattern identical to the lead screen, provides a viewer with a real, pseudoscopic image (A'B'C') of the object with full horizontal and vertical parallax. Alternatively, a third screen identical to the viewing screen and spaced apart from a second visible light image intensifier, may be positioned between the first image intensifier and the viewing screen, thereby providing the viewer with a virtual, orthoscopic image (A"B"C") of the object (ABC) with full horizontal and vertical parallax.

  20. Infrared and visible image fusion method based on saliency detection in sparse domain

    NASA Astrophysics Data System (ADS)

    Liu, C. H.; Qi, Y.; Ding, W. R.

    2017-06-01

    Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to perform the fusion process. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
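    The final weighted-fusion step can be sketched as a per-pixel convex combination, assuming the integrated saliency map S (values in [0, 1]) has already been computed; the JSR-based saliency detection itself is not reproduced here.

```python
# Sketch: saliency-weighted fusion of co-registered IR and visible images,
# represented as equal-sized 2-D lists of floats.

def fuse(ir, vis, saliency):
    """Per-pixel weighted average: salient IR content dominates where
    saliency is high, visible-band detail elsewhere."""
    return [
        [s * a + (1.0 - s) * b for a, b, s in zip(row_ir, row_vis, row_s)]
        for row_ir, row_vis, row_s in zip(ir, vis, saliency)
    ]
```

    For example, `fuse([[1.0]], [[0.0]], [[0.75]])` returns `[[0.75]]`: where saliency is high the infrared content dominates, elsewhere the visible-band detail is kept.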

  1. High-resolution light microscopy of nanoforms

    NASA Astrophysics Data System (ADS)

    Vodyanoy, Vitaly; Pustovyy, Oleg; Vainrub, Arnold

    2007-09-01

    We developed a high resolution light imaging system. Diffraction gratings with 100 nm width lines as well as less than 100 nm size features of different-shaped objects are clearly visible on a calibrated microscope test slide (Vainrub et al., Optics Letters, 2006, 31, 2855). The two-point resolution increase results from a known narrowing of the central diffraction peak for the annular aperture. Better visibility and advanced contrast of the smallest features in the image are due to enhancement of high spatial frequencies in the optical transfer function. The imaging system is portable, low energy, and battery operated. It has been adapted to use in both transmitting and reflecting light. It is particularly applicable for motile nanoform systems where structure and functions can be depicted in real time. We have isolated micrometer and submicrometer particles, termed proteons, from human and animal blood. Proteons form by reversible seeded aggregation of proteins around proteon nucleating centers (PNCs). PNCs are comprised of 1-2 nm metallic nanoclusters containing 40-300 atoms. Proteons are capable of spontaneously assembling into higher nanoform systems with structures of complicated topology. The arrangement of a complex proteon system mimics the structure of a small biological cell, with structures that imitate a membrane and a nucleolus or nuclei. Some of these nanoforms are motile. They interact and divide. Complex nanoform systems can spontaneously reduce to simple proteons. The physical properties of these nanoforms could shed some light on the properties of early life forms or forms at extreme conditions.

  2. Jupiter's Northern Hemisphere in a Methane Band (Time Set 2)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mosaic of Jupiter's northern hemisphere between 10 and 50 degrees latitude. Jupiter's atmospheric circulation is dominated by alternating eastward and westward jets from equatorial to polar latitudes. The direction and speed of these jets in part determine the color and texture of the clouds seen in this mosaic. Also visible are several other common Jovian cloud features, including large white ovals, bright spots, dark spots, interacting vortices, and turbulent chaotic systems. The north-south dimension of each of the two interacting vortices in the upper half of the mosaic is about 3500 kilometers. Light at 727 nanometers is moderately absorbed by atmospheric methane. This mosaic shows the features of Jupiter's main visible cloud deck and upper-tropospheric haze, with higher features enhanced in brightness over lower features.

    North is at the top. The images are projected on a sphere, with features being foreshortened towards the north. The smallest resolved features are tens of kilometers in size. These images were taken on April 3, 1997, at a range of 1.4 million kilometers by the Solid State Imaging system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  3. Artificial intelligence for geologic mapping with imaging spectrometers

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1993-01-01

    This project was a three year study at the Center for the Study of Earth from Space (CSES) within the Cooperative Institute for Research in Environmental Science (CIRES) at the University of Colorado, Boulder. The goal of this research was to develop an expert system to allow automated identification of geologic materials based on their spectral characteristics in imaging spectrometer data such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). This requirement was dictated by the volume of data produced by imaging spectrometers, which prohibits manual analysis. The research described is based on the development of automated techniques for analysis of imaging spectrometer data that emulate the analytical processes used by a human observer. The research tested the feasibility of such an approach, implemented an operational system, and tested the validity of the results for selected imaging spectrometer data sets.

  4. Space radar image of New York City

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This is a radar image of the New York City metropolitan area. The island of Manhattan appears in the center of the image. The green-colored rectangle on Manhattan is Central Park. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on October 10, 1994. North is toward the upper right. The area shown is 75.0 kilometers by 48.8 kilometers (46.5 miles by 30.2 miles). The image is centered at 40.7 degrees north latitude and 73.8 degrees west longitude. In general, light blue areas correspond to dense urban development, green areas to moderately vegetated zones and black areas to bodies of water. The Hudson River is the black strip that runs from the left edge to the upper right corner of the image. It separates New Jersey, in the upper left of the image, from New York. The Atlantic Ocean is at the bottom of the image where two barrier islands along the southern shore of Long Island are also visible. John F. Kennedy International Airport is visible above these islands. Long Island Sound, separating Long Island from Connecticut, is the dark area right of the center of the image. Many bridges are visible in the image, including the Verrazano Narrows, George Washington and Brooklyn bridges. The radar illumination is from the left of the image; this causes some urban zones to appear red because the streets are at a perpendicular angle to the radar pulse. The colors in this image were obtained using the following radar channels: red represents the L-band (horizontally transmitted and received); green represents the L-band (horizontally transmitted, vertically received); blue represents the C-band (horizontally transmitted, vertically received). Radar images like this one could be used as a tool for city planners and resource managers to map and monitor land use patterns.
The radar imaging systems can clearly detect the variety of landscapes in the area, as well as the density of urban development.

  5. A two layer chaotic encryption scheme of secure image transmission for DCT precoded OFDM-VLC transmission

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Chen, Fangni; Qiu, Weiwei; Chen, Shoufa; Ren, Dongxiao

    2018-03-01

    In this paper, a two-layer image encryption scheme for a discrete cosine transform (DCT) precoded orthogonal frequency division multiplexing (OFDM) visible light communication (VLC) system is proposed. In the proposed scheme, the transmitted image is first encrypted by a chaos scrambling sequence generated from a hybrid 4-D hyperchaotic map and the Arnold map in the upper layer. After that, the encrypted image is converted into a digital QAM modulation signal, which is re-encrypted by a chaos scrambling sequence based on the Arnold map in the physical layer to further enhance the security of the transmitted image. Moreover, DCT precoding is employed to improve the BER performance of the proposed system and reduce the PAPR of the OFDM signal. The BER and PAPR performances of the proposed system are evaluated by simulation experiments. The results show that the proposed two-layer chaos scrambling scheme achieves secure image transmission for image-based OFDM VLC. Furthermore, DCT precoding can reduce the PAPR and improve the BER performance of OFDM-based VLC.
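    One layer of such schemes, the Arnold (cat) map position scramble on an N × N image, can be sketched as below; the hybrid 4-D hyperchaotic layer is not reproduced, and in a real scheme the round count would be derived from the key.

```python
# Sketch: Arnold (cat) map pixel-position scrambling on an N x N image
# (a 2-D list), and its exact inverse.

def arnold_scramble(img, rounds=1):
    """(x, y) -> (x + y mod N, x + 2y mod N), applied `rounds` times."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_unscramble(img, rounds=1):
    """Inverse map: (x, y) -> (2x - y mod N, -x + y mod N)."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(y - x) % n] = img[x][y]
        img = out
    return img
```

    The inverse is exact because the map's matrix [[1, 1], [1, 2]] has determinant 1 modulo N. The map is periodic in N, which is why practical schemes combine it with a chaotic sequence rather than relying on it alone.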

  6. Double-image storage optimized by cross-phase modulation in a cold atomic system

    NASA Astrophysics Data System (ADS)

    Qiu, Tianhui; Xie, Min

    2017-09-01

    A tripod-type cold atomic system driven by double-probe fields and a coupling field is explored to store double images based on the electromagnetically induced transparency (EIT). During the storage time, an intensity-dependent signal field is applied further to extend the system with the fifth level involved, then the cross-phase modulation is introduced for coherently manipulating the stored images. Both analytical analysis and numerical simulation clearly demonstrate a tunable phase shift with low nonlinear absorption can be imprinted on the stored images, which effectively can improve the visibility of the reconstructed images. The phase shift and the energy retrieving rate of the probe fields are immune to the coupling intensity and the atomic optical density. The proposed scheme can easily be extended to the simultaneous storage of multiple images. This work may be exploited toward the end of EIT-based multiple-image storage devices for all-optical classical and quantum information processings.

  7. Cartographic potential of SPOT image data

    NASA Technical Reports Server (NTRS)

    Welch, R.

    1985-01-01

    In late 1985, the SPOT (Systeme Probatoire d'Observation de la Terre) satellite is to be launched by the Ariane rocket from French Guiana. This satellite will have two High Resolution Visible (HRV) line array sensor systems which are capable of providing monoscopic and stereoscopic coverage of the earth. Cartographic applications are related to the recording of stereo image data and the acquisition of 20-m data in a multispectral mode. One of the objectives of this study involves a comparison of the suitability of SPOT and TM image data for mapping urban land use/cover. Another objective is concerned with a preliminary assessment of the potential of SPOT image data for map revision when merged with conventional map sheets converted to raster formats.

  8. Absorption-enhanced imaging through scattering media using carbon black nano-particles: from visible to near infrared wavelengths

    NASA Astrophysics Data System (ADS)

    Tanzid, Mehbuba; Hogan, Nathaniel J.; Robatjazi, Hossein; Veeraraghavan, Ashok; Halas, Naomi J.

    2018-05-01

    Imaging through scattering media can be improved with the addition of absorbers, since multiply-scattered photons, with their longer path length, are absorbed with a higher probability than ballistic photons. The image resolution enhancement is substantially greater when imaging through isotropic scatterers than when imaging through an ensemble of strongly forward-scattering particles. However, since the angular scattering distribution is determined by the size of the scatterers with respect to the wavelength of incident light, particles that are forward scatterers at visible wavelengths can be isotropic scatterers at infrared (IR) wavelengths. Here, we show that substantial image resolution enhancement can be achieved in the near-infrared wavelength regime for particles that are forward scattering at visible wavelengths using carbon black nanoparticles as a broadband absorber. This observation provides a new strategy for image enhancement through scattering media: by selecting the appropriate wavelength range for imaging, in this case the near-IR, the addition of absorbers more effectively enhances the image resolution.
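    The underlying argument is Beer-Lambert attenuation: a photon's survival probability is exp(-mu_a L), so multiply-scattered photons with longer paths L are absorbed preferentially over ballistic ones. A sketch with illustrative numbers (not values from the paper):

```python
# Sketch: Beer-Lambert survival and the ballistic/scattered contrast gain
# from adding an absorber. Numbers are illustrative only.
import math

def survival(mu_a, path_length):
    """Beer-Lambert survival probability over a path of given length."""
    return math.exp(-mu_a * path_length)

def ballistic_contrast(mu_a, l_ballistic, l_scattered):
    """Ratio of ballistic to scattered survival: the contrast gain."""
    return survival(mu_a, l_ballistic) / survival(mu_a, l_scattered)
```

    With mu_a = 0.5 and path lengths 1 and 3, the contrast gain is exp(1) ≈ 2.72; lengthening the scattered path or increasing the absorber concentration raises the gain exponentially, which is why isotropic scattering (longer detours) benefits more than forward scattering.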

  9. Theoretical scheme of thermal-light many-ghost imaging by Nth-order intensity correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Yingchuan; College of Mathematics and Physics, University of South China, Hengyang 421001; Kuang Leman

    2011-05-15

    In this paper, we propose a theoretical scheme of many-ghost imaging in terms of Nth-order correlated thermal light. We obtain the Gaussian thin lens equations in the many-ghost imaging protocol. We show that it is possible to produce N-1 ghost images of an object at different places in a nonlocal fashion by means of a higher order correlated imaging process with an Nth-order correlated thermal source and correlation measurements. We investigate the visibility of the ghost images in the scheme and obtain the upper bounds of the visibility for the Nth-order correlated thermal-light ghost imaging. It is found that the visibility of the ghost images can be dramatically enhanced when the order of correlation becomes larger. It is pointed out that the many-ghost imaging phenomenon is an observable physical effect induced by higher order coherence or higher order correlations of optical fields.

  10. An imaging system based on laser optical feedback for fog vision applications

    NASA Astrophysics Data System (ADS)

    Belin, E.; Boucher, V.

    2008-08-01

    The Laboratoire Régional des Ponts et Chaussées d'Angers (LRPC of Angers) is currently studying the feasibility of applying an optical technique based on the principle of laser optical feedback to long-distance fog vision. The optical feedback setup allows the creation of images of road signs. To create artificial fog conditions we used a vibrating cell that produces a micro-spray of water according to the principle of acoustic cavitation. To scale the sensitivity of the system under reproducible conditions we also used optical densities linked to first-sight visibility distances. The current system produces, in a few seconds, 200 × 200 pixel images of a road sign seen through dense artificial fog.

  11. A high resolution IR/visible imaging system for the W7-X limiter

    NASA Astrophysics Data System (ADS)

    Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.

    2016-11-01

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and observed surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (˜1-4.5 MW/m2), during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot-spots in the IR are also seen to be bright in C-III light.
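    The calorimetric numbers quoted above admit a quick consistency check: a 0.6 MW, 6 s ECRH pulse injects 3.6 MJ, and at roughly 30 kJ equilibrated per tile the limiter tiles together absorb a modest fraction of it. The tile count per limiter used below is an assumed figure for illustration only, not from the abstract.

```python
# Sketch: back-of-envelope calorimetry from the quoted numbers.
# tiles_per_limiter is an assumed, illustrative value.

def pulse_energy(power_mw, duration_s):
    """Injected energy in MJ for a constant-power heating pulse."""
    return power_mw * duration_s

def limiter_fraction(kj_per_tile, tiles_per_limiter, n_limiters, pulse_mj):
    """Fraction of injected energy equilibrated into all limiter tiles."""
    total_mj = kj_per_tile * tiles_per_limiter * n_limiters / 1000.0
    return total_mj / pulse_mj
```

    With the quoted 0.6 MW and 6 s this gives 3.6 MJ injected; assuming, say, 9 tiles on each of the 5 limiters at 30 kJ/tile, the limiters would account for 1.35 MJ, i.e. about 38% of the pulse energy.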

  12. Image analysis with the computer vision system and the consumer test in evaluating the appearance of Lucanian dry sausage.

    PubMed

    Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Di Bello, Gerardo; Braghieri, Ada

    2014-01-01

    The object of the investigation was the appearance of Lucanian dry sausage, understood as color and visible fat ratio. The study was carried out on dry sausages produced in 10 different salami factories and seasoned for 18 days on average. We studied the effect of the raw material origin (5 producers used meat bought from the market and the other 5 used meat from pigs bred on their own farms) and of the salami factories or brands on meat color, fat color and visible fat ratio in dry sausages. The sausage slices were photographed and the images were analysed with the computer vision system to measure the changes in the colorimetric characteristics L*, a*, b*, hue and chroma and in the visible fat area ratio. The last parameter was assessed on the slice surface using image binarization. A consumer test was conducted to determine the relationship between the perception of visible fat on the sausage slice surface and acceptability and preference of this product. The consumers were asked to look carefully at the 6 sausage slices in a photo, minding the presence of fat, and to identify (a) the slices they considered unacceptable for consumption and (b) the slice they preferred. The results show that the color of the sausage lean part varies in relation to the raw material employed and to the producer or brand (P<0.001). Moreover, the sausage meat color is not uniform in some salami factories (P<0.05-0.001). In all salami factories the sausages show a high uniformity in fat color. The visible fat ratio of the sausage slices is higher (P<0.001) in the product from salami factories without a pig-breeding farm. The fat percentage is highly variable (P<0.001) among the sausages of each salami factory. On the whole, the product the consumers consider acceptable and are inclined to eat has a low fat percentage (P<0.001). Our consumers (about 70%) prefer slices which are leaner (P<0.001). Women, in particular, show a higher preference for the leanest slices (P<0.001). © 2013.
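    The binarization step for the visible fat ratio can be sketched as a simple threshold count on a grayscale slice image; the threshold value here is an assumption for illustration, not the authors' calibrated one.

```python
# Sketch: visible fat area ratio via image binarization. On a grayscale
# slice image, fat pixels are bright; the threshold is assumed.

def visible_fat_ratio(gray, threshold=180):
    """Fraction of pixels at or above threshold (fat) over all pixels."""
    total = fat = 0
    for row in gray:
        for px in row:
            total += 1
            if px >= threshold:
                fat += 1
    return fat / total
```

    For example, `visible_fat_ratio([[200, 100], [90, 220]])` returns 0.5: two of the four pixels clear the threshold and are counted as visible fat.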

  13. A high resolution IR/visible imaging system for the W7-X limiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G. A., E-mail: wurden@lanl.gov; Dunn, J. P.; Stephey, L. A.

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and observed surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (∼1–4.5 MW/m²), during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot-spots in the IR are also seen to be bright in C-III light.

  14. CRISM's First 'Targeted' Observation of Mars

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This shows the first site on Mars imaged by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) using its full-resolution hyperspectral capability, with a 'targeted image.'

    During a targeted image, CRISM's movable gimbal tracks a point on the surface, and slowly scans across it for about three minutes. The image is built up one line at a time, and each pixel in the image is measured in 544 colors covering 0.36-3.92 micrometers. During this time the Mars Reconnaissance Orbiter's range to the target starts at about 410 kilometers (250 miles), decreases to about 290 kilometers (190 miles) when the spacecraft makes its closest approach, and increases again to 410 kilometers at the end of the image. The change in geometry during image acquisition gives each CRISM targeted image a characteristic hourglass shape.

    This first targeted image was acquired at 1515 UTC (11:15 a.m. EDT) on Sept. 29, 2006, near 7.7 degrees south latitude, 270.5 degrees east longitude. Only minimal processing and map projection of the data have been done. At the center of the image the spatial resolution is as good as 18 meters (60 feet) per pixel. The three wavelengths shown here provide an approximate true color representation. The hourglass-shaped image covers an area about 13 kilometers (8 miles) north-south and, at the narrowest point, about 9 kilometers (5.6 miles) east-west. The upper left panel shows the image's regional context, on a mosaic from the Mars Odyssey spacecraft's Thermal Emission Imaging System (THEMIS) taken in infrared frequencies. This western part of the Valles Marineris canyon system is called Ius Chasma. The canyon system is about five kilometers (about three miles) deep and exposes ancient rocks from deep in the crust. The lower left panel shows local context, using a THEMIS visible-wavelengths image (THEMIS-VIS), which is comparable in resolution to CRISM data. Outcrops of light-toned layered rocks 1-2 kilometers (0.6-1.2 miles) across are set on a background of deeply eroded canyon floor, and sand dunes cover part of the site. The map-projected CRISM image, at right, shows that the site has bland color properties at visible wavelengths, and is mostly reddened by Mars' pervasive dust or by weathering products. Faint color banding is visible in the layered rocks, hinting at compositional differences between the layers.

    CRISM's mission: Find the spectral fingerprints of aqueous and hydrothermal deposits and map the geology, composition and stratigraphy of surface features. The instrument will also watch the seasonal variations in Martian dust and ice aerosols, and water content in surface materials -- leading to new understanding of the climate.

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad.

  15. The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1992-01-01

    The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.

  16. Can the RUVIS reflected UV imaging system visualize fingerprint corrosion on brass cartridge casings postfiring?

    PubMed

    Leintz, Rachel; Bond, John W

    2013-05-01

    Comparisons are made between the visualization of fingerprint corrosion ridge detail on fired brass cartridge casings, where fingerprint sweat was deposited prefiring, using both ultraviolet (UV) and visible (natural daylight) light sources. A reflected ultraviolet imaging system (RUVIS), normally used for visualizing latent fingerprint sweat deposits, is compared with optical interference and digital color mapping of visible light, the latter using apparatus constructed to easily enable selection of the optimum viewing angle. Results show that reflected UV, with a monochromatic UV source of 254 nm, was unable to visualize fingerprint ridge detail on any of 12 casings analyzed, whereas optical interference and digital color mapping using natural daylight yielded ridge detail on three casings. Reasons for the lack of success with RUVIS are discussed in terms of the variation in thickness of the thin film of metal oxide corrosion and absorption wavelengths for the corrosion products of brass. © 2013 American Academy of Forensic Sciences.

  17. Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera

    NASA Astrophysics Data System (ADS)

    Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.

    2016-08-01

    Visibility and clarity of remotely sensed images acquired by consumer grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands across the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search and rescue or similar applications which require high resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques, which effectively improves the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G and near-infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Processed data using our proposed method shows significant visibility improvements compared with other existing solutions.

  18. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
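    The quoted depth and time resolutions follow from the standard FMCW relations: range resolution is c/(2B) for a sweep bandwidth B, and a time resolution t corresponds to a free-space optical path of c·t. A minimal sketch of these relations (illustrative only; the function names are ours, not the authors'):

    ```python
    # Hypothetical helpers illustrating the standard FMCW resolution relations.
    C = 299_792_458.0  # speed of light, m/s

    def fmcw_range_resolution(bandwidth_hz: float) -> float:
        """Depth resolution of an FMCW receiver: c / (2 * B)."""
        return C / (2.0 * bandwidth_hz)

    def free_space_path(time_s: float) -> float:
        """Optical path length light travels in a given time."""
        return C * time_s

    # 8-12 GHz X-band sweep (B = 4 GHz) -> ~3.7 cm depth resolution
    print(fmcw_range_resolution(4e9))
    # 200 ps time resolution -> ~6 cm free-space path, as quoted above
    print(free_space_path(200e-12))
    ```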

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutler, Joshua; Cole, Jr., Edward I.; Smith, Norman F.

    This project investigated a recently patented Sandia technology known as visible light Laser Voltage Probing (LVP). In this effort we carefully prepared well understood and characterized samples for testing. These samples were then operated across a range of configurations to minimize the possibility of superposition of multiple photon carrier interactions as data was taken with conventional and visible light LVP systems. Data consisted of LVP waveforms and Laser Voltage Images (LVI). Visible light (633 nm) LVP data was compared against 1319 nm and 1064 nm conventional LVP data to better understand the similarities and differences in mechanisms for all wavelengths of light investigated. The full text can be obtained by reaching the project manager, Ed Cole, or the Cyber IA lead, Justin Ford.

  20. NASA Sees Severe Weather from Central to Eastern US

    NASA Image and Video Library

    2017-12-08

    Suomi NPP captured this true-color image of the storms over the Midwest and US South on April 30, 2017. The image comes from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument on @NASA.NPP Credit: NASA/NOAA/NPP/VIIRS NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  1. Iodine filter imaging system for subtraction angiography using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.

    1993-11-01

    A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single-energy, broad-bandwidth X-ray beam produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. By synchronizing with the timing of the movement of the iodine filter into and out of the X-ray beam, the two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube by an oscillating mirror. Both images are read out by electron-beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety-two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.
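    The subtraction step itself is conceptually simple: the iodine-filtered and unfiltered frames of each pair are log-subtracted so that signals common to both (soft tissue, bone) largely cancel while the iodine contrast remains. A hedged sketch of that generic K-edge subtraction idea (not the authors' actual processing chain):

    ```python
    import numpy as np

    def kedge_log_subtract(img_filtered, img_unfiltered, eps=1e-6):
        """Generic K-edge subtraction: log-subtract a two-energy image pair.

        Structures that attenuate both beams similarly cancel; iodinated
        vessels, whose attenuation changes strongly across the K-edge,
        remain. (Illustrative only; sign conventions and scaling vary.)
        """
        a = np.asarray(img_unfiltered, dtype=float)
        b = np.asarray(img_filtered, dtype=float)
        return np.log(a + eps) - np.log(b + eps)
    ```

    Applied at 15 pairs/s, such a per-pair subtraction yields the subtracted motion pictures described above.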

  2. A New Tool for Quality Control

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Diffracto, Ltd. is now offering a new product inspection system that allows detection of minute flaws previously difficult or impossible to observe. Called D-Sight, it represents a revolutionary technique for inspection of flat or curved surfaces to find such imperfections as dings, dents and waviness. The system amplifies defects, making them highly visible to simplify decision making as to corrective measures or to identify areas that need further study. CVA 3000 employs a camera, high intensity lamps and a special reflective screen to produce a D-Sight image of light reflected from a surface. The image is captured and stored in a computerized vision system, then analyzed by a computer program. A live image of the surface is projected onto a video display and compared with a stored master image to identify imperfections. Localized defects measuring less than 1/1000 of an inch are readily detected.

  3. Radiation-Responsive Esculin-Derived Molecular Gels as Signal Enhancers for Optical Imaging.

    PubMed

    Silverman, Julian R; Zhang, Qize; Pramanik, Nabendu B; Samateh, Malick; Shaffer, Travis M; Sagiri, Sai Sateesh; Grimm, Jan; John, George

    2017-12-13

    Recent interest in detecting visible photons that emanate from interactions of ionizing radiation (IR) with matter has spurred the development of multifunctional materials that amplify the optical signal from radiotracers. Tailored stimuli-responsive systems may be paired with diagnostic radionuclides to improve surgical guidance and aid in detecting therapeutic radionuclides otherwise difficult to image with conventional nuclear medicine approaches. Because light emanating from these interactions is typically low in intensity and blue-weighted (i.e., greatly scattered and absorbed in vivo), it is imperative to increase or shift the photon flux for improved detection. To address this challenge, a gel that is both scintillating and fluorescent is used to enhance the optical photon output in image mapping for cancer imaging. Tailoring biobased materials to synthesize thixotropic thermoreversible hydrogels (a minimum gelation concentration of 0.12 wt %) offers image-aiding systems which are not only functional but also potentially economical, safe, and environmentally friendly. These robust gels (0.66 wt %, ∼900 Pa) respond predictably to different types of IRs including β- and γ-emitters, resulting in a doubling of the detectable photon flux from these emitters. The synthesis and formulation of such a gel are explored with a focus on its physicochemical and mechanical properties, before being utilized to enhance the visible photon flux from a panel of radionuclides as detected. The possibility of developing a topical cream of this gel makes this system an attractive potential alternative to current techniques, and the multifunctionality of the gelator may serve to inspire future next-generation materials.

  4. Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulin, Kenneth; Urie, Marcia M., E-mail: murie@qarc.or; Cherlow, Joel M.

    2010-08-01

    Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.
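    The error metric described, each submission's distance from the consensus (mean) target center, can be sketched as follows (our own illustrative code, not the benchmark's analysis software):

    ```python
    import numpy as np

    def registration_errors(centers_mm):
        """The mean of all submitted target centers is taken as truth;
        each submission's error is its Euclidean distance to that mean."""
        centers = np.asarray(centers_mm, dtype=float)  # shape (n_submissions, 3)
        consensus = centers.mean(axis=0)
        errors = np.linalg.norm(centers - consensus, axis=1)
        return consensus, errors.mean(), errors.std(ddof=1)
    ```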

  5. Review on short-wavelength infrared laser gated-viewing at Fraunhofer IOSB

    NASA Astrophysics Data System (ADS)

    Göhler, Benjamin; Lutzmann, Peter

    2017-03-01

    This paper reviews the work that has been done at Fraunhofer IOSB (and its predecessor institutes) over the past ten years in the area of laser gated-viewing (GV) in the short-wavelength infrared (SWIR) band. Experimental system demonstrators in various configurations have been built to show the potential for different applications and to investigate specific topics. The wavelength of the pulsed illumination laser is 1.57 μm and lies in the invisible, retina-safe region, allowing much higher eye-safe pulse energies than wavelengths in the visible or near-infrared band. All of the systems consist of gated Intevac LIVAR® cameras based on EBCCD/EBCMOS detectors sensitive in the SWIR band. This review comprises military and civilian applications in the maritime and land domains, in particular vision enhancement in bad visibility, long-range applications, silhouette imaging, 3-D imaging by sliding gates and the slope method, bistatic GV imaging, and looking through windows. In addition, theoretical studies that were conducted, e.g., estimating 3-D accuracy or modeling range performance, are presented. Finally, an outlook on future work in the area of SWIR laser GV at Fraunhofer IOSB is given.

  6. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiative physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The resulting dynamic scenes are realistic and run in real time, at frame rates up to 100 Hz. By saving all of the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with theoretical analysis.

  7. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    PubMed

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. Manipulating this visibility improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering viewpoint. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume-rendered medical images have been a primary beneficiary of the VH, given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images that have large intensity ranges and volume dimensions and thus require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), which enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying values of K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also performed better than the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
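    The adaptive binning idea, clustering voxel intensities so that a few bins preserve the shape of the full intensity distribution, can be illustrated with a plain 1-D K-means (a simplified CPU stand-in for the paper's GPU implementation):

    ```python
    import numpy as np

    def adaptive_bins(intensities, k, iters=20, seed=0):
        """Group voxel intensities into k adaptive bins via 1-D k-means.

        Illustrative sketch only; the paper's method runs on the GPU
        using the MRT extension.
        """
        x = np.asarray(intensities, dtype=float).ravel()
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, size=k, replace=False)  # init from the data
        labels = np.zeros(x.size, dtype=int)
        for _ in range(iters):
            # Assign each voxel to its nearest bin center ...
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            # ... then move each center to the mean of its members.
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean()
        return centers, labels
    ```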

  8. Spectrophotometric Characterisation of the Trojan Asteroids (624) Hektor and (911) Agamemnon

    NASA Astrophysics Data System (ADS)

    Doressoundiram, A.; Bott, N.; Perna, D.

    2016-12-01

    We obtained spectrophotometric observations of (624) Hektor and (911) Agamemnon, two large Trojan asteroids, in order to (1) better understand the composition of their surfaces by means of their visible and infrared spectra, and (2) possibly detect weak cometary activity by means of their images in the visible. We obtained data at different rotational phases to probe surface variegations. We found that the visible and infrared spectra are very similar to each other. This indicates a relatively homogeneous surface for each asteroid, but it does not exclude the presence of localized inhomogeneities. Computation of a high spectral slope confirmed their classification as D-type asteroids. No aqueous alteration absorption band was found in the visible spectra of either Trojan asteroid. This can be interpreted in two different ways: either no liquid water ever flowed on their surfaces, or the surfaces are covered with a crust that masks the presence of hydrated minerals. We used a radiative transfer model to investigate the surface composition of these icy and primitive outer solar system bodies. We suggest models composed of mixtures of organic compounds and minerals, and derive lower limits for water ice. Lastly, the analysis of the images of both Trojan asteroids did not reveal any cometary activity.
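    The D-type classification rests on the visible spectral slope, conventionally the normalized linear gradient of the reflectance spectrum. A hedged sketch of that standard computation (our own helper, not the authors' pipeline; conventions for the normalization wavelength vary):

    ```python
    import numpy as np

    def spectral_slope(wavelengths_nm, reflectance, ref_nm=550.0):
        """Linear spectral slope S' in % per 100 nm, normalized to the
        fitted reflectance at ref_nm (a common, but not universal, choice)."""
        slope, intercept = np.polyfit(wavelengths_nm, reflectance, 1)
        r_ref = slope * ref_nm + intercept
        return slope / r_ref * 1e4  # (1/nm) -> % per 100 nm
    ```

    D-type asteroids show steep red slopes, i.e. large positive values of S'.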

  9. Hubble Spies Spiral Galaxy

    NASA Image and Video Library

    2017-12-08

    Spiral galaxy NGC 3274 is a relatively faint galaxy located over 20 million light-years away in the constellation of Leo (The Lion). This NASA/ESA Hubble Space Telescope image comes courtesy of Hubble's Wide Field Camera 3 (WFC3), whose multi-color vision allows astronomers to study a wide range of targets, from nearby star formation to galaxies in the most remote regions of the cosmos. This image combines observations gathered in five different filters, bringing together ultraviolet, visible and infrared light to show off NGC 3274 in all its glory. NGC 3274 was discovered by William Herschel in 1783. The galaxy PGC 213714 is also visible on the upper right of the frame, located much farther away from Earth. Image Credit: ESA/Hubble & NASA, D. Calzetti

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, K; Nachabe, R; Racadio, J

    Purpose: To define an alternative to antiscatter grid (ASG) removal in angiographic systems which achieves similar patient dose reduction as ASG removal without degrading image quality during pediatric imaging. Methods: This study was approved by the local institution animal care and use committee (IACUC). Six different digital subtraction angiography settings were evaluated that altered the mAs (100, 70, 50, 35, 25, 17.5% of reference mAs) with and without ASG. Three pigs of 5, 15, and 20 kg (9, 15, and 17 cm abdominal thickness; smaller than a newborn, average 3 yr old, and average 10 year old human abdomen respectively) were imaged using the six dose settings with and without ASG. Image quality was defined as the order of vessel branch that is visible relative to the injected vessel. Five interventional radiologists evaluated all images. Image quality and patient dose were statistically compared using analysis of variance and receiver operating curve (ROC) analysis to define the preferred dose level and use of ASG for a minimum visibility of 2nd or 3rd order branches of vessel visibility. Results: ASG grid removal reduces dose by 26% with reduced image quality. Only with the ASG present can 3rd order branches be visualized; 100% mAs is required for the 9 cm pig while 70% mAs is adequate for the larger pigs. 2nd order branches can be visualized with ASG at 17.5% mAs for all three pig sizes. Without the ASG, 50%, 35% and 35% mAs is required for smallest to largest pig. Conclusion: Removing the ASG reduces patient dose and image quality. Image quality can be improved with the ASG present while further reducing patient dose if an optimized radiographic technique is used. Rami Nachabe is an employee of Philips Health Care; Keith Strauss is a paid consultant of Philips Health Care.

  11. GRMHD Simulations of Visibility Amplitude Variability for Event Horizon Telescope Images of Sgr A*

    NASA Astrophysics Data System (ADS)

    Medeiros, Lia; Chan, Chi-kwan; Özel, Feryal; Psaltis, Dimitrios; Kim, Junhan; Marrone, Daniel P.; Sa¸dowski, Aleksander

    2018-04-01

    The Event Horizon Telescope will generate horizon scale images of the black hole in the center of the Milky Way, Sgr A*. Image reconstruction using interferometric visibilities rests on the assumption of a stationary image. We explore the limitations of this assumption using high-cadence disk- and jet-dominated GRMHD simulations of Sgr A*. We also employ analytic models that capture the basic characteristics of the images to understand the origin of the variability in the simulated visibility amplitudes. We find that, in all simulations, the visibility amplitudes for baselines oriented parallel and perpendicular to the spin axis of the black hole follow general trends that do not depend strongly on accretion-flow properties. This suggests that fitting Event Horizon Telescope observations with simple geometric models may lead to a reasonably accurate determination of the orientation of the black hole on the plane of the sky. However, in the disk-dominated models, the locations and depths of the minima in the visibility amplitudes are highly variable and are not related simply to the size of the black hole shadow. This suggests that using time-independent models to infer additional black hole parameters, such as the shadow size or the spin magnitude, will be severely affected by the variability of the accretion flow.
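    The quantity being tracked here, the visibility amplitude, is the magnitude of the Fourier transform of the sky image, which each interferometric baseline samples at one point. A minimal illustration of that relationship (not the EHT or GRMHD analysis pipeline):

    ```python
    import numpy as np

    def visibility_amplitude(image):
        """|2-D Fourier transform| of a sky image; an interferometer's
        baselines sample points of this plane (van Cittert-Zernike theorem)."""
        vis = np.fft.fftshift(np.fft.fft2(np.asarray(image, dtype=float)))
        return np.abs(vis)
    ```

    For an unresolved point source the amplitude is flat at the total flux; extended structure such as a shadow ring introduces minima whose locations, as the abstract notes, can vary with the accretion flow.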

  12. 4D cone-beam CT imaging for guidance in radiation therapy: setup verification by use of implanted fiducial markers

    NASA Astrophysics Data System (ADS)

    Jin, Peng; van Wieringen, Niek; Hulshof, Maarten C. C. M.; Bel, Arjan; Alderliesten, Tanja

    2016-03-01

    The use of 4D cone-beam computed tomography (CBCT) and fiducial markers for guidance during radiation therapy of mobile tumors is challenging due to the trade-off between image quality, imaging dose, and scanning time. We aimed to investigate the visibility of markers and the feasibility of marker-based 4D registration and manual respiration-induced marker motion quantification for different CBCT acquisition settings. A dynamic thorax phantom and a patient with implanted gold markers were included. For both the phantom and patient, the peak-to-peak amplitude of marker motion in the cranial-caudal direction ranged from 5.3 to 14.0 mm, which did not affect the marker visibility and the associated marker-based registration feasibility. While using a medium field of view (FOV) and the same total imaging dose as is applied for 3D CBCT scanning in our clinic, it was feasible to attain an improved marker visibility by reducing the imaging dose per projection and increasing the number of projection images. For a small FOV with a shorter rotation arc but similar total imaging dose, streak artifacts were reduced due to using a smaller sampling angle. Additionally, the use of a small FOV allowed reducing total imaging dose and scanning time (~2.5 min) without losing the marker visibility. In conclusion, by using 4D CBCT with identical or lower imaging dose and a reduced gantry speed, it is feasible to attain sufficient marker visibility for marker-based 4D setup verification. Moreover, regardless of the settings, manual marker motion quantification can achieve a high accuracy with the error <1.2 mm.

  13. Asphalted Road Temperature Variations Due to Wind Turbine Cast Shadows

    PubMed Central

    Arnay, Rafael; Acosta, Leopoldo; Sigut, Marta; Toledo, Jonay

    2009-01-01

    The contribution of this paper is a technique that in certain circumstances allows one to avoid the removal of dynamic shadows in the visible spectrum making use of images in the infrared spectrum. This technique emerged from a real problem concerning the autonomous navigation of a vehicle in a wind farm. In this environment, the dynamic shadows cast by the wind turbines' blades make it necessary to include a shadows removal stage in the preprocessing of the visible spectrum images in order to avoid the shadows being misclassified as obstacles. In the thermal images, dynamic shadows completely disappear, something that does not always occur in the visible spectrum, even when the preprocessing is executed. Thus, a fusion on thermal and visible bands is performed. PMID:22291541

  14. Scientific Rationale for the Canadian NGST Visible Imager

    NASA Astrophysics Data System (ADS)

    Drissen, L.; Hickson, P.; Hutchings, J.; Lilly, S.; Murowinski, R.; Stetson, P.

    1999-05-01

    While NGST will be optimized for observing in the infrared, it also offers tremendous scientific opportunities in the visible regime (0.5-1 μm). This poster presents some of the science drivers for a visible imager on board NGST. Potential targets include: young starbursts and AGNs at high redshift (z=3-8); gravitational lensing by clusters of galaxies; white dwarfs in the Galactic halo and globular clusters; RR Lyrae stars in the M81 group; the lower end of the IMF in Local Group starburst clusters; low surface brightness galaxies; the environment of nearby (z<0.2) supernovae; and trans-neptunian objects. We also briefly describe the current status of the studies on the Canadian NGST Visible Imager, which is one of three instruments proposed as a Canadian contribution to NGST.

  15. Current instrument status of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)

    NASA Technical Reports Server (NTRS)

    Eastwood, Michael L.; Sarture, Charles M.; Chrien, Thomas G.; Green, Robert O.; Porter, Wallace M.

    1991-01-01

    An upgraded version of AVIRIS, an airborne imaging spectrometer based on a whiskbroom-type scanner coupled via optical fibers to four dispersive spectrometers, that has been in operation since 1987 is described. Emphasis is placed on specific AVIRIS subsystems including foreoptics, fiber optics, and an in-flight reference source; spectrometers and detector dewars; a scan drive mechanism; a signal chain; digital electronics; a tape recorder; calibration systems; and ground support requirements.

  16. NASA High Contrast Imaging for Exoplanets

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2008-01-01

    Described is NASA's ongoing program for the detection and characterization of exosolar planets via high-contrast imaging. Some of the more promising proposed techniques under assessment may enable detection of life outside our solar system. In visible light, terrestrial planets are approximately 10^-10 times dimmer than the parent star. Issues such as diffraction, scatter, wavefront, amplitude and polarization all contribute to a reduction in contrast. An overview of the techniques will be discussed.

  17. Fully Integrated Optical Spectrometer in Visible and Near-IR in CMOS.

    PubMed

    Hong, Lingyu; Sengupta, Kaushik

    2017-12-01

    Optical spectrometry in the visible and near-infrared range has a wide range of applications in healthcare, sensing, imaging, and diagnostics. This paper presents the first fully integrated optical spectrometer in a standard bulk CMOS process without custom fabrication, postprocessing, or any external passive optical structure such as lenses, gratings, collimators, or mirrors. The architecture exploits the metal interconnect layers available in CMOS processes, with subwavelength feature sizes, to guide, manipulate, control, and diffract light; integrated photodetectors and read-out circuitry to detect the dispersed light; and back-end signal processing for robust spectral estimation. The chip, realized in a bulk 65-nm low-power CMOS process, measures 0.64 mm x 0.56 mm in active area, and achieves 1.4 nm peak detection accuracy for continuous-wave excitations between 500 and 830 nm. This paper demonstrates the ability to use these metal-optic nanostructures to miniaturize complex optical instrumentation into a new class of optics-free CMOS-based systems-on-chip in the visible and near-IR for various sensing and imaging applications.
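
    The back-end spectral-estimation step described above can be sketched as a regularized least-squares inversion. The responsivity matrix, wavelength grid, and noise level below are illustrative assumptions (the real chip's per-detector spectral responses are measured, not random); the sketch only shows how a spectrum can be recovered from many broadband detector readings.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(500, 830, 64)   # nm, matching the reported range
n_detectors = 128

# Hypothetical responsivity matrix: each detector sees a different
# broadband mix of wavelengths (a stand-in for the measured spectral
# responses of the metal-optic nanostructures).
R = rng.random((n_detectors, len(wavelengths)))

# Ground-truth narrow-line spectrum and the detector readings it produces.
true_spectrum = np.exp(-0.5 * ((wavelengths - 650.0) / 2.0) ** 2)
y = R @ true_spectrum + rng.normal(0, 1e-3, n_detectors)

# Tikhonov-regularized least squares: argmin ||R s - y||^2 + lam ||s||^2
lam = 1e-3
s_hat = np.linalg.solve(R.T @ R + lam * np.eye(len(wavelengths)), R.T @ y)

peak_nm = float(wavelengths[np.argmax(s_hat)])  # estimated line position
```

    A robust estimator as in the paper would add constraints (e.g. nonnegativity) on top of this basic inversion.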

  18. OCAPI: a multidirectional multichannel polarizing imager

    NASA Astrophysics Data System (ADS)

    Le Naour, C.; Eichen, G.; Léon, J. F.

    2017-11-01

    OCAPI (Optical Carbonaceous and anthropogenic Aerosols Pathfinder Instrument) is an imager dedicated to the observation of the spectral, directional and polarized signatures of the solar radiance reflected by the Earth-atmosphere system. The measurements are used to study air quality and pollution by tracking aerosol quantity, types and circulation at various scales in the visible range. The main characteristics of OCAPI are a 110° along-track and cross-track field of view and eight polarized channels distributed between 320 and 2130 nm. The resolution is 4 x 4 km2 in the visible and shortwave infrared (SWIR) range, and 10 x 10 km2 in the UV. The instrumental concept is derived from POLDER and PARASOL, with additional channels in the UV and SWIR to better determine aerosol properties and constrain the Earth surface and cloud contributions to the detected signal. It is based on three wide field-of-view telecentric optics (UV, visible and SWIR), a rotating wheel bearing spectral and polarized filters, and two-dimensional detector arrays at the focal plane of the optics. The instrument requirements, concept and budgets are presented.

  19. Geometrical calibration of an AOTF hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Špiclin, Žiga; Katrašnik, Jaka; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Optical aberrations present an important problem in optical measurements. Geometrical calibration of an imaging system is therefore of the utmost importance for achieving accurate optical measurements. In hyper-spectral imaging systems, the problem of optical aberrations is even more pronounced because the aberrations are wavelength dependent. Geometrical calibration must therefore be performed over the entire spectral range of the hyper-spectral imaging system, which is usually far greater than that of the visible light spectrum. This problem is especially adverse in AOTF (Acousto-Optic Tunable Filter) hyper-spectral imaging systems, as the diffraction of light in AOTF filters depends on both wavelength and angle of incidence. Geometrical calibration of the hyper-spectral imaging system was performed with a stable caliber of known dimensions, which was imaged at different wavelengths over the entire spectral range. The acquired images were then automatically registered to the caliber model by both parametric and nonparametric (B-spline-based) transformations, obtained by optimizing the normalized correlation coefficient. The calibration method was tested on an AOTF hyper-spectral imaging system in the near-infrared spectral range. The results indicated substantial wavelength-dependent optical aberration that is especially pronounced toward the infrared part of the spectrum. The calibration method was able to accurately characterize the aberrations and produce transformations for efficient sub-pixel geometrical calibration over the entire spectral range, finally yielding better spatial resolution of the hyper-spectral imaging system.
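
    The registration step can be illustrated with a toy example: a synthetic calibration grid is shifted by an unknown aberration-induced displacement, and the shift is recovered by maximizing the normalized correlation coefficient, the similarity measure named above. The grid pattern, shift, and search range are illustrative assumptions; the actual method fits full parametric and B-spline transformations rather than a pure translation.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Synthetic "caliber" model: a grid of dots of known spacing.
model = np.zeros((64, 64))
model[8::16, 8::16] = 1.0

# Simulated acquisition at one wavelength: the same grid shifted by the
# (unknown) aberration-induced displacement we want to recover.
true_shift = (3, 5)
acquired = np.roll(model, true_shift, axis=(0, 1))

# Exhaustive search over integer translations, keeping the best NCC.
best = max(
    ((dy, dx) for dy in range(-8, 9) for dx in range(-8, 9)),
    key=lambda s: ncc(np.roll(acquired, (-s[0], -s[1]), axis=(0, 1)), model),
)
```

    In the paper this similarity measure drives a continuous optimizer over transformation parameters, giving sub-pixel accuracy rather than the integer shifts searched here.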

  20. Investigation of Joint Visibility Between SAR and Optical Images of Urban Environments

    NASA Astrophysics Data System (ADS)

    Hughes, L. H.; Auer, S.; Schmitt, M.

    2018-05-01

    In this paper, we present a work-flow to investigate the joint visibility between very-high-resolution SAR and optical images of urban scenes. For this task, we extend the simulation framework SimGeoI to enable simulation of individual pixels rather than complete images. Using the extended SimGeoI simulator, we carry out a case study using a TerraSAR-X staring spotlight image and a WorldView-2 panchromatic image acquired over the city of Munich, Germany. The results of this study indicate that about 55 % of the scene is visible in both images and is thus suitable for matching and data fusion endeavours, while about 25 % of the scene is affected by either radar shadow or optical occlusion. Taking the image acquisition parameters into account, our findings can support the definition of upper bounds for image fusion tasks, as well as help to improve acquisition planning with respect to different application goals.

  1. Use of near-infrared video recording system for the detection of freeze damaged citrus leaves

    NASA Technical Reports Server (NTRS)

    Escobar, D. E.; Bowen, R. L.; Gausman, H. W.; Cooper, G. (Principal Investigator)

    1982-01-01

    A video recording system with a visible-light-blocking filter to give sensitivity in the 0.78 μm to 1.1 μm waveband detected freeze-damaged citrus leaves rapidly. With this technique, the time to analyze images can be decreased from about one day for conventional photography to less than one hour for video recording.

  2. Nonlinear imaging (NIM) of barely visible impact damage (BVID) in composite panels using a semi and full air-coupled linear and nonlinear ultrasound technique

    NASA Astrophysics Data System (ADS)

    Malfense Fierro, Gian Piero; Meo, Michele

    2018-03-01

    Two non-contact methods were evaluated to address the reliability and reproducibility concerns affecting industry adoption of nonlinear ultrasound techniques for non-destructive testing and evaluation (NDT/E) purposes. Semi and fully air-coupled linear and nonlinear ultrasound methods were evaluated by testing for barely visible impact damage (BVID) in composite materials. Air-coupled systems provide various advantages over contact-driven systems, such as ease of inspection, no contact and lubrication issues, and great potential for evaluating non-uniform geometries. The semi air-coupled setup used a suction-attached piezoelectric transducer to excite the sample and an array of low-cost microphones to capture the signal over the inspection area, while the second method was purely air-coupled, using an air-coupled transducer both to excite the structure and to capture the signal. One of the issues facing nonlinear and air-coupled systems in general is transferring enough energy to stimulate wave propagation and, in the case of nonlinear ultrasound, to activate damage regions. Both methods provided nonlinear imaging (NIM) of damage regions using a sweep excitation methodology, with the semi air-coupled system providing clearer results.
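
    The nonlinear signature these methods look for can be sketched with a toy signal model: a damaged region (clapping or rubbing crack faces) is assumed to respond quadratically to a single-tone excitation, generating a second harmonic that a linear, undamaged response lacks. The tone frequency, sample rate, and nonlinearity coefficient below are illustrative, not values from the paper.

```python
import numpy as np

fs = 200_000                     # sample rate, Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)
f0 = 20_000                      # excitation tone, Hz (illustrative)

excitation = np.sin(2 * np.pi * f0 * t)

# Toy material responses: an undamaged region responds linearly, while a
# damaged region adds a quadratic term generating a harmonic at 2*f0.
linear_resp = excitation
nonlinear_resp = excitation + 0.05 * excitation ** 2

def harmonic_ratio(signal):
    """Ratio of spectral magnitude at 2*f0 to the magnitude at f0."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    a1 = spectrum[np.argmin(np.abs(freqs - f0))]
    a2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
    return float(a2 / a1)
```

    Imaging the harmonic ratio over the inspection area is, in essence, what the nonlinear imaging (NIM) maps show; the sweep excitation extends this idea across a band of excitation frequencies.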

  3. A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive DUAL-PCNN in NSST domain

    NASA Astrophysics Data System (ADS)

    Cheng, Boyang; Jin, Longxu; Li, Guoning

    2018-06-01

    The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum-modified Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition in each source image, serves as the adaptive linking strength that enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes was used in fusion experiments, and the fusion results were evaluated subjectively and objectively. The results show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.

  4. Gun muzzle flash detection using a CMOS single photon avalanche diode

    NASA Astrophysics Data System (ADS)

    Merhav, Tomer; Savuskan, Vitali; Nemirovsky, Yael

    2013-10-01

    Si-based sensors, in particular CMOS image sensors, have revolutionized low-cost imaging systems but to date have hardly been considered as candidates for gun muzzle flash detection, due to performance limitations and low SNR in the visible spectrum. In this study, a CMOS Single Photon Avalanche Diode (SPAD) module is used to record and sample muzzle flash events in the visible spectrum from representative weapons common on the modern battlefield. SPADs possess two crucial properties for muzzle flash imaging: very high photon detection sensitivity, coupled with a unique ability to convert the optical signal to a digital signal at the source pixel, thus practically eliminating readout noise. This enables sampling frequencies in the kilohertz range without SNR degradation, in contrast to regular CMOS image sensors. To date, the SPAD has not been utilized for flash detection in an uncontrolled environment such as gun muzzle flash detection. Gun propellant manufacturers use alkali salts to suppress secondary flashes ignited during the muzzle flash event. Common alkali salts are compounds based on potassium or sodium, with spectral emission lines around 769 nm and 589 nm, respectively. A narrow-band filter around the potassium emission doublet is used in this study to favor the muzzle flash signal over solar radiation. This research demonstrates the SPAD's ability to accurately sample and reconstruct the temporal behavior of the muzzle flash in the visible wavelengths under the specified imaging conditions. The reconstructed signal is clearly distinguishable from background clutter through exploitation of the flash's temporal characteristics.

  5. Visual appearance of wind turbine tower at long range measured using imaging system

    NASA Astrophysics Data System (ADS)

    Gustafsson, K. Ove S.; Möller, Sebastian

    2013-10-01

    Wind turbine towers affect the visual appearance of the landscape, for example in the touristic woodland of Dalecarlia, and the fear is that the visual impact will be too negative for the important tourist trade. The landscape analysis developed by the municipalities around Lake Siljan limited the expansion of wind power, owing to the strong visual impression of wind turbine towers. In order to facilitate the assessment of the visual impact of towers, a view from Tällberg over the ring of heights on the other side of Lake Siljan was photographed every ten minutes for a year (34,727 images, about 65% of the possible number during a year). Four towers are visible in the photos; three of them were used in the assessment of visual impression. This contribution presents a method to assess the visibility of wind turbine towers from photographs, describing the measuring situation (location and equipment) as well as the analytical method and the results of the analysis. The towers were visible in about 48% of the analyzed images taken during daytime with the equipment used. During the summer (winter) months the towers were apparent in 49% (46%) of the images. At least one red warning light was visible on the towers in about 66% of the night images. One conclusion of this work is that the method of assessing visibility within digital photographs and translating it into the equivalent of a normal eye can only provide an upper limit for the visibility of an object.

  6. Tower testing of a 64W shortwave infrared supercontinuum laser for use as a hyperspectral imaging illuminator

    NASA Astrophysics Data System (ADS)

    Meola, Joseph; Absi, Anthony; Islam, Mohammed N.; Peterson, Lauren M.; Ke, Kevin; Freeman, Michael J.; Ifaraguerri, Agustin I.

    2014-06-01

    Hyperspectral imaging systems are currently used for numerous activities related to spectral identification of materials. These passive imaging systems rely on naturally reflected or emitted radiation as the source of the signal. Thermal infrared systems measure radiation emitted from objects in the scene; as such, they can operate both day and night. However, visible-through-shortwave-infrared systems measure solar illumination reflected from objects, so their use is limited to daytime applications. Omni Sciences has produced high-power broadband shortwave infrared supercontinuum laser illuminators. A 64-watt breadboard system was recently packaged and tested at Wright-Patterson Air Force Base to gauge beam quality and to serve as a proof of concept for potential use as an illuminator for a hyperspectral receiver. The laser illuminator was placed in a tower and directed along a 1.4 km slant path to various target materials, with the reflected radiation measured by both a broadband camera and a hyperspectral imaging system to gauge performance.

  7. Real-time millimeter-wave imaging radiometer for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.

    1994-07-01

    ThermoTrex Corporation (TTC) has developed an imaging radiometer, the passive microwave camera (PMC), that uses an array of frequency-scanned antennas coupled to a multi-channel acousto-optic (Bragg cell) spectrum analyzer to form visible images of a scene through acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output of the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. One application of this system could be its incorporation into an enhanced vision system to provide pilots with a clear view of the runway during fog and other adverse weather conditions. The unique PMC system architecture will allow compact large-aperture implementations because of its flat antenna sensor. Other potential applications include air traffic control, all-weather area surveillance, fire detection, and security. This paper describes the architecture of the TTC PMC and shows examples of images acquired with the system.

  8. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

    The goal of digital image fusion is to combine the important visual content from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and thereby attain a fused image. Two main steps are involved. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among the state-of-the-art multi-resolution transforms (MRT), namely the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT), using the maximum-selection fusion rule.
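
    The maximum-selection fusion rule mentioned above can be sketched with a single-level Haar DWT. This is a deliberate simplification: the paper compares several multi-resolution transforms and weights sub-bands by HVS models, both of which this toy omits. The rule itself is shown faithfully: average the approximation band and keep the larger-magnitude detail coefficients from either source.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: (LL, LH, HL, HH). Even dims assumed."""
    a = (img[0::2] + img[1::2]) / 2.0    # row averages
    d = (img[0::2] - img[1::2]) / 2.0    # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse(img1, img2):
    """Average the approximation band; take larger-magnitude detail
    coefficients (the maximum-selection rule) from either source."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(b1) >= np.abs(b2), b1, b2)
               for b1, b2 in zip(c1[1:], c2[1:])]
    return haar_idwt2(LL, *details)
```

    In practice a wavelet library and several decomposition levels would be used; the HVS weighting would then scale each sub-band's coefficients before selection.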

  9. Near-infrared imaging of developmental defects in dental enamel.

    PubMed

    Hirasuna, Krista; Fried, Daniel; Darling, Cynthia L

    2008-01-01

    Polarization-sensitive optical coherence tomography (PS-OCT) and near-infrared (NIR) imaging are promising new technologies under development for monitoring early carious lesions. Fluorosis is a growing problem in the United States, and the more prevalent mild fluorosis can be visually mistaken for early enamel demineralization. Unfortunately, there is little quantitative information available regarding the differences in optical properties of sound enamel, enamel developmental defects, and caries. Thirty extracted human teeth with various degrees of suspected fluorosis were imaged using PS-OCT and NIR. An InGaAs camera and a NIR diode laser were used to measure the optical attenuation through transverse tooth sections (approximately 200 μm). A digital microradiography system was used to quantify the enamel defect severity by measurement of the relative mineral loss for comparison with optical scattering measurements. Developmental defects were clearly visible in the polarization-resolved OCT images, demonstrating that PS-OCT can be used to nondestructively measure the depth and possible severity of the defects. Enamel defects on whole teeth that could be imaged with high contrast in visible light were transparent in the NIR. This study suggests that PS-OCT and NIR methods may potentially be used as tools to assess the severity and extent of enamel defects.

  10. Saturnian atmospheric storm

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A vortex, or large atmospheric storm, is visible at 74° north latitude in this color composite of Voyager 2 Saturn images obtained Aug. 25 from a range of 1 million kilometers (620,000 miles). Three wide-angle-camera images taken through green, orange and blue filters were used. This particular storm system seems to be one of the few large-scale structures in Saturn's polar region, which otherwise is dominated by much smaller-scale features suggesting convection. The darker, bluish structure (upper right) oriented east to west strongly suggests the presence of a jet stream at these high latitudes. The appearance of a strong east-west flow in the polar region could have a major influence on models of Saturn's atmospheric circulation, if the existence of such a flow can be substantiated in time sequences of Voyager images. The smallest features visible in this photograph are about 20 km (12 mi) across. The Voyager project is managed for NASA by the Jet Propulsion Laboratory, Pasadena, Calif.

  11. Chrominance watermark for mobile applications

    NASA Astrophysics Data System (ADS)

    Reed, Alastair; Rogers, Eliot; James, Dan

    2010-01-01

    Creating an imperceptible watermark that can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and high noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera stem from the small size of the cell phone and cost trade-offs made by the manufacturer. A low-resolution watermark is therefore required that can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image that is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images are presented showing images with very low watermark visibility that can be easily read by a typical cell phone camera.
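
    The core idea can be sketched as follows, assuming a full-range BT.601 color conversion and a simple correlation detector (the commercial watermark scheme is far more sophisticated): embed a low-amplitude pattern in the blue-difference (Cb) channel, where the human visual system is much less sensitive than it is to luminance.

```python
import numpy as np

# ITU-R BT.601 RGB <-> YCbCr conversion (full-range, float images in [0,1]).
def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1], ycc[..., 2]
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

def embed(rgb, pattern, strength=0.01):
    """Add a +/-1 watermark pattern to the Cb channel at low amplitude.
    No clipping is done here, so keep strength small."""
    ycc = rgb_to_ycbcr(rgb)
    ycc[..., 1] += strength * pattern
    return ycbcr_to_rgb(ycc)

def detect(rgb, pattern):
    """Correlate the Cb channel against the known pattern."""
    cb = rgb_to_ycbcr(rgb)[..., 1]
    return float((cb * pattern).mean())
```

    A real reader must also survive camera noise, resampling, and perspective distortion, which is why the watermark is designed at low spatial resolution.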

  12. Simulated NASA Satellite Data Products for the NOAA Integrated Coral Reef Observation Network/Coral Reef Early Warning System

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.

    2007-01-01

    This RPC (Rapid Prototyping Capability) experiment will demonstrate the use of VIIRS (Visible/Infrared Imager/Radiometer Suite) and LDCM (Landsat Data Continuity Mission) sensor data as significant input to the NOAA (National Oceanic and Atmospheric Administration) ICON/ CREWS (Integrated Coral Reef Observation System/Coral Reef Early Warning System). The project affects the Coastal Management Program Element of the Applied Sciences Program.

  13. Evaluation of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Moderate Resolution Imaging Spectrometer (MODIS) measures of live fuel moisture and fuel condition in a shrubland ecosystem in southern California

    Treesearch

    D. A. Roberts; P.E. Dennison; S. Peterson; S. Sweeney; J. Rechel

    2006-01-01

    Dynamic changes in live fuel moisture (LFM) and fuel condition modify fire danger in shrublands. We investigated the empirical relationship between field-measured LFM and remotely sensed greenness and moisture measures from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Moderate Resolution Imaging Spectrometer (MODIS). Key goals were to assess the...

  14. Study of spin-scan imaging for outer planets missions: Executive summary

    NASA Technical Reports Server (NTRS)

    Russell, E. E.; Chandos, R. A.; Kodak, J. C.; Pellicori, S. F.; Tomasko, M. G.

    1974-01-01

    The development and characteristics of spin-scan imagers for interplanetary exploration are discussed. The spin-scan imaging photopolarimeter instruments of Pioneer 10 and 11 are described. In addition to the imaging function, the instruments are also used in a faint-light mode to take sky maps in both radiance and polarization. The performance of a visible-infrared spin-scan radiometer (VISSR), which operates in both visible and infrared wavelengths, is reported.

  15. Making Heat Visible

    PubMed Central

    Goodhew, Julie; Pahl, Sabine; Auburn, Tim; Goodhew, Steve

    2015-01-01

    Householders play a role in energy conservation through the decisions they make about purchases and installations such as insulation, and through their habitual behavior. The present U.K. study investigated the effect of thermal imaging technology on energy conservation, by measuring the behavioral effect after householders viewed images of heat escaping from or cold air entering their homes. In Study 1 (n = 43), householders who received a thermal image reduced their energy use at a 1-year follow-up, whereas householders who received a carbon footprint audit and a non-intervention control demonstrated no change. In Study 2 (n = 87), householders were nearly 5 times more likely to install draught proofing measures after seeing a thermal image. The effect was especially pronounced for actions that addressed an issue visible in the images. Findings indicate that using thermal imaging to make heat loss visible can promote energy conservation. PMID:26635418

  16. A radiographic and tomographic imaging system integrated into a medical linear accelerator for localization of bone and soft-tissue targets.

    PubMed

    Jaffray, D A; Drake, D G; Moreau, M; Martinez, A A; Wong, J W

    1999-10-01

    Dose escalation in conformal radiation therapy requires accurate field placement. Electronic portal imaging devices are used to verify field placement but are limited by the low subject contrast of bony anatomy at megavoltage (MV) energies, the large imaging dose, and the small size of the radiation fields. In this article, we describe the in-house modification of a medical linear accelerator to provide radiographic and tomographic localization of bone and soft-tissue targets in the reference frame of the accelerator. This system separates the verification of beam delivery (machine settings, field shaping) from patient and target localization. A kilovoltage (kV) x-ray source is mounted on the drum assembly of an Elekta SL-20 medical linear accelerator, maintaining the same isocenter as the treatment beam with the central axis at 90 degrees to the treatment beam axis. The x-ray tube is powered by a high-frequency generator and can be retracted to the drum-face. Two CCD-based fluoroscopic imaging systems are mounted on the accelerator to collect MV and kV radiographic images. The system is also capable of cone-beam tomographic imaging at both MV and kV energies. The gain stages of the two imaging systems have been modeled to assess imaging performance. The contrast-resolution of the kV and MV systems was measured using a contrast-detail (C-D) phantom. The dosimetric advantage of using the kV imaging system over the MV system for the detection of bone-like objects is quantified for a specific imaging geometry using a C-D phantom. Accurate guidance of the treatment beam requires registration of the imaging and treatment coordinate systems. The mechanical characteristics of the treatment and imaging gantries are examined to determine a localizing precision assuming an unambiguous object. MV and kV radiographs of patients receiving radiation therapy are acquired to demonstrate the radiographic performance of the system. 
The tomographic performance is demonstrated on phantoms using both the MV and the kV imaging system, and the visibility of soft-tissue targets is assessed. Characterization of the gains in the two systems demonstrates that the MV system is x-ray quantum noise-limited at very low spatial frequencies; this is not the case for the kV system. The estimates of gain used in the model are validated by measurements of the total gain in each system. Contrast-detail measurements demonstrate that the MV system is capable of detecting subject contrasts of less than 0.1% (at 6 and 18 MV). A comparison of the kV and MV contrast-detail performance indicates that equivalent bony object detection can be achieved with the kV system at significantly lower doses (factors of 40 and 90 lower than for 6 and 18 MV, respectively). The tomographic performance of the system is promising; soft-tissue visibility is demonstrated at relatively low imaging doses (3 cGy) using four laboratory rats. We have integrated a kV radiographic and tomographic imaging system with a medical linear accelerator to allow localization of bone and soft-tissue structures in the reference frame of the accelerator. Modeling and experiments have demonstrated the feasibility of acquiring high-quality radiographic and tomographic images at acceptable imaging doses. Full integration of the kV and MV imaging systems with the treatment machine will allow on-line radiographic and tomographic guidance of field placement.

  17. Appearance of the canine meninges in subtraction magnetic resonance images.

    PubMed

    Lamb, Christopher R; Lam, Richard; Keenihan, Erin K; Frean, Stephen

    2014-01-01

    The canine meninges are not visible as discrete structures in noncontrast magnetic resonance (MR) images, and are incompletely visualized in T1-weighted, postgadolinium images, reportedly appearing as short, thin curvilinear segments with minimal enhancement. Subtraction imaging facilitates detection of tissue enhancement, and hence may increase the conspicuity of the meninges. The aim of the present study was to describe qualitatively the appearance of the canine meninges in subtraction MR images obtained using a dynamic technique. Images were reviewed from 10 consecutive dogs that had dynamic pre- and postgadolinium T1-weighted imaging of the brain that was interpreted as normal, and that had normal cerebrospinal fluid. Image-anatomic correlation was facilitated by dissection and histologic examination of two canine cadavers. Meningeal enhancement was relatively inconspicuous in postgadolinium T1-weighted images, but was clearly visible in subtraction images of all dogs. Enhancement was visible as faint, small rounded foci compatible with vessels seen end-on within the sulci, a series of larger rounded foci compatible with vessels of variable caliber on the dorsal aspect of the cerebral cortex, and a continuous thin zone of moderate enhancement around the brain. Superimposition of color-encoded subtraction images on pregadolinium T1- and T2-weighted images facilitated localization of the origin of enhancement, which appeared to be predominantly dural, with relatively few leptomeningeal structures visible. Dynamic subtraction MR imaging should be considered for inclusion in clinical brain MR protocols because its use may increase sensitivity for lesions affecting the meninges. © 2014 American College of Veterinary Radiology.
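
    Why subtraction raises conspicuity can be shown with a toy numerical example (synthetic images with illustrative intensity and noise levels, not MR data): subtracting the pregadolinium image cancels the anatomy shared by both acquisitions, leaving the faint enhancement against a near-zero background.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy anatomy: strong, spatially varying background signal shared by the
# pre- and postgadolinium images, plus independent acquisition noise.
anatomy = rng.uniform(80, 160, (64, 64))
enhancement = np.zeros((64, 64))
enhancement[32, 10:54] = 15.0    # faint curvilinear enhancing structure

pre = anatomy + rng.normal(0, 2, (64, 64))
post = anatomy + enhancement + rng.normal(0, 2, (64, 64))

# In the postcontrast image alone the structure is buried in anatomy;
# subtraction cancels the shared anatomy and leaves the enhancement.
sub = post - pre

def contrast_to_background(img, row=32, cols=slice(10, 54)):
    """Mean ROI signal above background, in background-std units."""
    roi = img[row, cols].mean()
    bg = np.delete(img, row, axis=0)
    return float((roi - bg.mean()) / bg.std())
```

    The cost, visible in the model, is that subtraction adds the noise of both acquisitions, which is why good registration between the pre- and postcontrast series matters.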

  18. Instantaneous field of view and spatial sampling of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)

    NASA Technical Reports Server (NTRS)

    Chrien, Thomas G.; Green, Robert O.

    1993-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures the upwelling radiance in 224 spectral bands. These data are acquired as images of approximately 11 km by up to 100 km in extent at nominally 20 by 20 meter spatial resolution. In this paper we describe the underlying spatial sampling and spatial response characteristics of AVIRIS.

  19. Impact of patient weight on tumor visibility based on human-shaped phantom simulation study in PET imaging system

    NASA Astrophysics Data System (ADS)

    Musarudin, M.; Saripan, M. I.; Mashohor, S.; Saad, W. H. M.; Nordin, A. J.; Hashim, S.

    2015-10-01

    Energy window techniques are implemented in all positron emission tomography (PET) imaging protocols with the aim of removing unwanted low-energy photons. Current practice in our institution, however, uses a default energy threshold level regardless of the weight of the patient. Phantom size, which represents the size of the patient's body, is the factor that determines the level of scatter fraction during PET imaging. Thus, the motivation of this study was to determine the optimum energy threshold level for different sizes of human-shaped phantom, representing underweight, normal, overweight and obese patients. In this study, the scanner was modeled using the Monte Carlo code MCNP5. Five elliptical-cylinder-shaped human-sized phantoms with diameters ranging from 15 to 30 cm were modeled. The tumor was modeled by a cylindrical line source filled with 1.02 MeV positron emitters at the center of the phantom. Various energy window widths, in the range of 10-50%, were applied to the data. In conclusion, the phantom mass volume did influence the scatter fraction within the volume. A bigger phantom caused more scattering events and thus led to lost coincidence counts. We evaluated the impact of phantom sizes on the sensitivity and visibility of the simulated models. Implementation of a wider energy window improved the sensitivity of the system and recovered the lost coincidence photons. Visibility of the tumor improved when an appropriate energy window was implemented for the different sizes of phantom.
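
    The trade-off the study explores can be sketched with a toy event model (all energies, proportions, and the resolution figure below are illustrative, not the MCNP5 results): raising the lower energy threshold rejects down-scattered photons and so reduces the scatter fraction, at the cost of discarding counts and hence sensitivity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Toy event list: unscattered 511 keV photons blurred by detector energy
# resolution, plus Compton-scattered photons that lost energy. The 35%
# scatter proportion stands in for a larger (more scattering) phantom.
scattered = rng.random(n) < 0.35
energy = np.where(
    scattered,
    rng.uniform(250, 450, n),                 # down-scattered energies, keV
    rng.normal(511, 511 * 0.11 / 2.355, n),   # ~11% FWHM energy resolution
)

def scatter_fraction(lower_kev):
    """Fraction of accepted events that are scattered, for a given
    lower level discriminator (LLD)."""
    accepted = energy >= lower_kev
    return float(scattered[accepted].mean())

def sensitivity(lower_kev):
    """Fraction of all events kept by the window."""
    return float((energy >= lower_kev).mean())
```

    Sweeping `lower_kev` reproduces the qualitative finding above: a wider window (lower threshold) recovers counts but admits more scatter, so the optimum shifts with phantom size.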

  20. Spots on Saturn

    NASA Image and Video Library

    2004-04-02

    As Cassini closes in on Saturn, its view is growing sharper with time and now reveals new atmospheric features in the planet's southern hemisphere. Atmospheric features, such as two small, faint dark spots visible in the planet's southern hemisphere, will become clearer in the coming months. The spots are located at 38 degrees south latitude. The spacecraft's narrow angle camera took several exposures on March 8, 2004, which have been combined to create this natural color image. The image contrast and colors have been slightly enhanced to aid visibility. Moons visible in the lower half of this image are: Mimas (398 kilometers, or 247 miles across) at left, just below the rings; Dione (1,118 kilometers, or 695 miles across) at left, below Mimas; and Enceladus (499 kilometers, or 310 miles across) at right. The moons had their brightness enhanced to aid visibility. The spacecraft was then 56.4 million kilometers (35 million miles) from Saturn, or slightly more than one-third of the distance from Earth to the Sun. The image scale is approximately 338 kilometers (210 miles) per pixel. The planet is 23 percent larger in this image than it appeared in the preceding color image, taken four weeks earlier. http://photojournal.jpl.nasa.gov/catalog/PIA05385
