Sample records for infrared scene projector

  1. Bulk silicon as photonic dynamic infrared scene projector

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Bogatyrenko, V. V.; Malyutenko, O. Yu.

    2013-04-01

    A Si-based fast (frame rate >1 kHz), large-area (scene area 100 cm2), broadband (3-12 μm), dynamic contactless infrared (IR) scene projector is demonstrated. An IR movie appears on the scene through conversion of a visible scenario projected onto a scene held at elevated temperature. Light down-conversion results from free-carrier generation in the bulk Si scene, followed by modulation of its thermal emission output in the spectral band of free-carrier absorption. The experimental setup, an IR movie, figures of merit, and the advantages of the process over other projector technologies are discussed.

  2. Polarization measurements made on LFRA and OASIS emitter arrays

    NASA Astrophysics Data System (ADS)

    Geske, Jon; Sparkman, Kevin; Oleson, Jim; Laveigne, Joe; Sieglinger, Breck; Marlow, Steve; Lowry, Heard; Burns, James

    2008-04-01

    Polarization is increasingly being considered as a method of discrimination in passive sensing applications. In this paper the degree of polarization of the thermal emission from the emitter arrays of two new Santa Barbara Infrared (SBIR) micro-bolometer resistor array scene projectors was characterized at ambient temperature and at 77 K. The emitter arrays characterized were from the Large Format Resistive Array (LFRA) and the Optimized Arrays for Space-Background Infrared Simulation (OASIS) scene projectors. This paper reports the results of this testing.

  3. Development of an ultra-high temperature infrared scene projector at Santa Barbara Infrared Inc.

    NASA Astrophysics Data System (ADS)

    Franks, Greg; Laveigne, Joe; Danielson, Tom; McHugh, Steve; Lannon, John; Goodwin, Scott

    2015-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.

  4. High-temperature MIRAGE XL (LFRA) IRSP system development

    NASA Astrophysics Data System (ADS)

    McHugh, Steve; Franks, Greg; LaVeigne, Joe

    2017-05-01

    The development of very-large-format infrared detector arrays has challenged the IR scene projector community to develop larger-format infrared emitter arrays. Many scene projector applications also require much higher simulated temperatures than can be generated with current technology. This paper will present an overview of resistive emitter-based (broadband) IR scene projector system development, as well as describe recent progress in emitter materials and pixel designs applicable to legacy MIRAGE XL systems to achieve apparent temperatures >1000 K in the MWIR. These new high-temperature MIRAGE XL (LFRA) Digital Emitter Engines (DEEs) will be "plug and play" equivalents of legacy MIRAGE XL DEEs; the rest of the system is reusable. Under the High Temperature Dynamic Resistive Array (HDRA) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very-large-format (>2k x 2k) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1500 K. These new emitter materials can be used with legacy RIICs to produce pixels that achieve 7X the radiance of legacy systems at low cost and low risk. A 'scalable' Read-In Integrated Circuit (RIIC) is also being developed under the same HDRA program to drive the high-temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. These quilted arrays can be fabricated in any N x M size in 512 steps.
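    The apparent-temperature and radiance figures quoted in these abstracts are linked by Planck's law integrated over the sensor waveband. A minimal numerical sketch (standard physical constants; the band edges and temperatures are illustrative choices, not SBIR's measured data):

```python
import math

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance, W / (m^2 . sr . m of wavelength)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(h * c / (wavelength_m * k * T))

def inband_radiance(T, lo_um=3.0, hi_um=5.0, n=2000):
    """Trapezoidal integration of Planck's law over a band, W / (m^2 . sr)."""
    step = (hi_um - lo_um) / n * 1e-6
    total = 0.0
    for i in range(n + 1):
        wl = (lo_um + (hi_um - lo_um) * i / n) * 1e-6
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * planck_radiance(wl, T)
    return total * step

# In-band MWIR radiance gain from raising apparent temperature 1000 K -> 1500 K
ratio = inband_radiance(1500.0) / inband_radiance(1000.0)
```

The ratio comes out near 4x for the 3-5 μm band, showing why higher apparent temperatures translate into large radiance gains for MWIR testing.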

  5. Description of the dynamic infrared background/target simulator (DIBS)

    NASA Astrophysics Data System (ADS)

    Lujan, Ignacio

    1988-01-01

    The purpose of the Dynamic Infrared Background/Target Simulator (DIBS) is to project dynamic infrared scenes to a test sensor; e.g., a missile seeker that is sensitive to infrared energy. The projected scene will include target(s) and background. This system was designed to present flicker-free infrared scenes in the 8 micron to 12 micron wavelength region. The major subassemblies of the DIBS are the laser write system (LWS), vanadium dioxide modulator assembly, scene data buffer (SDB), and the optical image translator (OIT). This paper describes the overall concept and design of the infrared scene projector followed by some details of the LWS and VO2 modulator. Also presented are brief descriptions of the SDB and OIT.

  6. Achieving ultra-high temperatures with a resistive emitter array

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott

    2016-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.

  7. A read-in IC for infrared scene projectors with voltage drop compensation for improved uniformity of emitter current

    NASA Astrophysics Data System (ADS)

    Cho, Min Ji; Shin, Uisub; Lee, Hee Chul

    2017-05-01

    This paper proposes a read-in integrated circuit (RIIC) for infrared scene projectors, which compensates for the voltage drops in ground lines in order to improve the uniformity of the emitter current. A current output digital-to-analog converter is utilized to convert digital scene data into scene data currents. The unit cells in the array receive the scene data current and convert it into data voltage, which simultaneously self-adjusts to account for the voltage drop in the ground line in order to generate the desired emitter current independently of variations in the ground voltage. A 32 × 32 RIIC unit cell array was designed and fabricated using a 0.18-μm CMOS process. The experimental results demonstrate that the proposed RIIC can output a maximum emitter current of 150 μA and compensate for a voltage drop in the ground line of up to 500 mV under a 3.3-V supply. The uniformity of the emitter current is significantly improved compared to that of a conventional RIIC.
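    The effect being compensated can be illustrated with a toy model (the resistor and voltage values below are hypothetical, not from the paper): cells far from the ground pad see a raised local ground, so an uncompensated data voltage produces a smaller emitter current.

```python
def emitter_currents(v_data, r_emitter, gnd_drops, compensated):
    """Per-cell emitter current on a shared ground line with IR drops."""
    currents = []
    for drop in gnd_drops:
        # Compensated: the cell references its data voltage to the local
        # ground, so the drop cancels. Uncompensated: the drop subtracts
        # from the voltage across the emitter.
        v_eff = v_data if compensated else v_data - drop
        currents.append(v_eff / r_emitter)
    return currents

drops = [0.0, 0.1, 0.25, 0.5]            # up to the 500 mV quoted above
raw = emitter_currents(1.5, 10e3, drops, compensated=False)
fixed = emitter_currents(1.5, 10e3, drops, compensated=True)

spread_raw = max(raw) - min(raw)         # 50 uA spread across the array
spread_fixed = max(fixed) - min(fixed)   # zero: uniform emitter current
```

With these assumed values (1.5 V data voltage, 10 kOhm emitter) the uncompensated spread is 50 μA against a 150 μA full-scale current, which is the kind of non-uniformity the self-adjusting unit cell is designed to remove.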

  8. Development of infrared scene projectors for testing fire-fighter cameras

    NASA Astrophysics Data System (ADS)

    Neira, Jorge E.; Rice, Joseph P.; Amon, Francine K.

    2008-04-01

    We have developed two types of infrared scene projectors for hardware-in-the-loop testing of thermal imaging cameras such as those used by fire-fighters. In one, direct projection, images are projected directly into the camera. In the other, indirect projection, images are projected onto a diffuse screen, which is then viewed by the camera. Both projectors use a digital micromirror array as the spatial light modulator, in the form of a Micromirror Array Projection System (MAPS) engine having a resolution of 800 x 600, aluminum-coated mirrors on a 17 micrometer pitch, and a ZnSe protective window. Fire-fighter cameras are often based upon uncooled microbolometer arrays and typically have resolutions of 320 x 240 or lower. For direct projection, we use an argon-arc source, which provides spectral radiance equivalent to a 10,000 K blackbody over the 7 micrometer to 14 micrometer wavelength range, to illuminate the micromirror array. For indirect projection, an expanded 4 W CO2 laser beam at a wavelength of 10.6 micrometers illuminates the micromirror array, and the scene formed by the first-order diffracted light from the array is projected onto a diffuse aluminum screen. In both projectors, a well-calibrated reference camera is used to provide non-uniformity correction and brightness calibration of the projected scenes, and the fire-fighter cameras alternately view the same scenes. In this paper, we compare the two methods for this application and report on our quantitative results. Indirect projection has the advantage of more easily filling the wide field of view of the fire-fighter cameras, which is typically about 50 degrees. Direct projection utilizes the available light more efficiently, which will become important in emerging multispectral and hyperspectral applications.

  9. Hybrid-mode read-in integrated circuit for infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Cho, Min Ji; Shin, Uisub; Lee, Hee Chul

    2017-05-01

    The infrared scene projector (IRSP) is a tool for evaluating infrared sensors by producing infrared images. Because sensor testing with IRSPs is safer than field testing, the usefulness of IRSPs is widely recognized at present. The important performance characteristics of IRSPs are the thermal resolution and the thermal dynamic range. However, due to an existing trade-off between these requirements, it is often difficult to find a workable balance between them. The conventional read-in integrated circuit (RIIC) can be classified into two types: voltage-mode and current-mode types. An IR emitter driven by a voltage-mode RIIC offers a fine thermal resolution. On the other hand, an emitter driven by the current-mode RIIC has the advantage of a wide thermal dynamic range. In order to provide various scenes, i.e., from high-resolution scenes to high-temperature scenes, both of the aforementioned advantages are required. In this paper, a hybrid-mode RIIC which is selectively operated in two modes is proposed. The mode-selective characteristic of the proposed RIIC allows users to generate high-fidelity scenes regardless of the scene content. A prototype of the hybrid-mode RIIC was fabricated using a 0.18-μm 1-poly 6-metal CMOS process. The thermal range and the thermal resolution of the IR emitter driven by the proposed circuit were calculated based on measured data. The estimated thermal dynamic range of the current mode was from 261 K to 790 K, and the estimated thermal resolution of the voltage mode at 300 K was 23 mK with a 12-bit gray-scale resolution.
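    The trade-off can be seen with back-of-envelope arithmetic (this assumes a step size linear in temperature, which real emitter response is not, so it only illustrates the order of magnitude):

```python
# Step size implied by spanning the quoted current-mode thermal range
# with a 12-bit gray-scale code, versus the quoted voltage-mode resolution.
codes = 2 ** 12                               # 12-bit gray-scale resolution
current_mode_step = (790.0 - 261.0) / codes   # K per code over the full range
voltage_mode_step = 0.023                     # 23 mK quoted at 300 K
ratio = current_mode_step / voltage_mode_step
```

Even under this crude linear assumption the current mode's per-code step is several times coarser than the voltage mode's 23 mK, which is the resolution-versus-range tension the hybrid-mode RIIC addresses.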

  10. LWIR NUC using an uncooled microbolometer camera

    NASA Astrophysics Data System (ADS)

    Laveigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian; McHugh, Steve

    2010-04-01

    Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector. Ideally, NUC will be performed in the same band in which the scene projector will be used. Cooled, large-format MWIR cameras are readily available and have been successfully used to perform NUC; however, cooled large-format LWIR cameras are not as common and are prohibitively expensive. Large-format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Santa Barbara Infrared, Inc. reports progress on a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution are the main difficulties. A discussion of the processes developed to mitigate these issues follows.

  11. Development of a high-definition IR LED scene projector

    NASA Astrophysics Data System (ADS)

    Norton, Dennis T.; LaVeigne, Joe; Franks, Greg; McHugh, Steve; Vengel, Tony; Oleson, Jim; MacDougal, Michael; Westerfeld, David

    2016-05-01

    Next-generation Infrared Focal Plane Arrays (IRFPAs) are demonstrating ever-increasing frame rates, dynamic range, and format size, while moving to smaller-pitch arrays. These improvements in IRFPA performance and array format have challenged the IRFPA test community to accurately and reliably test them in a hardware-in-the-loop environment utilizing Infrared Scene Projector (IRSP) systems. The rapidly evolving IR seeker and sensor technology has, in some cases, surpassed the capabilities of existing IRSP technology. To meet the demands of future IRFPA testing, Santa Barbara Infrared Inc. is developing an Infrared Light Emitting Diode IRSP system. Design goals of the system include a peak radiance >2.0 W/cm2/sr within the 3.0-5.0 μm waveband, maximum frame rates >240 Hz, and >4 million pixels within a form factor supporting pixel pitches <=32 μm. This paper provides an overview of our current phase of development, system design considerations, and future development work.

  12. Programmable personality interface for the dynamic infrared scene generator (IRSG2)

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Mobley, Scott B.; Mayhall, Anthony J.; Braselton, William J.

    1998-07-01

    As scene generator platforms begin to rely specifically on commercial off-the-shelf (COTS) hardware and software components, high-speed programmable personality interfaces (PPIs) are required for interfacing to infrared (IR) flight computers/processors and complex IR projectors in hardware-in-the-loop (HWIL) simulation facilities. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective PPIs to interface to COTS scene generators. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a PPI to reside between the AMCOM MRDEC IR Scene Generator (IRSG) and either a missile flight computer or the dynamic Laser Diode Array Projector (LDAP). AMCOM MRDEC has developed several PPIs for the first- and second-generation IRSGs (IRSG1 and IRSG2), which are based on Silicon Graphics Incorporated (SGI) Onyx and Onyx2 computers with Reality Engine 2 (RE2) and Infinite Reality (IR/IR2) graphics engines. This paper provides an overview of PPIs designed, integrated, tested, and verified at AMCOM MRDEC, specifically the IRSG2's PPI.

  13. Unique digital imagery interface between a silicon graphics computer and the kinetic kill vehicle hardware-in-the-loop simulator (KHILS) wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Erickson, Ricky A.; Moren, Stephen E.; Skalka, Marion S.

    1998-07-01

    Providing a flexible and reliable source of IR target imagery is absolutely essential for operation of an IR scene projector in a hardware-in-the-loop simulation environment. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB provides the capability, and requisite interfaces, to supply target IR imagery to its Wideband IR Scene Projector (WISP) from three separate sources at frame rates ranging from 30 to 120 Hz. Video can be input from a VCR source at the conventional 30 Hz frame rate. Pre-canned digital imagery and test patterns can be downloaded into stored memory from the host processor and played back as individual still frames or movie sequences at up to a 120 Hz frame rate. Dynamic real-time imagery can be provided to the KHILS WISP projector system, at a 120 Hz frame rate, from a Silicon Graphics Onyx computer system normally used for generation of digital IR imagery, through a custom CSA-built interface available for either the SGI/DVP or SGI/DD02 interface port. The primary focus of this paper is to describe our technical approach and experience in the development of this unique SGI computer and WISP projector interface.

  14. A dual-waveband dynamic IR scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang

    2016-10-01

    An infrared scene simulation system can simulate a variety of objects and backgrounds for dynamic testing and evaluation of EO detection systems in hardware-in-the-loop tests. The basic structure of a dual-waveband dynamic IR scene projector is introduced in this paper. The system's core device is an IR Digital Micro-mirror Device (DMD), and the radiant source is a compact high-temperature IR planar blackbody. An IR collimation optical system whose transmission range covers both 3-5 μm and 8-12 μm was designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and test results of the system are given; the system performed satisfactorily in IR imaging simulation testing.

  15. Visible-Infrared Hyperspectral Image Projector

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew

    2013-01-01

    The VisIR HIP generates spatially and spectrally complex scenes. The generated scenes simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination spanning the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines can produce two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP) digital mirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.

  16. Review of infrared scene projector technology-1993

    NASA Astrophysics Data System (ADS)

    Driggers, Ronald G.; Barnard, Kenneth J.; Burroughs, E. E.; Deep, Raymond G.; Williams, Owen M.

    1994-07-01

    The importance of testing IR imagers and missile seekers with realistic IR scenes warrants a review of the current technologies used in dynamic infrared scene projection. These technologies include resistive arrays, deformable mirror arrays, mirror membrane devices, liquid crystal light valves, laser writers, laser diode arrays, and CRTs. Other methods include frustrated total internal reflection, thermoelectric devices, galvanic cells, Bly cells, and vanadium dioxide. A description of each technology is presented along with a discussion of their relative benefits and disadvantages. The current state of each methodology is also summarized. Finally, the methods are compared and contrasted in terms of their performance parameters.

  17. MIRAGE: system overview and status

    NASA Astrophysics Data System (ADS)

    Robinson, Richard M.; Oleson, Jim; Rubin, Lane; McHugh, Stephen W.

    2000-07-01

    Santa Barbara Infrared's (SBIR) MIRAGE (Multispectral InfraRed Animation Generation Equipment) is a state-of-the-art dynamic infrared scene projector system. Imagery from the first MIRAGE system was presented to the scene simulation community during last year's SPIE AeroSense 99 Symposium. Since that time, SBIR has delivered five MIRAGE systems. This paper will provide an overview of the MIRAGE system and discuss the current status of the MIRAGE. Included is an update of system hardware, and the current configuration. Proposed upgrades to this configuration and options will be discussed. Updates on the latest installations, applications and measured data will also be presented.

  18. High accuracy LADAR scene projector calibration sensor development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.; Bowden, Mark H.

    2008-04-01

    A sensor system for the characterization of infrared laser radar (LADAR) scene projectors has been developed. Available sensor systems do not provide sufficient range resolution to evaluate the high-precision LADAR projector systems developed by the U.S. Army Research, Development and Engineering Command (RDECOM) Aviation and Missile Research, Development and Engineering Center (AMRDEC). With timing precision to a fraction of a nanosecond, the system can confirm the accuracy of simulated return pulses from a nominal range of up to 6.5 km to a resolution of 4 cm. Increased range can be achieved through firmware reconfiguration. Two independent amplitude triggers measure both rise and fall time, providing a judgment of pulse shape and allowing estimation of the contained energy. Each return channel can measure up to 32 returns per trigger, characterizing each return pulse independently. Current efforts include extending the capability to 8 channels. This paper outlines the development, testing, capabilities, and limitations of this new sensor system.

  19. Optical system design of dynamic infrared scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Lu, Jing; Fu, Yuegang; Liu, Zhiying; Li, Yandong

    2014-09-01

    Infrared scene simulators are now widely used to simulate infrared scenes in the laboratory, which can greatly reduce the research cost of electro-optical systems and offer an economical experimental environment. With the advantages of large dynamic range and high spatial resolution, dynamic infrared projection technology based on the digital micro-mirror device (DMD), the key part of the infrared scene simulator, has been rapidly developed and widely applied in recent years. In this paper, the principle of the digital micro-mirror device is briefly introduced, and the characteristics of the DLP (Digital Light Processing) system based on the DMD are analyzed. A projection system operating at 8-12 μm with a 1024×768-pixel DMD was designed in ZEMAX. The MTF curve is close to the diffraction-limited curve, and the radius of the spot diagram is smaller than that of the Airy disk. The result indicates that the system meets the design requirements.

  20. Demonstration of the Wide-Field Imaging Interferometer Testbed Using a Calibrated Hyperspectral Image Projector

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew R.; Leisawitz, David; Maher, Steve; Rinehart, Stephen

    2012-01-01

    The Wide-field Imaging Interferometer testbed (WIIT) at NASA's Goddard Space Flight Center uses a dual-Michelson interferometric technique. The WIIT combines stellar interferometry with Fourier-transform interferometry to produce high-resolution spatial-spectral data over a large field-of-view. This combined technique could be employed on future NASA missions such as the Space Infrared Interferometric Telescope (SPIRIT) and the Sub-millimeter Probe of the Evolution of Cosmic Structure (SPECS). While both SPIRIT and SPECS would operate at far-infrared wavelengths, the WIIT demonstrates the dual-interferometry technique at visible wavelengths. The WIIT will produce hyperspectral image data, so a true hyperspectral object is necessary. A calibrated hyperspectral image projector (CHIP) has been constructed to provide such an object. The CHIP uses Digital Light Processing (DLP) technology to produce customized, spectrally-diverse scenes. CHIP scenes will have approximately 1.6-micron spatial resolution and the capability of producing arbitrary spectra in the band between 380 nm and 1.6 microns, with approximately 5-nm spectral resolution. Each pixel in the scene can take on a unique spectrum. Spectral calibration is achieved with an onboard fiber-coupled spectrometer. In this paper we describe the operation of the CHIP. Results from the WIIT observations of CHIP scenes will also be presented.

  1. Application of LC and LCoS in Multispectral Polarized Scene Projector (MPSP)

    NASA Astrophysics Data System (ADS)

    Yu, Haiping; Guo, Lei; Wang, Shenggang; Lippert, Jack; Li, Le

    2017-02-01

    A Multispectral Polarized Scene Projector (MPSP) has been developed in the short-wave infrared (SWIR) regime for the test and evaluation (T&E) of spectro-polarimetric imaging sensors. This MPSP generates multispectral and hyperspectral video imagery (up to 200 Hz) at 512×512 spatial resolution, with active spatial, spectral, and polarization modulation with controlled bandwidth. It projects input SWIR radiant intensity scenes from stored memory with user-selectable wavelength and bandwidth, as well as polarization states (six different states) controllable at the pixel level. The spectral content is implemented by a tunable filter with variable bandpass built from liquid crystal (LC) material, together with one passive visible and one passive SWIR cholesteric liquid crystal (CLC) notch filter, and one switchable CLC notch filter. The core of the MPSP hardware is the liquid-crystal-on-silicon (LCoS) spatial light modulators (SLMs) used for intensity control and polarization modulation.

  2. Cold Background, Flight Motion Simulator Mounted, Infrared Scene Projectors Developed for use in AMRDEC Hardware-in-the-Loop

    DTIC Science & Technology

    2004-01-01

    cooled below –40ºC with the ultra low temperature chiller operating at –50ºC. At these low temperatures, elastomer compounds (i.e. nylon hose and o...projector hardware. Consideration of steel braided Teflon hose or even a thin wall flexible steel hose will be made for future operation of the YUGO...Cajon VCR vacuum port on the bottom of the array using a metal gasket. This change eliminated one elastomer seal that was most likely to fail at low

  3. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    NASA Astrophysics Data System (ADS)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  4. Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results

    NASA Astrophysics Data System (ADS)

    Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric

    2014-06-01


  5. Advances in iterative non-uniformity correction techniques for infrared scene projection

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; LaVeigne, Joe; Prewarski, Marcus; Nehring, Brian

    2015-05-01

    Santa Barbara Infrared (SBIR) is continually developing improved methods for non-uniformity correction (NUC) of its Infrared Scene Projectors (IRSPs) as part of its comprehensive efforts to achieve the best possible projector performance. The most recent step forward, Advanced Iterative NUC (AI-NUC), improves upon previous NUC approaches in several ways. The key to NUC performance is achieving the most accurate possible input drive-to-radiance output mapping for each emitter pixel. This requires many highly accurate radiance measurements of emitter output, as well as sophisticated manipulation of the resulting data set. AI-NUC expands the available radiance data set to include all measurements made of emitter output at any point. In addition, it allows the user to efficiently manage that data for use in the construction of a new NUC table that is generated from an improved fit of the emitter response curve. Not only does this improve the overall NUC by offering more statistics for interpolation than previous approaches, it also simplifies the removal of erroneous data from the set so that it does not propagate into the correction tables. AI-NUC is implemented by SBIR's IRWindows4 automated test software as part of its advanced turnkey IRSP product (the Calibration Radiometry System, or CRS), which incorporates all necessary measurement, calibration and NUC table generation capabilities. By employing AI-NUC on the CRS, SBIR has demonstrated the best uniformity results on resistive emitter arrays to date.
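    The core of such a NUC process — fitting a per-pixel drive-to-radiance response and inverting it into a correction table — can be sketched as follows. This is a minimal Python/NumPy illustration; the function name, polynomial response model, and table layout are assumptions for illustration, not SBIR's actual AI-NUC implementation.

    ```python
    import numpy as np

    def build_nuc_table(drives, radiances, table_levels=256, deg=3):
        """Fit a per-pixel drive->radiance curve and invert it into a NUC table.

        drives:    (M,) drive values at which the emitters were measured
        radiances: (M, H, W) measured radiance per pixel at each drive level
        Returns a (table_levels, H, W) array giving, per pixel, the drive to
        command for each desired radiance level. (Illustrative sketch only;
        assumes monotonically increasing emitter response.)
        """
        M, H, W = radiances.shape
        flat = radiances.reshape(M, -1)            # (M, H*W)
        # One polynomial fit per pixel: radiance as a function of drive.
        coeffs = np.polyfit(drives, flat, deg)     # (deg+1, H*W)
        # Common target-radiance grid for all pixels.
        targets = np.linspace(flat.min(), flat.max(), table_levels)
        # Invert numerically: interpolate drive vs. radiance on a dense
        # sampling of the fitted curve for each pixel.
        dense = np.linspace(drives.min(), drives.max(), 512)
        table = np.empty((table_levels, H * W))
        for p in range(H * W):
            rad = np.polyval(coeffs[:, p], dense)
            table[:, p] = np.interp(targets, rad, dense)
        return table.reshape(table_levels, H, W)
    ```

    Commanding each pixel with the tabulated drive for a given level then yields a spatially uniform radiance, which is the point of the correction.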

  6. Design of two-DMD based zoom MW and LW dual-band IRSP using pixel fusion

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Xu, Xiping; Qiao, Yang

    2018-06-01

    In order to test the anti-jamming ability of mid-wave infrared (MWIR) and long-wave infrared (LWIR) dual-band imaging systems, a zoom mid-wave (MW) and long-wave (LW) dual-band infrared scene projector (IRSP) based on two digital micro-mirror devices (DMDs) was designed using a pixel-fusion projection method. Two illumination systems, each illuminating its DMD directly with a Köhler telecentric beam, were combined with the projection system in a spatial layout. The distances of the projection entrance pupil and the illumination exit pupil were analyzed separately. MWIR and LWIR virtual scenes were generated by the two DMDs and fused by a dichroic beam combiner (DBC), producing two radiation distributions in the projected image. The optical performance of each component was evaluated by ray-tracing simulations, and apparent temperature and image contrast were demonstrated by imaging experiments. The test and simulation results show that the aberrations of the optical system are well corrected and the quality of the projected image meets the test requirements.

  7. A hyperspectral image projector for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.

    2007-04-01

    We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.

  8. Characterization of quantum well laser diodes for application within the AMRDEC HWIL facilities

    NASA Astrophysics Data System (ADS)

    Saylor, Daniel A.; Bender, Matt; Cantey, Thomas M.; Beasley, D. B.; Buford, Jim A.

    2004-08-01

    The U.S. Army Research, Development, and Engineering Command's (RDECOM) Aviation and Missile Research, Development, and Engineering Center (AMRDEC) provides Hardware-in-the-Loop (HWIL) test support to numerous tactical and theatre missile programs. Critical to the successful execution of these tests are the state-of-the-art technologies employed in the visible and infrared scene projector systems. This paper describes the results of characterization tests performed on new mid-wave infrared (MWIR) quantum-well laser diodes recently provided to AMRDEC by the Naval Research Labs and Sarnoff Industries. These lasers provide a >10X improvement in MWIR output over the previous technology of lead-salt laser diodes. Performance data on output power, linearity, and solid-angle coverage are presented. A discussion of the laser packages is also provided.

  9. Method calibration of the model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

    The SBIR Model 13145 Infrared Target Projector (hereafter, the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD, and NPS. The infrared target projector includes two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; these precision differential targets are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals. Application software (IRWindows 2001) evaluates the performance of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to the area-blackbody calibration specification; error factors are corrected to calibrate the all-reflective collimator; radiance calibration of the infrared target projector is performed using an SR5000 spectral radiometer; and the systematic error is analyzed. Evaluating the parameters of an infrared imaging system requires an integrated evaluation method. Following GJB2340-1995, General specification for military thermal imaging sets, the tested parameters of the infrared imaging system are compared with results from the Optical Calibration Testing Laboratory, the goal being a true calibration of the Evaluation Unit's performance.

  10. Using the 16MM. Stop-Frame Projector to Teach Film Technique.

    ERIC Educational Resources Information Center

    Head, James

    1969-01-01

    English is concerned with language experience, and because much of today's "language" is experienced through electronic media--television, movies, radio--film courses fall within the English curriculum. A stop-frame projector is essential for classroom analysis of such film devices as framing, establishing shots, and scene composition. Framing is…

  11. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
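    The color-component packing that such a converter performs can be illustrated with a toy function. The specific channel-to-bit mapping below is an assumption for illustration only, not the documented Delta DVP routing.

    ```python
    def rgb_to_dvp_luminance(red8, green8):
        """Combine two 8-bit DVI color components into one 16-bit luminance
        word, mimicking how a DVI-to-DVP converter can carry wide intensity
        data over color channels. (Hypothetical mapping: green = high byte,
        red = low byte.)"""
        return ((green8 & 0xFF) << 8) | (red8 & 0xFF)
    ```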

  12. Computer-generated, calligraphic, full-spectrum color system for visual simulation landing approach maneuvers

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1975-01-01

    The calligraphic chromatic projector described was developed to improve the perceived realism of visual scene simulation ('out-the-window visuals'). The optical arrangement of the projector is illustrated and discussed. The device permits drawing 2000 vectors in as many as 500 colors, all above critical flicker frequencies, and use of high scene resolution and brightness at an acceptable level to the pilot, with the maximum system capabilities of 1000 lines and 1000 fL. The device for generating the colors is discussed, along with an experiment conducted to demonstrate potential improvements in performance and pilot opinion. Current research work and future research plans are noted.

  13. Research of infrared laser based pavement imaging and crack detection

    NASA Astrophysics Data System (ADS)

    Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang

    2013-08-01

    Road crack detection is seriously affected by many factors in actual applications, such as shadows, road signs, oil stains, and high-frequency noise. Because of these factors, current crack detection methods cannot distinguish cracks in complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is used to obtain pavement images, with a high-power laser line projector employed to suppress the effect of various shadows. Second, a crack-extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature extract the basic crack information, and circular projection based on a linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments was performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.

  14. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss the testing of image processing algorithms for mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when the sensors were subjected to a quantum cascade laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms that allow the adjustable-frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter its frame rate. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.
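    One simple way to detect such a beat note — a sketch of the general idea, not necessarily the paper's algorithm — is to look for a dominant low-frequency peak in the spectrum of the per-frame mean intensity. The function name and threshold below are illustrative assumptions.

    ```python
    import numpy as np

    def detect_beat_note(frame_means, fps, min_rel_power=10.0):
        """Flag a pulsed-laser aliasing artifact from per-frame mean intensities.

        A laser pulsed near the sensor frame rate aliases to a slow 'beat note'
        in the frame-to-frame mean signal. We look for a spectral peak whose
        power stands well above the median background power.
        Returns (detected, beat_frequency_hz)."""
        x = np.asarray(frame_means, dtype=float)
        x = x - x.mean()                        # remove DC component
        spec = np.abs(np.fft.rfft(x)) ** 2
        spec[0] = 0.0
        peak = spec.argmax()
        background = np.median(spec[1:]) + 1e-12
        if spec[peak] / background >= min_rel_power:
            return True, peak * fps / len(x)    # bin index -> Hz
        return False, 0.0
    ```

    On detection, the adjustable sensor would nudge its frame rate away from the pulse rate and re-check until the beat disappears.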

  15. Low-cost real-time infrared scene generation for image projection and signal injection

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; King, David E.; Bowden, Mark H.

    1998-07-01

    As cost becomes an increasingly important factor in the development and testing of infrared sensors and flight computers/processors, the need for accurate hardware-in-the-loop (HWIL) simulations is critical. In the past, expensive and complex dedicated scene generation hardware was needed to attain the fidelity necessary for accurate testing. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective replacements for dedicated scene generators. These new scene generators are mainly constructed from commercial-off-the-shelf (COTS) hardware and software components. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a dynamic IR scene generator (IRSG) built around COTS hardware and software. The IRSG is used to provide dynamic inputs to an IR scene projector for in-band seeker testing and for direct signal injection into the seeker or processor electronics. AMCOM MRDEC has developed a second-generation IRSG, IRSG2, using the latest Silicon Graphics Incorporated (SGI) Onyx2 with InfiniteReality graphics. As reported in previous papers, the SGI Onyx RealityEngine2 is the platform of the original IRSG, now referred to as IRSG1, which has been in daily operation for the past three years on several IR projection and signal-injection HWIL programs. With the second-generation IRSG, frame rates have increased from 120 Hz to 400 Hz and intensity resolution from 12 bits to 16 bits. The key features of the IRSGs are real-time missile frame rates and frame sizes; a dynamic missile-to-target(s) viewpoint updated each frame in real time by a six-degree-of-freedom (6DOF) system-under-test (SUT) simulation; multiple dynamic objects (e.g., targets, terrain/background, countermeasures, and atmospheric effects); latency compensation; point-to-extended-source anti-aliased targets; and sensor modeling effects. This paper provides a comparison between the IRSG1 and IRSG2 systems and focuses on the IRSG software, real-time features, and database development tools.

  16. Electrostatic artificial eyelid actuator as an analog micromirror device

    NASA Astrophysics Data System (ADS)

    Goodwin, Scott H.; Dausch, David E.; Solomon, Steven L.; Lamvik, Michael K.

    2005-05-01

    An electrostatic MEMS actuator is described for use as an analog micromirror device (AMD) for high-performance, broadband, hardware-in-the-loop (HWIL) scene generation. Current state-of-the-art technology is based on resistively heated pixel arrays. As these arrays are driven to the higher scene temperatures required by missile defense scenarios, the power required to drive the large-format resistive arrays will ultimately become prohibitive. Existing digital micromirror devices (DMDs) are, in principle, capable of generating the required scene irradiances, but suffer from limited dynamic range, limited resolution, and flicker effects. An AMD would be free of these limitations, and so represents a viable alternative for high-performance UV/VIS/IR scene generation. An electrostatic flexible-film actuator technology, developed for use as "artificial eyelid" shutters that protect focal plane sensors against damaging radiation, is suitable as an AMD for analog control of projection irradiance. In shutter applications, the artificial eyelid actuator attained radii of curvature as low as 25 μm and operated at high voltage (>200 V). Recent testing suggests that these devices are capable of analog operation as reflective microcantilever mirrors appropriate for scene projector systems. In this case, the device would possess a larger radius and operate at lower voltages (20-50 V). Additionally, frame rates greater than 5 kHz have been measured in continuous operation. The paper describes the artificial eyelid technology, preliminary measurements of analog test pixels, and design aspects related to application in scene projection systems. We believe this technology will enable AMD projectors with at least 512×512 spatial resolution, non-temporally-modulated output, and pixel response times of <1.25 ms.

  17. Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing

    DTIC Science & Technology

    2014-06-01

    At a price of nearly one tenth that of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.

  18. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around the full 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax along a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  19. Optical system for object detection and delineation in space

    NASA Astrophysics Data System (ADS)

    Handelman, Amir; Shwartz, Shoam; Donitza, Liad; Chaplanov, Loran

    2018-01-01

    Object recognition and delineation is an important task in many environments, such as crime scenes and operating rooms. Marking evidence or surgical tools and attracting the attention of the surrounding staff to the marked objects can affect people's lives. We present an optical system comprising a camera, computer, and small laser projector that can detect and delineate objects in the environment. To prove the optical system's concept, we show that it can operate in a hypothetical crime scene in which a pistol is present and automatically recognize and segment it by various computer-vision algorithms. Based on such segmentation, the laser projector illuminates the actual boundaries of the pistol and thus allows the persons in the scene to comfortably locate and measure the pistol without holding any intermediary device, such as an augmented reality handheld device, glasses, or screens. Using additional optical devices, such as a diffraction grating and a cylinder lens, the pistol size can be estimated. The exact location of the pistol in space remains marked, even after its removal. Our optical system can be fixed or dynamically moved, making it suitable for various applications that require marking of objects in space.

  20. Preprocessing of region of interest localization based on local surface curvature analysis for three-dimensional reconstruction with multiresolution

    NASA Astrophysics Data System (ADS)

    Li, Wanjing; Schütze, Rainer; Böhler, Martin; Boochs, Frank; Marzani, Franck S.; Voisin, Yvon

    2009-06-01

    We present an approach that integrates a preprocessing step of region of interest (ROI) localization into 3-D scanners (laser or stereoscopic). The objective is to make the 3-D scanner intelligent enough to rapidly localize, during the preprocessing phase, the regions of the scene with high surface curvature, so that precise scanning is done only in these regions instead of over the whole scene. In this way, scanning time can be greatly reduced, and the results contain only pertinent data. To test its feasibility and efficiency, we simulated the preprocessing process on an active stereoscopic system composed of two cameras and a video projector. The ROI localization is done iteratively: first, the video projector projects a regular point pattern into the scene, and then the pattern is modified iteratively according to the local surface curvature at each reconstructed 3-D point. Finally, the last pattern is used to determine the ROI. Our experiments showed that with this approach, the system is capable of localizing all types of objects, including small objects with little depth.
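    As a minimal sketch of the curvature-based ROI idea: the paper refines a projected point pattern iteratively, but the selection criterion can be illustrated by thresholding a discrete Laplacian of a depth grid (function name and threshold are illustrative assumptions).

    ```python
    import numpy as np

    def high_curvature_mask(depth, thresh=0.05):
        """Flag grid points where the surface bends sharply, using the
        magnitude of the discrete 5-point Laplacian of a depth map as a
        cheap curvature proxy. Planar regions give zero Laplacian and are
        excluded; border points are left unflagged."""
        d = np.asarray(depth, dtype=float)
        lap = np.zeros_like(d)
        lap[1:-1, 1:-1] = (d[:-2, 1:-1] + d[2:, 1:-1]
                           + d[1:-1, :-2] + d[1:-1, 2:]
                           - 4.0 * d[1:-1, 1:-1])
        return np.abs(lap) > thresh
    ```

    In the iterative scheme, points where the mask is set would receive a denser projected pattern on the next pass.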

  1. Augmented reality 3D display using head-mounted projectors and transparent retro-reflective screen

    NASA Astrophysics Data System (ADS)

    Soomro, Shoaib R.; Urey, Hakan

    2017-02-01

    A 3D augmented reality display is proposed that can provide glasses-free stereo parallax using a highly transparent projection screen. The proposed display is based on a transparent retro-reflective screen and a pair of laser pico projectors placed close to the viewer's head. The retro-reflective screen directs incident light back towards its source with little scattering, so that each of the viewer's eyes perceives only the content projected by the associated projector. Each projector displays one of the two components (left or right channel) of the stereo content. The retro-reflective nature of the screen provides high brightness compared to regular diffusing screens. The partially patterned retro-reflective material on a clear substrate introduces optical transparency and lets the viewer see the real-world scene on the other side of the screen. The working principle and design of the proposed see-through 3D display are presented. A tabletop prototype consisting of an in-house fabricated 60×40 cm² see-through retro-reflective screen and a pair of 30-lumen pico projectors with custom 3D-printed housings is demonstrated. Geometric calibration between the projectors and optimal viewing conditions (eye-box size, eye-to-projector distance) are discussed. The display performance is evaluated by measuring the brightness and crosstalk for each eye. The screen provides high brightness (up to 300 cd/m² per eye) using 30-lumen mobile projectors while maintaining 75% screen transparency. The crosstalk between left and right views is measured as <10% at the optimum distance of 125-175 cm, which is within the acceptable range.

  2. Programmable spectral engine design of hyperspectral image projectors based on digital micro-mirror device (DMD)

    NASA Astrophysics Data System (ADS)

    Wang, Xicheng; Gao, Jiaobo; Wu, Jianghui; Li, Jianjun; Cheng, Hongliang

    2017-02-01

    Recently, hyperspectral image projectors (HIP) have been developed in the field of remote sensing. Because of its advanced capabilities for system-level validation, target detection, and hyperspectral image calibration, HIP has great development potential in military, medical, commercial, and other applications. HIP is based on digital micro-mirror device (DMD) projection technology and is capable of projecting arbitrary programmable spectra (controlled by a PC) into each pixel of the IUT (instrument under test), such that the projected image can simulate realistic scenes that a hyperspectral imager would measure during its use, enabling system-level performance testing and validation. In this paper, we built a visible hyperspectral image projector, also called a visible target simulator, with two DMDs: the first DMD produces selected monochromatic light over the wavelength range of 410 to 720 nm, which is relayed to the second DMD. A computer loads an image of a realistic scene onto the second DMD, so that the target and background can be projected by the second DMD in the selected monochromatic light. The target condition can thus be simulated, and the experiment can be controlled and repeated in the lab, allowing the detector instrument to be tested in the laboratory. Here we focus on the spectral-engine design, including the optical system, the DMD-programmable spectra, and the spectral resolution of the selected spectrum. The details are presented.

  3. Nonnegative Matrix Factorization for Efficient Hyperspectral Image Projection

    NASA Technical Reports Server (NTRS)

    Iacchetta, Alexander S.; Fienup, James R.; Leisawitz, David T.; Bolcar, Matthew R.

    2015-01-01

    Hyperspectral imaging for remote sensing has prompted development of hyperspectral image projectors that can be used to characterize hyperspectral imaging cameras and techniques in the lab. One such emerging astronomical hyperspectral imaging technique is wide-field double-Fourier interferometry. NASA's current, state-of-the-art, Wide-field Imaging Interferometry Testbed (WIIT) uses a Calibrated Hyperspectral Image Projector (CHIP) to generate test scenes and provide a more complete understanding of wide-field double-Fourier interferometry. Given enough time, the CHIP is capable of projecting scenes with astronomically realistic spatial and spectral complexity. However, this would require a very lengthy data collection process. For accurate but time-efficient projection of complicated hyperspectral images with the CHIP, the field must be decomposed both spectrally and spatially in a way that provides a favorable trade-off between accurately projecting the hyperspectral image and the time required for data collection. We apply nonnegative matrix factorization (NMF) to decompose hyperspectral astronomical datacubes into eigenspectra and eigenimages that allow time-efficient projection with the CHIP. Included is a brief analysis of NMF parameters that affect accuracy, including the number of eigenspectra and eigenimages used to approximate the hyperspectral image to be projected. For the chosen field, the normalized mean squared synthesis error is under 0.01 with just 8 eigenspectra. NMF of hyperspectral astronomical fields better utilizes the CHIP's capabilities, providing time-efficient and accurate representations of astronomical scenes to be imaged with the WIIT.
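    The decomposition described above can be sketched with classical Lee-Seung multiplicative updates — a minimal NumPy illustration of NMF applied to a datacube, not the actual WIIT/CHIP pipeline. Function and variable names are assumptions.

    ```python
    import numpy as np

    def nmf_decompose(cube, rank=8, iters=500, seed=0):
        """Factor a hyperspectral cube of shape (bands, H, W) into `rank`
        nonnegative eigenspectra (bands, rank) and eigenimages (rank, H, W)
        using Lee-Seung multiplicative updates. Returns the factors and the
        normalized mean squared synthesis error."""
        bands, H, W = cube.shape
        V = cube.reshape(bands, -1)                  # (bands, pixels)
        rng = np.random.default_rng(seed)
        Wm = rng.random((bands, rank)) + 1e-3        # eigenspectra
        Hm = rng.random((rank, V.shape[1])) + 1e-3   # eigenimage weights
        eps = 1e-12
        for _ in range(iters):
            # Multiplicative updates preserve nonnegativity by construction.
            Hm *= (Wm.T @ V) / (Wm.T @ Wm @ Hm + eps)
            Wm *= (V @ Hm.T) / (Wm @ Hm @ Hm.T + eps)
        err = np.linalg.norm(V - Wm @ Hm) ** 2 / (np.linalg.norm(V) ** 2 + eps)
        return Wm, Hm.reshape(rank, H, W), err
    ```

    Each eigenimage then tells the projector how much of the corresponding eigenspectrum to project at each spatial pixel, so projection time scales with `rank` rather than with the number of spectral bands.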

  4. Can IR scene projectors reduce total system cost?

    NASA Astrophysics Data System (ADS)

    Ginn, Robert; Solomon, Steven

    2006-05-01

    There is an incredible amount of system engineering involved in turning the typical infrared system needs of probability of detection, probability of identification, and probability of false alarm into focal plane array (FPA) requirements of noise equivalent irradiance (NEI), modulation transfer function (MTF), fixed pattern noise (FPN), and defective pixels. Unfortunately, there are no analytic solutions to this problem, so many approximations and plenty of "seat of the pants" engineering are employed. This leads to conservative specifications, which needlessly drive up system costs by increasing system engineering costs, reducing FPA yields, increasing test costs, increasing rework, and prompting never-ending renegotiation of requirements in an effort to rein in costs. These issues do not include the added complexity, for the FPA factory manager, of trying to meet varied and changing requirements for similar products because different customers have made different approximations and flowed down different specifications. Scene generation technology may well be mature and cost-effective enough to generate considerable overall savings for FPA-based systems. We will compare the costs and capabilities of various existing scene generation systems and estimate the potential savings if implemented at several locations in the IR system fabrication cycle. The costs of implementing this new testing methodology will be compared to the probable savings in systems engineering, test, rework, yield improvement, and other areas. The diverse requirements and techniques required for testing missile warning systems, missile seekers, and FLIRs will be defined. Last, we will discuss both the hardware and software requirements necessary to meet the new test paradigm and discuss additional cost improvements related to the incorporation of these technologies.

  5. Toward the light field display: autostereoscopic rendering via a cluster of projectors.

    PubMed

    Yang, Ruigang; Huang, Xinyu; Li, Sifang; Jaynes, Christopher

    2008-01-01

    Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a projection screen augmented with microlenses to simulate a light field for a given three-dimensional scene. Physical objects emit or reflect light in all directions to create a light field that can be approximated by the light field display. The display can simultaneously provide many viewers from different viewpoints a stereoscopic effect without head tracking or special viewing glasses. This work focuses on two important technical problems related to the light field display: calibration and rendering. We present a solution to automatically calibrate the light field display using a camera and introduce two efficient algorithms to render the special multi-view images by exploiting their spatial coherence. The effectiveness of our approach is demonstrated with a four-projector prototype that can display dynamic imagery with full parallax.

  6. Study on general design of dual-DMD based infrared two-band scene simulation system

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

    Mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation systems are test equipment for infrared two-band imaging seekers. Such a system must not only cover the working wavebands but also satisfy the essential requirement that its infrared radiation characteristics correspond to the real scene. Previous single digital-micromirror-device (DMD) based infrared scene simulation systems did not take into account the huge difference between target and background radiation and could not modulate the two wavebands separately. Consequently, a single-DMD system cannot accurately reproduce the thermal scene model built by the host computer and is of limited practical use. To solve this problem, we design a dual-DMD, dual-channel, common-aperture, compact infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed. An equation for the signal-to-noise ratio of the infrared detector in the seeker is derived, guiding the overall system design. The general design scheme of the system is given, including creation of the infrared scene model, overall control, opto-mechanical structure design, and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system, and, based on the working principle and overall design, we summarize the key techniques involved.

  7. Research on the generation of the background with sea and sky in infrared scene

    NASA Astrophysics Data System (ADS)

    Dong, Yan-zhi; Han, Yan-li; Lou, Shu-li

    2008-03-01

    It is important for scene generation to preserve the texture of infrared images when simulating anti-ship infrared imaging guidance. We studied the fractal method and applied it to infrared scene generation. We adopted the horizontal-vertical (HV) partition method to encode the original image. Based on the properties of infrared images with sea-sky backgrounds, we used a Local Iterated Function System (LIFS) to reduce computational complexity and increase the processing rate. Representative results are presented. They show that the fractal method preserves the texture of the infrared image well and can be widely used in infrared scene generation.
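
As a flavor of how fractal self-similarity produces natural-looking texture, the sketch below generates a 1-D profile by recursive midpoint displacement. This is a generic fractal synthesis method, not the paper's HV-partition/LIFS coding, and all parameters are illustrative.

```python
import random

def midpoint_displacement(n_levels, roughness=0.6, seed=0):
    """Generate a 1-D fractal profile by recursive midpoint displacement."""
    rng = random.Random(seed)
    profile = [0.0, 0.0]           # endpoints of the initial segment
    amplitude = 1.0
    for _ in range(n_levels):
        refined = []
        for a, b in zip(profile, profile[1:]):
            mid = 0.5 * (a + b) + rng.uniform(-amplitude, amplitude)
            refined.extend([a, mid])
        refined.append(profile[-1])
        profile = refined
        amplitude *= roughness     # finer scales get smaller perturbations
    return profile

row = midpoint_displacement(8)
print(len(row))                    # 2**8 + 1 = 257 samples
```

The `roughness` factor controls how quickly perturbations shrink at finer scales, which is what gives the profile its self-similar, texture-like character.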

  8. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally treat scene objects as isolated entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects, as well as radiative interactions between them. A method based on a radiosity model has been developed to describe these complex effects and enable an accurate simulation of the radiance distribution of city scenes. First, the physical processes affecting the IR characteristics of city scenes are described. Second, heat balance equations are formed by combining the atmospheric conditions, shadow maps and the scene geometry. Finally, a finite difference method is used to calculate the kinetic temperature of each object surface. A radiosity model is introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of the objects in the infrared range, we obtain the IR characteristics of the scene. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes, effectively reproducing the infrared shadow effects and the radiative interactions between objects.
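
The kinetic-temperature step can be sketched as a lumped surface energy balance marched forward with an explicit finite-difference scheme. All coefficients below are illustrative placeholders, not values from the paper.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(t_end, dt=1.0, T0=293.0,
                        absorptivity=0.7, insolation=600.0,
                        emissivity=0.9, T_sky=260.0,
                        h_conv=10.0, T_air=293.0,
                        heat_capacity=2.0e5):
    """Explicit Euler march of a lumped surface energy balance:

        C dT/dt = a*E_sun - eps*sigma*(T^4 - T_sky^4) - h*(T - T_air)
    """
    T = T0
    for _ in range(int(t_end / dt)):
        net = (absorptivity * insolation
               - emissivity * SIGMA * (T**4 - T_sky**4)
               - h_conv * (T - T_air))
        T += dt * net / heat_capacity   # W m^-2 over areal heat capacity
    return T

# One hour of sunlit warming from 293 K
print(round(surface_temperature(3600.0), 2))
```

A full simulation would add conduction into the substrate and the radiosity coupling between facets; this sketch only shows the time-marching structure.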

  9. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    NASA Astrophysics Data System (ADS)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to calibrate the extrinsic and intrinsic parameters of the camera(s) using methods originally developed for photogrammetry. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is established. These recalibration steps are usually time consuming: they involve measuring calibrated patterns on planes before measurement of the actual object can resume after a camera or projector has been moved, and hence they do not facilitate fast 3D measurement when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising a rendered model of the scene and object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
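
The render-and-compare idea can be sketched with a toy in which the "renderer" only projects known 3D points through a pinhole model, and Gauss-Newton refines the pose until the rendered view matches the observed one. The geometry, focal length and pose parametrisation are invented for illustration; the paper optimises against a full rendered image of the scene.

```python
import numpy as np

F = 800.0  # assumed focal length in pixels

def project(points, pose):
    """Pinhole projection of 3D points for pose = (tx, ty, tz, yaw)."""
    tx, ty, tz, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    cam = points @ R.T + np.array([tx, ty, tz])
    return F * cam[:, :2] / cam[:, 2:3]

rng = np.random.default_rng(1)
world = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
true_pose = np.array([0.10, -0.05, 0.20, 0.03])
observed = project(world, true_pose)       # the "camera view"

def refine(pose, iters=15, eps=1e-6):
    """Gauss-Newton on the reprojection residual with a numeric Jacobian."""
    pose = pose.astype(float).copy()
    for _ in range(iters):
        r = (project(world, pose) - observed).ravel()
        J = np.empty((r.size, 4))
        for k in range(4):
            p = pose.copy()
            p[k] += eps
            J[:, k] = ((project(world, p) - observed).ravel() - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose += step
    return pose

print(np.round(refine(np.zeros(4)), 3))    # recovers approximately true_pose
```

In the real pipeline the residual is photometric (rendered image versus camera image) and the a priori pose from on-board sensors or deep learning provides the starting point for this local refinement.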

  10. Generative technique for dynamic infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Cao, Zhiguo; Zhang, Tianxu

    2001-09-01

    A generative technique for dynamic infrared image sequences is discussed in this paper. Because an infrared sensor differs from a CCD camera in its imaging mechanism, it forms an image by receiving the infrared radiation of the scene (including target and background). Infrared imaging is strongly affected by atmospheric radiation, environmental radiation and atmospheric attenuation along the transmission path. Therefore, this paper first analyzes the influence of these radiation sources on imaging and provides the corresponding radiation calculation formulas, treating passive and active scenes separately. The calculation methods for the passive scene are then given, and the roles of the scene model, the atmospheric transmission model and the material physical attribute databases are explained. Second, based on the infrared imaging model, the design concept, implementation approach and software framework of the infrared image sequence simulation software on an SGI workstation are introduced. Following this approach, the third part of the paper presents an example of simulated infrared image sequences, with sea and sky as background, a warship as target and an aircraft as the eye point. Finally, the simulation is evaluated comprehensively and an improvement scheme is presented.
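
The radiance quantities such calculations rest on come from integrating Planck's law over the sensor band. A minimal numerical sketch follows; the band limits and step count are illustrative.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wl, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1, at wavelength wl (m)."""
    return (2 * H * C**2 / wl**5) / math.expm1(H * C / (wl * KB * T))

def band_radiance(T, wl_lo, wl_hi, n=2000):
    """Trapezoidal integration of Planck's law over [wl_lo, wl_hi]."""
    dw = (wl_hi - wl_lo) / n
    total = 0.5 * (planck(wl_lo, T) + planck(wl_hi, T))
    for i in range(1, n):
        total += planck(wl_lo + i * dw, T)
    return total * dw

# MWIR (3-5 um) in-band radiance of a 300 K blackbody,
# on the order of 2 W m^-2 sr^-1
print(round(band_radiance(300.0, 3e-6, 5e-6), 3))
```

Multiplying such in-band radiances by a band-averaged atmospheric transmittance (e.g. from a propagation code) gives the at-sensor values the simulation needs.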

  11. Fast-response IR spatial light modulators with a polymer network liquid crystal

    NASA Astrophysics Data System (ADS)

    Peng, Fenglin; Chen, Haiwei; Tripathi, Suvagata; Twieg, Robert J.; Wu, Shin-Tson

    2015-03-01

    Liquid crystals (LCs) have widespread applications in amplitude modulation (e.g. flat panel displays) and phase modulation (e.g. beam steering). For phase modulation, a full 2π phase range is required. To extend electro-optic applications into the infrared region (MWIR and LWIR), several key technical challenges have to be overcome: (1) low absorption loss, (2) high birefringence, (3) low operation voltage, and (4) fast response time. After three decades of extensive development, an increasing number of IR devices adopting LC technology have been demonstrated, such as liquid crystal waveguides, laser beam steering at 1.55 μm and 10.6 μm, spatial light modulators in the MWIR (3-5 μm) band, and dynamic scene projectors for infrared seekers in the LWIR (8-12 μm) band. However, several fundamental molecular vibration bands and overtones exist in the MWIR and LWIR regions, which contribute to a high absorption coefficient and hinder widespread application. The inherent absorption loss is therefore a major concern for IR devices. To suppress IR absorption, several approaches have been investigated: (1) employing a thin cell gap by choosing a high-birefringence liquid crystal mixture; (2) shifting the absorption bands outside the spectral region of interest by deuteration, fluorination and chlorination; and (3) reducing the overlapping vibration bands by using shorter alkyl chain compounds. In this paper, we report some chlorinated LC compounds and mixtures with a low absorption loss in the near-infrared and MWIR regions. To achieve a fast response time, we have demonstrated a polymer network liquid crystal with a 2π phase change in the MWIR and a response time of less than 5 ms.
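
The cell-gap trade-off behind points (1) and (4) can be sketched directly: a transmissive cell needs a thickness of roughly d = λ/Δn for a full 2π phase range, and LC response time scales roughly as d². The material values below are illustrative, not from the paper.

```python
def cell_gap_um(wavelength_um, delta_n):
    """Thinnest transmissive cell giving a full 2*pi phase range."""
    return wavelength_um / delta_n

def relative_response(d_um, d_ref_um):
    """Response time ratio under the tau ~ d^2 scaling."""
    return (d_um / d_ref_um) ** 2

# MWIR at 4 um: an ordinary mixture vs a high-birefringence mixture
d_low = cell_gap_um(4.0, 0.2)    # ~20 um cell
d_high = cell_gap_um(4.0, 0.4)   # ~10 um cell
print(d_low, d_high, relative_response(d_high, d_low))
# halving the gap cuts the response time by roughly a factor of four
```

This is why high birefringence and polymer network stabilisation together are the route to sub-5 ms MWIR response in the paper.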

  12. Projection type transparent 3D display using active screen

    NASA Astrophysics Data System (ADS)

    Kamoshita, Hiroki; Yendo, Tomohiro

    2015-05-01

    Many devices for viewing 3D images, such as movie theater systems and televisions, have been developed, so 3D video is now a widely familiar technology. Displays that present 3D images include eyewear-based, naked-eye and HMD types, used in different applications and locations, but transparent 3D displays have not been widely studied. If a large transparent 3D display were realized, it would be useful for overlaying 3D images on real scenes in applications such as road signs, shop windows and conference room screens. A previous study proposed producing a transparent 3D display using a special transparent screen and a large number of projectors. However, many projectors are required for smooth motion parallax. In this paper, we propose a display with transparency and a large display area that time-multiplexes projection images from one or a small number of projectors onto an active screen. The active screen is composed of a number of vertically long, small rotating mirrors. Stereoscopic viewing is realized by changing the projected image in synchronism with the scanning of the beam. The display is also transparent, because the viewer can see through it when the mirrors are perpendicular to the viewer. We confirmed the validity of the proposed method by simulation.

  13. Real-time generation of infrared ocean scene based on GPU

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu

    2007-12-01

    Infrared (IR) image synthesis for ocean scenes has become increasingly important, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways in which water interacts with the environment, and the detailed calculation of ocean temperature has rarely been considered by previous investigators. With the advance of programmable graphics hardware, many algorithms previously limited to offline processing have become feasible for real-time use. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest programmable features of graphics processing units (GPUs). It differs from previous work in three aspects: adaptive GPU-based ocean surface tessellation, a detailed thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some resulting infrared images are shown, which are in good accordance with real images.

  14. NMR and NQR parameters of ethanol crystal

    NASA Astrophysics Data System (ADS)

    Milinković, M.; Bilalbegović, G.

    2012-04-01

    Electric field gradients and chemical shielding tensors of the stable monoclinic crystal phase of ethanol are computed. The projector-augmented wave (PAW) and gauge-including projector-augmented wave (GIPAW) models in the periodic plane-wave density functional theory are used. The crystal data from X-ray measurements, as well as the structures where either all atomic, or only hydrogen atom positions are optimized in the density functional theory are analyzed. These structural models are also studied by including the semi-empirical van der Waals correction to the density functional theory. Infrared spectra of these five crystal models are calculated.

  15. Tachistoscopic exposure and masking of real three-dimensional scenes

    PubMed Central

    Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.

    2010-01-01

    Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129

  16. Tachistoscopic illumination and masking of real scenes.

    PubMed

    Chichka, David; Philbeck, John W; Gajewski, Daniel A

    2015-03-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure to time scales as low as a few milliseconds.

  17. Enhanced LWIR NUC using an uncooled microbolometer camera

    NASA Astrophysics Data System (ADS)

    LaVeigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian

    2011-06-01

    Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector, and the best NUC is performed in the band of interest for the sensor being tested. While cooled, large-format MWIR cameras are readily available and have been successfully used to perform NUC, similar cooled, large-format LWIR cameras are not as common and are prohibitively expensive. Large-format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP, some of which were discussed in a previous paper. Here we report results from a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response and thermal resolution were the main problems; these have been solved by implementing several compensation strategies as well as hardware to stabilize the camera. In addition, other processes have been developed to allow iterative improvement and to support changing the post-NUC lookup table (LUT) without re-collecting the pre-NUC data with the new LUT in use.
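
For context, the basic two-point (gain/offset) correction that such a camera-based NUC process refines can be sketched on synthetic data: flat-field frames at two known radiance levels yield per-pixel gain and offset coefficients. The detector model and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)
gain = rng.uniform(0.8, 1.2, shape)     # per-pixel responsivity spread
bias = rng.uniform(-5.0, 5.0, shape)    # per-pixel offset

def sensor(radiance):
    """Idealised non-uniform detector: per-pixel linear response."""
    return gain * radiance + bias

# Flat-field calibration frames at two known radiance levels
L_cold, L_hot = 10.0, 100.0
F_cold, F_hot = sensor(L_cold), sensor(L_hot)

# Per-pixel correction coefficients from the two calibration points
g = (L_hot - L_cold) / (F_hot - F_cold)
o = L_cold - g * F_cold

corrected = g * sensor(55.0) + o
print(np.allclose(corrected, 55.0))     # True: uniformity restored
```

Real NUC on an IRSP must then contend with exactly the effects the paper describes: camera drift between the calibration points, finite thermal resolution, and response that is not perfectly linear.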

  18. Graphics processing unit (GPU) real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  19. A 360-degree floating 3D display based on light field regeneration.

    PubMed

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-06

    Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from all directions over 360 degrees with correct occlusion effects. A high-frame-rate color projector and a flat light field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all the surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are employed to evaluate display performance. A prototype was built, and vivid real 3D color animated images have been presented. The experimental results verify the feasibility of this method.

  20. Utilization of DIRSIG in support of real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Brown, Scott D.

    2000-07-01

    Real-time infrared scene generation for hardware-in-the-loop testing has traditionally been a difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The model is an integrated collection of independent first-principles sub-models, each of which works in conjunction with the others to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled) and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background or solar path. This detailed environmental modeling greatly increases the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures.
All of these features represent significant improvements over the current state of the art in real-time IR scene generation.

  1. Robust infrared targets tracking with covariance matrix representation

    NASA Astrophysics Data System (ADS)

    Cheng, Jian

    2009-07-01

    Robust infrared target tracking is an important and challenging research topic in many military and security applications, such as infrared imaging guidance, infrared reconnaissance and scene surveillance. To effectively tackle the nonlinear and non-Gaussian state estimation problem, particle filtering is introduced to construct the theoretical framework for infrared target tracking. Within this framework, the observation probabilistic model is one of the main factors determining tracking performance. To improve tracking performance, covariance matrices are introduced to represent infrared targets by their multiple features. The observation probabilistic model is constructed by computing the distance between the reference target's covariance matrix and those of the target samples. Because the covariance matrix provides a natural tool for integrating multiple features and is scale and illumination independent, target representation with covariance matrices offers strong discriminating ability and robustness. Two experiments demonstrate that the proposed method is effective and robust in different infrared target tracking settings, such as sensor ego-motion scenes and sea-clutter scenes.
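
A minimal sketch of a region covariance descriptor and a generalized-eigenvalue distance between two such descriptors is given below. The feature set (pixel position, intensity, gradients), the regularisation and the test patches are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def covariance_descriptor(patch):
    """5x5 covariance of (x, y, intensity, grad_x, grad_y) features."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(),
                      patch.astype(float).ravel(),
                      gx.ravel(), gy.ravel()])
    return np.cov(feats) + 1e-6 * np.eye(5)   # regularised for stability

def covariance_distance(A, B):
    """sqrt(sum_i ln^2 lambda_i) over generalized eigenvalues of (A, B)."""
    lam = np.linalg.eigvals(np.linalg.solve(B, A)).real
    return np.sqrt(np.sum(np.log(lam) ** 2))

rng = np.random.default_rng(2)
ref = rng.uniform(0.0, 255.0, (16, 16))
noisy = ref + rng.normal(0.0, 5.0, ref.shape)          # same texture
ramp = np.tile(np.linspace(0.0, 255.0, 16), (16, 1))   # different texture

d_ref = covariance_descriptor(ref)
d_same = covariance_distance(covariance_descriptor(noisy), d_ref)
d_diff = covariance_distance(covariance_descriptor(ramp), d_ref)
print(d_same < d_diff)   # similar texture yields the smaller distance
```

In a particle filter, such distances between the reference descriptor and each particle's descriptor would be converted into observation likelihoods, e.g. via an exponential weighting.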

  2. Distributed rendering for multiview parallax displays

    NASA Astrophysics Data System (ADS)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.

  3. Work step indication with grid-pattern projection for demented senior people.

    PubMed

    Uranishi, Yuki; Yamamoto, Goshiro; Asghar, Zeeshan; Pulli, Petri; Kato, Hirokazu; Oshiro, Osamu

    2013-01-01

    This paper proposes a work step indication method for supporting daily work with grid-pattern projection. To support an independent life for senior people with dementia, instructions should be visually easy to understand and uncomplicated. The proposed method uses a range image sensor and a camera in addition to a projector. The 3D geometry of the target scene is measured by the range image sensor, and the grid pattern is projected directly onto the scene. Directly projected work steps are easier to associate with the target objects around the assisted person, and the grid pattern provides the spatial indication. A prototype has been implemented and demonstrates that the proposed grid-pattern projection shows work steps clearly.

  4. Sea-Based Infrared Scene Interpretation by Background Type Classification and Coastal Region Detection for Small Target Detection

    PubMed Central

    Kim, Sungho

    2015-01-01

    Sea-based infrared search and track (IRST) is important for homeland security by detecting missiles and asymmetric boats. This paper proposes a novel scheme to interpret various infrared scenes by classifying the infrared background types and detecting the coastal regions in omni-directional images. The background type or region-selective small infrared target detector should be deployed to maximize the detection rate and to minimize the number of false alarms. A spatial filter-based small target detector is suitable for identifying stationary incoming targets in remote sea areas with sky only. Many false detections can occur if there is an image sector containing a coastal region, due to ground clutter and the difficulty in finding true targets using the same spatial filter-based detector. A temporal filter-based detector was used to handle these problems. Therefore, the scene type and coastal region information is critical to the success of IRST in real-world applications. In this paper, the infrared scene type was determined using the relationships between the sensor line-of-sight (LOS) and a horizontal line in an image. The proposed coastal region detector can be activated if the background type of the probing sector is determined to be a coastal region. Coastal regions can be detected by fusing the region map and curve map. The experimental results on real infrared images highlight the feasibility of the proposed sea-based scene interpretation. In addition, the effects of the proposed scheme were analyzed further by applying region-adaptive small target detection. PMID:26404308
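
The relationship between the sensor line of sight (LOS) and the horizon row can be sketched with simple pinhole geometry: the camera pitch angle fixes where the horizon falls in the frame, and that in turn suggests the background type of the sector. The camera parameters and classification thresholds here are invented for illustration, not taken from the paper.

```python
import math

def horizon_row(pitch_deg, fov_v_deg=20.0, rows=480):
    """Image row of the horizon for a camera pitched pitch_deg above
    horizontal (0 = level, rows increase downward). None if outside."""
    half = fov_v_deg / 2.0
    if abs(pitch_deg) >= half:
        return None
    f = (rows / 2.0) / math.tan(math.radians(half))   # focal length, px
    return int(round(rows / 2.0 + f * math.tan(math.radians(pitch_deg))))

def sector_type(pitch_deg):
    """Crude background label for one probing sector from LOS pitch."""
    r = horizon_row(pitch_deg)
    if r is None:
        return "sky only" if pitch_deg > 0 else "sea only"
    return "sea-sky (horizon at row %d)" % r

print(sector_type(0.0), "|", sector_type(15.0))
```

A sector labelled "sea-sky" would then be probed further (region map plus curve map) to decide whether a coastal region is present.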

  5. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian

    2011-11-01

    The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. 
In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the systems are investigated in the context of their somewhat different application areas. The application of an infrared scene simulation system to the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.
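
The single-band core of the rendering equations these simulators share can be sketched as follows; the numbers are placeholders chosen only to exercise the formula.

```python
def apparent_radiance(emissivity, L_self, L_ambient, tau_path, L_path):
    """L_app = tau * (eps * L_self + (1 - eps) * L_ambient) + L_path

    L_self    : blackbody radiance at the surface temperature
    L_ambient : incident background radiance, reflected diffusely
    tau_path  : atmospheric transmittance along the sight line
    L_path    : radiance emitted/scattered into the path
    """
    surface = emissivity * L_self + (1.0 - emissivity) * L_ambient
    return tau_path * surface + L_path

# A grey surface seen through a partially transmitting atmosphere
print(apparent_radiance(0.9, 2.0, 0.5, 0.8, 0.3))
```

The systems reviewed differ mainly in how each term is computed: spectral sampling, BRDF models for the reflected term, and the atmospheric code supplying tau_path and L_path.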

  6. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for motion imagery. This presentation discusses several implementations of target projectors whose targets move, or appear to move, creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos are included in the presentation. Among the technical approaches are targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS four-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related motion imagery issue is evaluated by simulating a blinding flash, an impulse of broadband photons lasting less than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. The traditional approach of gimbal-mounting the camera, in combination with the moving target projector, is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts at velocity to MTF (resolution), SNR and minimum detectable signal. Several unique metrics are suggested for motion imagery, including Maximum Velocity Resolved (the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measure of tracking ability when the target is obscured in the images). These metrics are applicable across UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as to compare systems by presenting exactly the same scenes to the cameras in a repeatable way.

  7. Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Ballard, J. R., Jr.

    1999-01-01

    We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects. First, in computing the distribution of scene shaded and sunlit facets and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.
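
Comparing measured and predicted brightness temperatures requires inverting Planck's law at a wavelength: find the blackbody temperature that reproduces a given radiance. A dependency-free bisection sketch (constants rounded, LWIR wavelength chosen for illustration):

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wl**5) / math.expm1(H * C / (wl * KB * T))

def brightness_temperature(radiance, wl, lo=100.0, hi=2000.0):
    """Temperature whose blackbody radiance at wl equals `radiance`.
    Bisection works because planck() is monotone increasing in T."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if planck(wl, mid) < radiance:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

wl = 10e-6                               # LWIR, 10 um
T_bb = brightness_temperature(planck(wl, 300.0), wl)
print(round(T_bb, 3))                    # recovers 300.0 K
```

Differencing predicted and measured brightness temperatures pixel by pixel is what produces an RMS-error figure like the 2.1 deg C reported in the abstract.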

  8. Scene-based nonuniformity correction technique for infrared focal-plane arrays.

    PubMed

    Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong

    2009-04-20

    A scene-based nonuniformity correction algorithm is presented to compensate for gain and bias nonuniformity in infrared focal-plane array sensors. The algorithm has three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem in which both the scene values and the detector parameters are unavailable. Second, the estimated scene, along with the corresponding observed data from the detectors, is used to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm are its low computational complexity, its low storage requirements, and its ability to track temporal drift in the nonuniformity parameters. The performance of each module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm achieves a superior correction effect.
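
The second stage (line fitting) can be sketched in simplified form. Here the "estimated scene" is supplied directly, whereas the paper obtains it by interframe prediction; the detector parameters and noise level are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_pix = 50, 8
gain = rng.uniform(0.8, 1.2, n_pix)          # unknown per-pixel gains
bias = rng.uniform(-4.0, 4.0, n_pix)         # unknown per-pixel biases

scene = rng.uniform(20.0, 200.0, (n_frames, n_pix))   # true scene values
obs = gain * scene + bias + rng.normal(0.0, 0.1, scene.shape)

# Per-pixel least-squares line fit of obs = g*scene + b (normal equations)
s_mean, o_mean = scene.mean(0), obs.mean(0)
g_hat = ((scene - s_mean) * (obs - o_mean)).sum(0) / \
        ((scene - s_mean) ** 2).sum(0)
b_hat = o_mean - g_hat * s_mean

# The "very simple formula": invert the fitted line per pixel
corrected = (obs - b_hat) / g_hat
print(np.abs(corrected - scene).max() < 1.0)   # residual ~ noise level
```

Capturing temporal drift, as the paper does, amounts to repeating this fit recursively over a sliding window so that g_hat and b_hat follow slow changes in the detector parameters.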

  9. Omnidirectional-view three-dimensional display system based on cylindrical selective-diffusing screen.

    PubMed

    Xia, Xinxing; Zheng, Zhenrong; Liu, Xu; Li, Haifeng; Yan, Caijie

    2010-09-10

    We utilized a high-frame-rate projector, a rotating mirror, and a cylindrical selective-diffusing screen to present a novel three-dimensional (3D) omnidirectional-view display system without the need for any special viewing aids. The display principle and image size are analyzed, and the common display zone is proposed. The viewing zone for one observation place is also studied. The experimental results verify this method, and a vivid color 3D scene with occlusion and smooth parallax is also demonstrated with the system.

  10. Tachistoscopic illumination and masking of real scenes

    PubMed Central

    Chichka, David; Philbeck, John W.; Gajewski, Daniel A.

    2014-01-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally focused on the conceptual locations (e.g., next to the refrigerator) and the directional locations of objects in 2D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues may be manipulated using traditional methods. The system is inexpensive and robust, and its components are readily available in the marketplace. This paper describes the system and the timing characteristics of each component, and verifies the ability to control exposures on time scales as short as a few milliseconds. PMID:24519496

  11. Confocal retinal imaging using a digital light projector with a near infrared VCSEL source

    NASA Astrophysics Data System (ADS)

    Muller, Matthew S.; Elsner, Ann E.

    2018-02-01

    A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1" LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging.

  12. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    PubMed

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky-scene observations. Ripple RNU seriously degrades imaging quality, especially for small-target detection, and is difficult to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm that uses fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we implemented the architecture in a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which offers two advantages: (1) low resource consumption and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
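
    The core of a temporal high-pass NUC of this kind can be sketched as a per-pixel recursive low-pass filter whose update is gated by a scene-activity test. The gate below is a crude stand-in for the paper's fuzzy scene classifier, and the time constant, threshold, and scene statistics are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 200, 16, 16
fpn = 3.0 * rng.standard_normal((H, W))                      # fixed-pattern offsets
frames = 50.0 + fpn + 0.2 * rng.standard_normal((T, H, W))   # flat "sky" scene

M = 32.0       # time constant of the temporal high-pass filter (illustrative)
thresh = 5.0   # update gate, a stand-in for the fuzzy scene classifier
lp = frames[0].copy()            # per-pixel recursive low-pass state
out = np.empty_like(frames)
for t in range(T):
    # Gate the filter update on scene activity so details are not washed out.
    update = np.abs(frames[t] - lp) < thresh
    lp = np.where(update, lp + (frames[t] - lp) / M, lp)
    # High-pass output: remove the slowly varying per-pixel offset,
    # restore the frame mean so overall radiometry is preserved.
    out[t] = frames[t] - lp + lp.mean()
```

On a quasi-uniform sky sequence the residual nonuniformity of the output (spatial standard deviation) drops to near the temporal noise floor, while the gate keeps genuine scene structure out of the offset estimate.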

  13. Real time imaging and infrared background scene analysis using the Naval Postgraduate School infrared search and target designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Bernier, Jean D.

    1991-09-01

    The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software development in protected-mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer, where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing, through a JDR-PR10 prototype card, between the frame grabber and the host computer's AT bus enables each load of the frame grabber memory buffers to be effected under software control. The protected-mode assembly language program can refresh the display of a six-degree pseudo-color sector of the scanner rotation within the two-second period of the scanner. A study of the imaging properties of the NPS-IRSTD is presented, with preliminary work on image analysis and contrast enhancement of infrared background scenes.

  14. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It provides a highly realistic 3D experience and freedom of interaction: the displayed content can be changed and the density of viewing points freely adjusted according to the viewers' demands. Densely spaced viewing points provide smooth motion parallax and greater image depth without blurring. The basic principle of stereoscopic display is described first. Then, the design architecture, including hardware and software, is presented. The system consists of a frontal projection lenticular screen, an optimally designed projector array, and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are chosen according to viewing requirements such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The multi-channel image processors comprise six PCs: one serves as the main controller, while the other five client PCs process 30 channel signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, and distortion correction. Experimental results demonstrate the effectiveness of this controllable 3D display system.

  15. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns; then, by applying an inverse cosine transform to the spectra obtained from the single-pixel detector, a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
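
    The reconstruction step can be illustrated with an orthonormal 2D DCT: each single-pixel measurement is the inner product of the scene with one basis pattern, and the inverse transform recovers the image. This sketch uses signed patterns for brevity; in practice a projector would realize them as pairs of complementary non-negative patterns, and the scene and image size here are assumptions for the example.

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)
img = rng.uniform(0.0, 1.0, size=(N, N))      # unknown scene

# Orthonormal 1-D DCT-II basis matrix (rows indexed by frequency).
n = np.arange(N)
C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C *= np.sqrt(2.0 / N)
C[0] *= np.sqrt(0.5)

# Each measurement is the bucket-detector value for one structured pattern:
# the outer product of two DCT basis vectors.
spectrum = np.empty((N, N))
for u in range(N):
    for v in range(N):
        pattern = np.outer(C[u], C[v])
        spectrum[u, v] = np.sum(pattern * img)   # single-pixel reading

# Inverse 2-D DCT recovers the image from the measured spectrum.
recon = C.T @ spectrum @ C
```

Because the basis is orthonormal, the full set of N*N measurements recovers the scene exactly; sub-Nyquist operation follows from truncating the loop to the low-frequency coefficients.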

  16. Confocal Retinal Imaging Using a Digital Light Projector with a Near Infrared VCSEL Source

    PubMed Central

    Muller, Matthew S.; Elsner, Ann E.

    2018-01-01

    A custom near infrared VCSEL source has been implemented in a confocal non-mydriatic retinal camera, the Digital Light Ophthalmoscope (DLO). The use of near infrared light improves patient comfort, avoids pupil constriction, penetrates the deeper retina, and does not mask visual stimuli. The DLO performs confocal imaging by synchronizing a sequence of lines displayed with a digital micromirror device to the rolling shutter exposure of a 2D CMOS camera. Real-time software adjustments enable multiply scattered light imaging, which rapidly and cost-effectively emphasizes drusen and other scattering disruptions in the deeper retina. A separate 5.1″ LCD display provides customizable visible stimuli for vision experiments with simultaneous near infrared imaging. PMID:29899586

  17. Enhancement of Stereo Imagery by Artificial Texture Projection Generated Using a LIDAR

    NASA Astrophysics Data System (ADS)

    Veitch-Michaelis, Joshua; Muller, Jan-Peter; Walton, David; Storey, Jonathan; Foster, Michael; Crutchley, Benjamin

    2016-06-01

    Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogeneous texture, owing to ambiguous match costs. Stereo systems can be augmented with an additional light source that projects some form of unique texture onto surfaces in the scene; methods include structured light, laser projection through diffractive optical elements, data projectors, and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal-to-noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and additional opportunities for data fusion in unmatched regions. Using a LIDAR rather than a laser alone allows us to generate highly accurate ground-truth data sets by scanning the scene at high resolution, which is necessary for evaluating different pattern-projection schemes. Results from LIDAR-generated random dots are presented and compared to other texture-projection techniques. Finally, we investigate the use of image texture analysis to project texture intelligently where it is required, while exploiting the texture available in the ambient-light image.

  18. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-01-01

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky-scene observations. Ripple RNU seriously degrades imaging quality, especially for small-target detection, and is difficult to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm that uses fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we implemented the architecture in a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which offers two advantages: (1) low resource consumption and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320

  19. Enhancement of multispectral thermal infrared images - Decorrelation contrast stretching

    NASA Technical Reports Server (NTRS)

    Gillespie, Alan R.

    1992-01-01

    Decorrelation contrast stretching is an effective method for displaying information from multispectral thermal infrared (TIR) images. The technique involves transformation of the data to principal components ('decorrelation'), independent contrast 'stretching' of the new 'decorrelated' image bands, and retransformation of the stretched data back to the approximate original axes using the inverse of the principal-component rotation. The enhancement is robust in that the colors of the same scene components are similar in enhanced images of similar scenes, or of the same scene imaged at different times. Decorrelation contrast stretching is reviewed in the context of other enhancements applied to TIR images.
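
    The three steps (rotate to principal components, stretch each component independently, rotate back) can be sketched in a few lines of numpy. Equalizing all component variances, as done here purely for illustration, drives the inter-band correlations of the output to zero; the simulated band statistics are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated 3-band TIR image with highly correlated bands (pixels x bands).
base = rng.standard_normal((5000, 1))
bands = base @ np.array([[1.0, 0.9, 0.8]]) + 0.1 * rng.standard_normal((5000, 3))

mean = bands.mean(axis=0)
cov = np.cov(bands - mean, rowvar=False)
evals, evecs = np.linalg.eigh(cov)

# Forward rotation to principal components ("decorrelation") ...
pcs = (bands - mean) @ evecs
# ... independent contrast stretch of each component to a common variance ...
pcs *= 1.0 / np.sqrt(evals)
# ... and rotation back toward the original band axes.
stretched = pcs @ evecs.T + mean

corr = np.corrcoef(stretched, rowvar=False)
```

In practice each component is stretched to a chosen display range rather than exactly unit variance, so some correlation (and hence color balance) is retained; the fully equalized case makes the decorrelation effect easiest to verify.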

  20. Infrared imaging of the crime scene: possibilities and pitfalls.

    PubMed

    Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G

    2013-09-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. © 2013 American Academy of Forensic Sciences.

  1. Single Spatial-Mode Room-Temperature-Operated 3.0 to 3.4 micrometer Diode Lasers

    NASA Technical Reports Server (NTRS)

    Frez, Clifford F.; Soibel, Alexander; Belenky, Gregory; Shterengas, Leon; Kipshidze, Gela

    2010-01-01

    Compact, highly efficient 3.0 to 3.4 μm light emitters are in demand for spectroscopic analysis and identification of chemical substances (including methane and formaldehyde), infrared countermeasures technologies, and the development of advanced infrared scene projectors. This need can currently be addressed either by bulky solid-state light emitters with limited power-conversion efficiency or by cooled interband cascade (IC) semiconductor lasers. Researchers here have developed a breakthrough approach to the fabrication of diode mid-IR lasers that have several advantages over the IC lasers used for the Mars 2009 mission. The breakthrough is due to a novel design utilizing a strain-engineered quantum-well (QW) active region and quinternary barriers, and to optimization of the device material composition and growth conditions (growth temperatures and rates). However, in their present form, these GaSb-based laser diodes cannot be used directly as part of sensor systems: the device spectrum is too broad for spectroscopic analysis of gas species, and the operating currents and voltages are too high. In the current work, the emitters were fabricated as narrow-ridge waveguide index-guided lasers rather than the broad-stripe gain-guided multimode Fabry-Perot (FP) lasers used previously. These narrow-ridge waveguide mid-IR lasers exhibit much lower power consumption and can operate in a single spatial mode, which is necessary for demonstrating single-mode distributed feedback (DFB) devices for spectroscopic applications. These lasers will enable a new generation of compact, tunable diode laser spectrometers with lower power consumption, reduced complexity, and significantly reduced development costs, and can be used for the detection of HCN, C2H2, methane, and ethane.

  2. Efficient green lasers for high-resolution scanning micro-projector displays

    NASA Astrophysics Data System (ADS)

    Bhatia, Vikram; Bauco, Anthony S.; Oubei, Hassan M.; Loeber, David A. S.

    2010-02-01

    Laser-based projectors are gaining acceptance in the mobile-device market due to their low power consumption, superior image quality, and small size. The basic configuration of such micro-projectors is a miniature mirror that creates an image by raster-scanning collinear red, blue, and green laser beams that are individually modulated on a pixel-by-pixel basis. The image resolution of these displays can be limited by the modulation bandwidth of the laser sources, and the modulation speed of the green laser has been one of the key limitations in their development. We discuss how this limitation is fundamental to the architecture of many laser designs and then present a green-laser configuration that overcomes these difficulties. In this architecture, infrared light from a distributed Bragg reflector (DBR) laser diode is converted to green light in a waveguided second-harmonic-generation (SHG) crystal. Direct doubling in a single pass through the SHG crystal allows the device to operate at the large modulation bandwidth of the DBR laser. We demonstrate that the resulting product has a small footprint (<0.7 cc envelope volume), high efficiency (>9% electrical-to-optical conversion), and large modulation bandwidth (>100 MHz).

  3. Use of a compact range approach to evaluate rf and dual-mode missiles

    NASA Astrophysics Data System (ADS)

    Willis, Kenneth E.; Weiss, Yosef

    2000-07-01

    This paper describes a hardware-in-the-loop (HWIL) system developed for testing radio-frequency (RF), infrared (IR), and dual-mode missile seekers. The system consists of a unique hydraulic five-axis (three seeker axes plus two target axes) Flight Motion Table (FMT), an off-axis parabolic reflector, and the electronics required to generate the signals to the RF feeds. RF energy that simulates the target is fed into the reflector from three orthogonal feeds mounted on the inner target axis, in the focal-point area of the parabolic reflector. The parabolic reflector, together with the three RF feeds (the compact range), effectively produces a far-field image of the target. Both FMT target-axis motion and electronic control (deflection) of the RF beams modify the simulated line-of-sight target angles. Multiple targets, glint, multipath, ECM, and clutter can be introduced electronically. To evaluate dual-mode seekers, the center section of the parabolic reflector is replaced with an IR-transparent but RF-reflective section. An IR scene projector mounts to the FMT target axes, with its image focused on the intersection of the FMT seeker axes. The system eliminates the need for the large anechoic chamber and 'target wall' or target motion system used in conventional HWIL systems, reducing the acquisition and operating costs of the facility.

  4. The hands of the projectionist.

    PubMed

    Cartwright, Lisa

    2011-09-01

    This essay considers the work of projection and the hand of the projectionist as important components of the social space of the cinema as it came into being in the nineteenth century and the early decades of the twentieth. I bring Maurice Merleau-Ponty's concept of the body as an entity that applies itself to the world "like a hand to an instrument" into a discussion of the pre-cinematic projector as an instrument that can be interpreted as evidence of the experience of the projectionist's work, in the spirit of film theory and media archaeology, moving work on instrumentation in a different direction from the analysis of the black box in laboratory studies. Projection is described as a psychological as well as a mechanical process. It is suggested that we interpret the projector not simply in its activity as it projects films, but in its movement from site to site and in the workings of its operator's hand behind the scenes. This account suggests a different perspective on the cinematic turn of the nineteenth century, a concept typically approached through the study of the image, the look, the camera, and the screen.

  5. A Multi-Wavelength Thermal Infrared and Reflectance Scene Simulation Model

    NASA Technical Reports Server (NTRS)

    Ballard, J. R., Jr.; Smith, J. A.; Smith, David E. (Technical Monitor)

    2002-01-01

    Several theoretical calculations are presented, and our approach to simulating the overall composite-scene thermal infrared exitance and canopy bidirectional reflectance of a forest canopy is discussed. Calculations are performed for selected wavelength bands of the DOE Multispectral Thermal Imager, and comparisons with atmospherically corrected MTI imagery are underway. NASA EO-1 Hyperion observations are also available, and the favorable comparison of our reflective-model results with these data is reported elsewhere.

  6. Scene-based nonuniformity correction algorithm based on interframe registration.

    PubMed

    Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao

    2011-06-01

    In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. The method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images, so that any two detectors observing the same scene produce the same output value. In this way, accumulation of the registration error is avoided and NUC can be achieved. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and its ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied with infrared image sequences with simulated nonuniformity and with infrared imagery exhibiting real nonuniformity. It achieves significantly fast and reliable fixed-pattern noise reduction and provides an effective frame-by-frame adaptive estimation of each detector's gain and offset.
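
    The global translation estimate in the first step can be obtained by standard phase correlation. The sketch below demonstrates it on a synthetic cyclic shift (real adjacent frames are overlapping crops rather than exact rolls, so the peak is approximate in practice); frame size and shift are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
frame1 = rng.uniform(0.0, 1.0, (64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))   # same scene, shifted

# Phase correlation: normalize the cross-power spectrum to unit magnitude,
# then the inverse FFT peaks at the translation between the frames.
F1 = np.fft.fft2(frame1)
F2 = np.fft.fft2(frame2)
cross = F2 * np.conj(F1)
cross /= np.abs(cross) + 1e-12
peak = np.fft.ifft2(cross).real
dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
print(dy, dx)   # recovers the (3, 5) shift
```

Once the shift is known, the overlapping region gives pairs of detectors that observed the same scene point, which is exactly what the mean-square-error minimization in the NUC step exploits.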

  7. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
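
    The double-plateau modification replaces plain histogram equalization's raw counts with counts clipped between a lower and an upper threshold before the cumulative mapping: the upper plateau limits over-enhancement of large uniform backgrounds, while the lower plateau protects sparsely populated gray levels (small targets and details). A minimal sketch with fixed, illustrative thresholds (the paper's contribution is adapting these thresholds from the histogram's coefficient of variation):

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_up, levels=256):
    """Histogram equalization with counts clipped between two plateaus."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    clipped = np.clip(hist, 0, t_up)                 # upper plateau
    clipped[(hist > 0) & (clipped < t_low)] = t_low  # lower plateau
    cdf = np.cumsum(clipped).astype(float)
    lut = np.round((levels - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(5)
# Low-contrast IR-like frame: most pixels in a narrow dark band.
img = np.clip(rng.normal(40, 6, (64, 64)), 0, 255).astype(np.uint8)
out = double_plateau_equalize(img, t_low=2, t_up=200)
```

The mapping stretches the occupied gray levels over the full display range while the plateaus keep any single level from dominating the output.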

  8. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from video sequences captured by static imaging sensors. Background subtraction in remote-scene infrared (IR) video is important in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) is provided for each frame. A series of experiments was conducted to evaluate BS algorithms on the proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared, and appropriate evaluation metrics were employed to assess each algorithm's ability to handle the different BS challenges represented in the dataset. The results and conclusions provide valid references for developing new BS algorithms for remote-scene IR video sequences, and some of them are not limited to remote-scene or IR video but apply to background subtraction in general. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
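
    To give a sense of what such an evaluation involves, the sketch below runs one of the simplest BS baselines (a running-average background model with selective update) on a synthetic sequence and scores it with the F-measure against pixel-wise ground truth. It is an illustration only, not one of the paper's evaluated algorithms, and all scene parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
T, H, W = 60, 32, 32
bg = rng.uniform(0.3, 0.5, (H, W))
frames = np.repeat(bg[None], T, 0) + 0.01 * rng.standard_normal((T, H, W))
gt = np.zeros((T, H, W), bool)
for t in range(20, T):            # a small bright target moving across the scene
    y, x = 10, 5 + (t - 20) // 3
    frames[t, y:y + 3, x:x + 3] += 0.4
    gt[t, y:y + 3, x:x + 3] = True

# Running-average background model with foreground thresholding and
# selective update (foreground pixels do not pollute the model).
alpha, thresh = 0.05, 0.1
model = frames[0].copy()
tp = fp = fn = 0
for t in range(1, T):
    fg = np.abs(frames[t] - model) > thresh
    model = np.where(fg, model, (1 - alpha) * model + alpha * frames[t])
    tp += np.sum(fg & gt[t]); fp += np.sum(fg & ~gt[t]); fn += np.sum(~fg & gt[t])

f1 = 2 * tp / (2 * tp + fp + fn)   # pixel-wise F-measure over the sequence
```

Dataset evaluations of this kind aggregate the same per-pixel counts over all sequences and report F-measure (among other metrics) per challenge category.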

  9. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" displays a virtual lunar-surface environment in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere, according to the panorama orientation coordinates and camera parameters, to build the virtual scene. Because stars are visible from the Moon at any time, we render the Sun, planets, and stars on the sphere as the background, according to the time and the rover's location, based on the Hipparcos catalogue. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the system comprises four high-lumen projectors and a curved screen 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact and mark interesting objects, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the laboratory housing this system will be opened to the public, and stereo animations of lunar terrain based on Chang'E-1 and Chang'E-2 data will be shown there on the large screen. Based on lunar exploration data, we will create more immersive virtual Moon scenes and animations to help the public learn more about the Moon in the future.

  10. A projective surgical navigation system for cancer resection

    NASA Astrophysics Data System (ADS)

    Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald

    2016-03-01

    Near-infrared (NIR) fluorescence imaging can provide precise, real-time information about tumor location during cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we designed a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector, and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera under excitation illumination from the LED source. The images are projected back onto the surgical field by the mini projector. The imaging performance of this projective navigation system is characterized in a tumor-simulating phantom, and image-guided surgical resection is demonstrated in an ex vivo chicken tissue model. In all experiments, the images projected by the projector match the locations of the fluorescence emission well. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for preoperative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact, miniaturized system in preparation for further clinical validation.

  11. A two-dimensional location method based on digital micromirror device used in interactive projection systems

    NASA Astrophysics Data System (ADS)

    Chen, Liangjun; Ni, Kai; Zhou, Qian; Cheng, Xuemin; Ma, Jianshe; Gao, Yuan; Sun, Peng; Li, Yi; Liu, Minxia

    2010-11-01

    Interactive projection systems based on CCD/CMOS sensors have developed rapidly in recent years. They can locate and trace the movement of a pen equipped with an infrared LED, and display the user's handwriting or react to the user's operation in real time. A major shortcoming, however, is that the location device and the projector are independent of each other, in both the optical system and the control system. This requires constructing two optical systems, calibrating the differences between the projector view and the camera view, and synchronizing the two control systems. In this paper, we introduce a two-dimensional location method based on a digital micromirror device (DMD), in which the DMD is used in turn as the display device and as the position detector. By serially flipping the micromirrors on the DMD according to a specially designed scheme and monitoring the reflected light energy, the image spot of the infrared LED can be quickly located. With this method, the same optical system and DMD are multiplexed for projection and location, reducing the complexity and cost of the whole system while achieving high positioning accuracy and sampling rates. The results of location experiments are given.

  12. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the fixed pattern noise (FPN) caused by nonuniformity using a neural network (NN) approach. In the learning process for neuron parameter estimation, the traditional LMS algorithm is replaced with a variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which improves the calibration precision considerably. The proposed NUC method achieves high correction performance, as validated quantitatively by experiments with a simulated test sequence and a real infrared image sequence.
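
    A greatly simplified, offset-only sketch of NN-based NUC: the desired target for each detector is the spatial average of its neighbours' corrected outputs, and the per-pixel offset estimate is updated toward that target each frame. A decaying step size stands in for the paper's VSS-NLMS rule (gains are assumed unity and all parameters are illustrative, so this is not the paper's algorithm).

```python
import numpy as np

rng = np.random.default_rng(6)
T, H, W = 300, 16, 16
off = 8.0 * rng.standard_normal((H, W))          # fixed-pattern offsets (unknown)
truth = rng.uniform(30.0, 50.0, (T, H, W))       # moving, uncorrelated scene
obs = truth + off                                # offset-only nonuniformity

o_hat = np.zeros((H, W))
mu0, tau = 0.5, 50.0
for t in range(T):
    y = obs[t] - o_hat                           # corrected frame
    # Neural-network target: 4-neighbour spatial average of the output.
    d = (np.roll(y, 1, 0) + np.roll(y, -1, 0)
         + np.roll(y, 1, 1) + np.roll(y, -1, 1)) / 4.0
    # Variable step size: large at first for fast convergence, then decaying
    # so the estimate settles (a simple stand-in for the VSS rule).
    mu = mu0 / (1.0 + t / tau)
    o_hat += mu * (y - d)                        # drive output toward target

residual = off - o_hat                           # remaining fixed-pattern error
```

Scene motion makes the neighbour-average error zero-mean over time, so the persistent offset differences are learned while scene structure averages out; the decaying step trades early convergence speed for a lower steady-state misadjustment.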

  13. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which comprises one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate it. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images, thereby generating a dataset for projector calibration. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.

  14. IRDIS: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing

    NASA Astrophysics Data System (ADS)

    Sedlar, Michael F.; Griffith, Jerry A.

    1988-07-01

    This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS), which provides scene storage, processing, and an output interface to drive a radiometric display device or to inject digital video directly into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system that provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.

  15. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, enhancing the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). A thermal camera equipped with turbulence mitigation capability is an example of such a closed system. Fraunhofer IOSB has started to build a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path are discussed.

  16. Advanced Wavefront Sensing and Control Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Shi, Fang; Basinger, Scott A.; Diaz, Rosemary T.; Gappinger, Robert O.; Tang, Hong; Lam, Raymond K.; Sidick, Erkin; Hein, Randall C.; Rud, Mayer; Troy, Mitchell

    2010-01-01

    The Advanced Wavefront Sensing and Control Testbed (AWCT) is built as a versatile facility for developing and demonstrating, in hardware, future technologies for wavefront sensing and control algorithms for active optical systems. The testbed includes a source projector for a broadband point source and a suite of extended scene targets, a dispersed fringe sensor, a Shack-Hartmann camera, and an imaging camera capable of phase retrieval wavefront sensing. The testbed also provides two easily accessible conjugate pupil planes, which can accommodate active optical devices such as a fast steering mirror, a deformable mirror, and segmented mirrors. In this paper, we describe the testbed optical design, configurations, and capabilities, as well as initial results from the testbed hardware integration and tests.

  17. A high-resolution three-dimensional far-infrared thermal and true-color imaging system for medical applications.

    PubMed

    Cheng, Victor S; Bai, Jinfen; Chen, Yazhu

    2009-11-01

    As the needs for various kinds of body-surface information are wide-ranging, we developed an integrated imaging-sensor system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured-light binocular profilometer. To avoid disturbing the subject with intense light projected directly into the eyes from the LCD projector, we developed a gray encoding strategy based on an optimized fringe projection layout. A self-heated checkerboard is employed to calibrate the different types of cameras. We then calibrate the structured light emitted by the LCD projector, based on the stereo-vision principle and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. To support medical applications, a region of interest (ROI) in the temperature or color image representing the surface area of clinical interest can be located at the corresponding position in the other images through coordinate-system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is useful for certain disease diagnoses.

  18. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, D.

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.

  19. Research on simulation technology of full-path infrared tail flame tracking of photoelectric theodolite in complicated environment

    NASA Astrophysics Data System (ADS)

    Wu, Hai-ying; Zhang, San-xi; Liu, Biao; Yue, Peng; Weng, Ying-hui

    2018-02-01

    The photoelectric theodolite is an important means of tracking, detecting, quantitatively measuring, and evaluating the performance of weapon systems on an ordnance test range. With increasing stability requirements for target tracking in complex environments, infrared scene simulation with a high sense of reality and complex interference has become an indispensable way to evaluate the tracking performance of the photoelectric theodolite. The tail flame is the most important infrared radiation source of a weapon system, and a highly realistic dynamic tail flame is a key element of photoelectric theodolite infrared scene simulation and imaging tracking tests. In this paper, an infrared simulation method for full-path tracking of the tail flame by the photoelectric theodolite is proposed, addressing the flame's faint boundaries and irregular, constantly changing shape. Real tail-flame images are employed, and infrared texture conversion technology is used to generate DDS textures for a particle-system map. Thus, dynamic real-time tail-flame simulation results with high fidelity from the theodolite perspective can be obtained during the tracking process.

  20. Computed tomography imaging spectrometer (CTIS) with 2D reflective grating for ultraviolet to long-wave infrared detection especially useful for surveying transient events

    NASA Technical Reports Server (NTRS)

    Muller, Richard E. (Inventor); Mouroulis, Pantazis Z. (Inventor); Maker, Paul D. (Inventor); Wilson, Daniel W. (Inventor)

    2003-01-01

    The optical system of this invention is a unique type of imaging spectrometer, i.e., an instrument that can determine the spectra of all points in a two-dimensional scene. The general type of imaging spectrometer under which this invention falls has been termed a computed-tomography imaging spectrometer (CTIS). CTISs have the ability to perform spectral imaging of scenes containing rapidly moving objects or evolving features, hereafter referred to as transient scenes. This invention, a reflective CTIS with a unique two-dimensional reflective grating, can operate in any wavelength band from the ultraviolet through the long-wave infrared. Although this spectrometer is especially useful for rapidly occurring events, it is also useful for investigating some slow-moving phenomena, as in the life sciences.

  1. Segmented Separable Footprint Projector for Digital Breast Tomosynthesis and Its application for Subpixel Reconstruction

    PubMed Central

    Zheng, Jiabei; Fessler, Jeffrey A; Chan, Heang-Ping

    2017-01-01

    Purpose Digital forward and back projectors play a significant role in iterative image reconstruction; the accuracy of the projector affects the quality of the reconstructed images. Digital breast tomosynthesis (DBT) often uses the ray-tracing (RT) projector, which ignores the finite detector element size. This paper proposes a modified version of the separable footprint (SF) projector, called the segmented separable footprint (SG) projector, that efficiently calculates the mean value of the Radon transform over each detector element. The SG projector is specifically designed for DBT reconstruction because of the large height-to-width ratio of the voxels generally used in DBT. This study evaluates the effectiveness of the SG projector in reducing projection error and improving DBT reconstruction quality. Methods We quantitatively compared the projection error of the RT and SG projectors at different locations and their performance in regular and subpixel DBT reconstruction. Subpixel reconstructions used finer voxels in the imaged volume than the detector pixel size. Subpixel reconstruction with the RT projector uses interpolated projection views as input to provide adequate coverage of the finer voxel grid with the traced rays; subpixel reconstruction with the SG projector, however, uses the measured projection views without interpolation. We simulated DBT projections of a test phantom using CatSim (GE Global Research, Niskayuna, NY) under idealized imaging conditions without noise and blur, to analyze the effects of the projectors and of subpixel reconstruction without other image-degrading factors. The phantom contained an array of horizontal and vertical line-pair patterns (1 to 9.5 line pairs/mm) and pairs of closely spaced spheres (diameters 0.053 to 0.5 mm) embedded at the mid-plane of a 5-cm-thick breast-tissue-equivalent uniform volume. The images were reconstructed with the regular simultaneous algebraic reconstruction technique (SART) and with subpixel SART using the different projectors. The resolution and contrast of the test objects in the reconstructed images and the computation times were compared under different reconstruction conditions. Results The SG projector reduced the projection error by 1 to 2 orders of magnitude at most locations; in the worst case, it still reduced the projection error by about 50%. In the DBT reconstructed slices parallel to the detector plane, the SG projector not only increased the contrast of the line pairs and spheres, but also produced smoother and more continuous reconstructed images, whereas the discrete and sparse nature of the RT projector caused artifacts appearing as patterned noise. For subpixel reconstruction, the SG projector significantly increased object contrast and computation speed, especially for high subpixel ratios, compared with the RT projector implemented with the accelerated Siddon algorithm. The difference in depth resolution among the projectors was negligible under the conditions studied. Our results also demonstrate that subpixel reconstruction can improve the spatial resolution of the reconstructed images, and can exceed the Nyquist limit of the detector under some conditions. Conclusions The SG projector was more accurate and faster than the RT projector, and it substantially reduced computation time and improved image quality for the tomosynthesized images with and without subpixel reconstruction. PMID:28058719

  2. Smart windows with functions of reflective display and indoor temperature-control

    NASA Astrophysics Data System (ADS)

    Lee, I.-Hui; Chao, Yu-Ching; Hsu, Chih-Cheng; Chang, Liang-Chao; Chiu, Tien-Lung; Lee, Jiunn-Yih; Kao, Fu-Jen; Lee, Chih-Kung; Lee, Jiun-Haw

    2010-02-01

    In this paper, a switchable window based on a cholesteric liquid crystal (CLC) was demonstrated. Under different applied voltages, incoming light at visible and infrared wavelengths was modulated, respectively. A mixture of CLC with a nematic liquid crystal and a chiral dopant selectively reflected infrared light without bias, which effectively reduced the indoor temperature under sunlight illumination; meanwhile, transmission in the visible range was kept high and the window looked transparent. When the voltage was increased to 15 V, the CLC changed to the focal conic state and could be used as a reflective display, a privacy window, or a projection screen. Under a high voltage (30 V), the homeotropic state was achieved, in which both infrared and visible light were transmitted and the device acted as a normal window, permitting the infrared spectrum of winter sunlight to enter the room and reduce the heating requirement. Such a device can be used as a switchable window in smart buildings, greenhouses, and windshields.

  3. Improving AIRS Radiance Spectra in High Contrast Scenes Using MODIS

    NASA Technical Reports Server (NTRS)

    Pagano, Thomas S.; Aumann, Hartmut H.; Manning, Evan M.; Elliott, Denis A.; Broberg, Steven E.

    2015-01-01

    The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua spacecraft was launched on May 4, 2002. AIRS acquires hyperspectral infrared radiances in 2378 channels ranging in wavelength from 3.7 to 15.4 microns, with spectral resolving power better than 1200, spatial resolution of 13.5 km, and global daily coverage. AIRS is designed to measure temperature and water vapor profiles to improve weather forecast accuracy and the understanding of climate processes. As with most instruments, the AIRS point spread functions (PSFs) are not the same for all detectors. When viewing a non-uniform scene, this causes a significant radiometric error in some channels that is scene dependent and cannot be removed without knowledge of the underlying scene. The magnitude of the error depends on the combination of the non-uniformity of the AIRS spatial response for a given channel and the non-uniformity of the scene, but it is typically noticeable in only about 1% of scenes and about 10% of channels. The current solution is to avoid those channels when performing geophysical retrievals. In this effort we use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) to provide information on scene uniformity, which is used to correct the AIRS data. For the vast majority of channels and footprints the technique works extremely well when compared to a principal component (PC) reconstruction of the AIRS channels. In some cases where the scene is highly inhomogeneous in an irregular pattern, and in some channels, the method can actually degrade the spectrum. Most of the degraded channels appear to be slightly affected by random noise introduced in the process, but those with larger degradation may be affected by alignment errors of AIRS relative to MODIS or by uncertainties in the PSF. 
Despite these errors, the methodology shows the ability to correct AIRS radiances in non-uniform scenes under some of the worst-case conditions and improves the ability to match AIRS and MODIS radiances in non-uniform scenes.

  4. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution, full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) design, multiple views for the 3D display are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. The single projector is therefore able to generate the equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction in cost, size, and complexity, especially when the number of views is high. The SPM strategy also avoids the time-consuming procedures of multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.

  5. High resolution observations of low contrast phenomena from an Advanced Geosynchronous Platform (AGP)

    NASA Technical Reports Server (NTRS)

    Maxwell, M. S.

    1984-01-01

    Present technology allows radiometric monitoring of the Earth, ocean, and atmosphere from a geosynchronous platform with good spatial, spectral, and temporal resolution. The proposed system could provide a capability for multispectral remote sensing with a 50 m nadir spatial resolution in the visible bands, 250 m in the 4 micron band, and 1 km in the 11 micron thermal infrared band. The diffraction-limited telescope has a 1 m aperture, a 10 m focal length (with a shorter focal length in the infrared), and linear and area arrays of detectors. The diffraction-limited resolution applies to scenes of any brightness, but for dark, low-contrast scenes the good signal-to-noise ratio of the system contributes to the observation capability. The capabilities of the AGP system are assessed for quantitative observations of ocean scenes. Instrument and ground system configurations are presented, and projected sensor capabilities are analyzed.
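
    The quoted resolutions are system figures; the underlying Rayleigh diffraction limit for a 1 m aperture at geosynchronous altitude (assumed here to be about 35,786 km) is quick to check:

```python
GEO_ALT_M = 35.786e6   # assumed geosynchronous altitude, m

def ground_resolution(wavelength_m, aperture_m, altitude_m=GEO_ALT_M):
    """Rayleigh diffraction limit (1.22 * lambda / D) projected to the ground."""
    return 1.22 * wavelength_m / aperture_m * altitude_m

for lam in (0.5e-6, 4e-6, 11e-6):   # visible, MWIR, LWIR
    print(f"{lam * 1e6:4.1f} um -> {ground_resolution(lam, 1.0):5.0f} m")
```

    The pure diffraction limits (roughly 22 m visible, 175 m at 4 microns, 480 m at 11 microns) sit comfortably below the quoted 50 m / 250 m / 1 km figures, which presumably also account for detector sampling and other system effects.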

  6. Synthesis multi-projector content for multi-projector three dimension display using a layered representation

    NASA Astrophysics Data System (ADS)

    Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua

    2014-11-01

    Multi-projector three-dimensional (3D) display is a promising multi-view, glasses-free 3D display technology that can produce full-colour, high-definition 3D images on its screen. One key problem for multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyses the display characteristics of multi-projector 3D display and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has the advantage of saving storage. Experimental results show that our method solves the pseudoscopic problem.

  7. Real-time range acquisition by adaptive structured light.

    PubMed

    Koninckx, Thomas P; Van Gool, Luc

    2006-03-01

    The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on-the-fly and renders the system more robust against instant scene variability and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues--based upon pattern color, geometry, and tracking--yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time-window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, dependent on scene complexity.
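
    The integration step in the paper is a graph cut; the consistency-based cue weighting alone can be sketched in a toy scalar form (class and parameter names are invented, and each cue's score is taken against the recent fused result, a simplification of the paper's time-window consistency):

```python
import numpy as np
from collections import deque

class CueCombiner:
    """Weight each coding cue by its recent agreement with the fused result
    over a sliding time window; a toy scalar sketch of the weighting idea
    only, not the paper's graph-cut integration."""

    def __init__(self, n_cues, window=10):
        self.history = [deque(maxlen=window) for _ in range(n_cues)]
        self.prev_fused = None

    def weights(self):
        w = np.array([np.mean(h) if h else 1.0 for h in self.history])
        return w / w.sum()

    def fuse(self, cue_estimates):
        fused = float(np.dot(self.weights(), cue_estimates))
        if self.prev_fused is not None:
            for h, est in zip(self.history, cue_estimates):
                # consistency of each cue with the recent fused result
                h.append(1.0 / (1.0 + abs(est - self.prev_fused)))
        self.prev_fused = fused
        return fused

# one accurate cue, one noisy cue, observing the same quantity
comb = CueCombiner(2)
rng = np.random.default_rng(2)
for _ in range(50):
    comb.fuse([10.0 + 0.1 * rng.standard_normal(),
               10.0 + 3.0 * rng.standard_normal()])
print(comb.weights())
```

    After a few frames the reliable cue accumulates higher consistency scores and therefore the larger weight, which is the self-adaptive behaviour the abstract describes.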

  8. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-07-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark datasets for both inter-calibration and climate-related studies. In this study, the CrIS radiance measurements on the Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and MetOp-B at the finest spectral scale, and with AIRS on Aqua in 25 selected spectral regions, through one year of simultaneous nadir overpass (SNO) observations, to evaluate the radiometric consistency of these four hyperspectral IR sounders. The spectra from the different sounders are paired together through strict spatial and temporal collocation. Uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and MetOp-B in the longwave IR (LWIR) and middle-wave IR (MWIR) bands, with 0.1-0.2 K differences. There are no apparent scene-dependent patterns in the BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared in the 25 spectral regions for both polar and tropical SNOs. The combined global SNO datasets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K in 21 of the 25 comparison spectral regions, and range from 0.15 to 0.21 K in the remaining four. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.

  9. Radiometric consistency assessment of hyperspectral infrared sounders

    NASA Astrophysics Data System (ADS)

    Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.

    2015-11-01

    The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark data sets for both intercalibration and climate-related studies. In this study, the CrIS radiance measurements on the Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and MetOp-B at the finest spectral scale, and with AIRS on Aqua in 25 selected spectral regions, through simultaneous nadir overpass (SNO) observations in 2013, to evaluate the radiometric consistency of these four hyperspectral IR sounders. The spectra from the different sounders are paired together through strict spatial and temporal collocation. Uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and MetOp-B in the long-wave IR (LWIR) and middle-wave IR (MWIR) bands, with 0.1-0.2 K differences. There are no apparent scene-dependent patterns in the BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared in the 25 spectral regions for both polar and tropical SNOs. The combined global SNO data sets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K in 21 of the 25 spectral regions, and range from 0.15 to 0.21 K in the remaining four spectral regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.
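
    The brightness-temperature conversion behind these comparisons is a per-channel Planck inversion. A sketch, assuming radiance in mW m^-2 sr^-1 (cm^-1)^-1 as is conventional for these sounders (the scene values below are illustrative, not taken from the study):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m s^-1
KB = 1.380649e-23    # Boltzmann constant, J K^-1

def brightness_temp(radiance, wavenumber_cm):
    """Invert Planck's law: spectral radiance in mW m^-2 sr^-1 (cm^-1)^-1
    at a wavenumber in cm^-1 -> brightness temperature in K."""
    nu = wavenumber_cm * 100.0     # cm^-1 -> m^-1
    L = radiance * 1e-5            # -> W m^-2 sr^-1 (m^-1)^-1
    c1 = 2.0 * H * C ** 2 * nu ** 3
    c2 = H * C * nu / KB
    return c2 / math.log(1.0 + c1 / L)

# a ~300 K scene at 900 cm^-1; a 0.1% radiometric bias between two sounders
bt_a = brightness_temp(117.49, 900.0)
bt_b = brightness_temp(117.49 * 1.001, 900.0)
print(round(bt_a, 1))   # 300.0
print(bt_b - bt_a)      # a BT difference of under a tenth of a kelvin
```

    This also shows why sub-0.1 K BT agreement is a demanding requirement: at 900 cm^-1 and 300 K it corresponds to a radiometric agreement on the order of 0.1%.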

  10. Background Characterization Techniques For Pattern Recognition Applications

    NASA Astrophysics Data System (ADS)

    Noah, Meg A.; Noah, Paul V.; Schroeder, John W.; Kessler, Bernard V.; Chernick, Julian A.

    1989-08-01

    The Department of Defense has a requirement to investigate technologies for the detection of air and ground vehicles in a clutter environment. Autonomous systems using infrared, visible, and millimeter-wave detectors have the potential to meet DOD's needs. In general, however, the hardware technology (large detector arrays with high sensitivity) has outpaced the development of processing techniques and software. In a complex background scene the "problem" is as much one of clutter rejection as it is target detection. The work described in this paper has investigated a new and innovative methodology for background clutter characterization, target detection, and target identification. The approach uses multivariate statistical analysis to evaluate a set of image metrics applied to infrared cloud imagery and terrain clutter scenes. The techniques are applied to two distinct problems: the characterization of atmospheric water vapor cloud scenes for the Navy's Infrared Search and Track (IRST) applications, in support of the Infrared Modeling Measurement and Analysis Program (IRAMMP); and the detection of ground vehicles for the Army's Autonomous Homing Munitions (AHM) problems. This work was sponsored under two separate Small Business Innovative Research (SBIR) programs by the Naval Surface Warfare Center (NSWC), White Oak, MD, and the Army Material Systems Analysis Activity at Aberdeen Proving Ground, MD. The software described in this paper will be available from the respective contract technical representatives.

  11. Self-adaptive calibration for staring infrared sensors

    NASA Astrophysics Data System (ADS)

    Kendall, William B.; Stocker, Alan D.

    1993-10-01

    This paper presents a new, self-adaptive technique for the correction of non-uniformities (fixed-pattern noise) in high-density infrared focal-plane detector arrays. We have developed a new approach to non-uniformity correction in which we use multiple image frames of the scene itself, and take advantage of the aim-point wander caused by jitter, residual tracking errors, or deliberately induced motion. Such wander causes each detector in the array to view multiple scene elements, and each scene element to be viewed by multiple detectors. It is therefore possible to formulate (and solve) a set of simultaneous equations from which correction parameters can be computed for the detectors. We have tested our approach with actual images collected by the ARPA-sponsored MUSIC infrared sensor. For these tests we employed a 60-frame (0.75-second) sequence of terrain images for which an out-of-date calibration was deliberately used. The sensor was aimed at a point on the ground via an operator-assisted tracking system having a maximum aim-point wander on the order of ten pixels. With these data, we were able to improve the calibration accuracy by a factor of approximately 100.
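
    The simultaneous-equation idea can be prototyped with an alternating least-squares loop: estimate the scene from the de-shifted corrected frames, then regress each detector's measurements against the scene samples it saw. This is a simplified sketch, not the authors' solver; integer shifts and periodic boundaries are toy assumptions:

```python
import numpy as np

def motion_calibrate(frames, shifts, iters=20):
    """Alternating least squares: aim-point wander lets each detector view
    several scene elements, so per-detector gain/offset become solvable."""
    h, w = frames[0].shape
    g, o = np.ones((h, w)), np.zeros((h, w))
    for _ in range(iters):
        # scene estimate: average of the corrected, de-shifted frames
        scene = np.mean([np.roll((f - o) / g, s, (0, 1))
                         for f, s in zip(frames, shifts)], axis=0)
        # what each detector should have seen in each frame
        S = np.stack([np.roll(scene, (-dy, -dx), (0, 1)) for dy, dx in shifts])
        M = np.stack(frames)
        sm, mm = S.mean(0), M.mean(0)
        g = ((S - sm) * (M - mm)).sum(0) / ((S - sm) ** 2).sum(0)
        o = mm - g * sm
    return g, o

# toy demo: a static scene observed through fixed-pattern noise with jitter
rng = np.random.default_rng(3)
scene = rng.uniform(0.0, 100.0, (32, 32))
g_true = 1 + 0.05 * rng.standard_normal((32, 32))
o_true = 2.0 * rng.standard_normal((32, 32))
shifts = [tuple(rng.integers(-4, 5, 2)) for _ in range(25)]
frames = [g_true * np.roll(scene, (-dy, -dx), (0, 1)) + o_true
          for dy, dx in shifts]
g, o = motion_calibrate(frames, shifts)
before = np.std([np.roll(f, s, (0, 1)) for f, s in zip(frames, shifts)], axis=0).mean()
after = np.std([np.roll((f - o) / g, s, (0, 1)) for f, s in zip(frames, shifts)], axis=0).mean()
print(round(float(before), 2), round(float(after), 4))
```

    The frame-to-frame inconsistency of the de-shifted images collapses after calibration; note that a global gain/offset ambiguity remains, which is harmless for fixed-pattern-noise removal.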

  12. The compatibility of consumer DLP projectors with time-sequential stereoscopic 3D visualisation

    NASA Astrophysics Data System (ADS)

    Woods, Andrew J.; Rourke, Tegan

    2007-02-01

    A range of advertised "Stereo-Ready" DLP projectors are now available in the market which allow high-quality, flicker-free stereoscopic 3D visualization using the time-sequential stereoscopic display method. The ability to use a single projector for stereoscopic viewing offers a range of advantages, including extremely good stereoscopic alignment and, in some cases, portability. It has also recently become known that some consumer DLP projectors can be used for time-sequential stereoscopic visualization; however, it was not well understood which projectors are compatible, which display modes (frequency and resolution) are compatible, and which stereoscopic display quality attributes are important. We conducted a study to test a wide range of projectors for stereoscopic compatibility. This paper reports on the testing of 45 consumer DLP projectors of widely differing specifications (brand, resolution, brightness, etc.). The projectors were tested for stereoscopic compatibility with various video formats (PAL, NTSC, 480P, 576P, and various VGA resolutions) and video input connections (composite, S-Video, component, and VGA). Fifteen projectors were found to work well at up to 85 Hz stereo in VGA mode; twenty-three projectors would work at 60 Hz stereo in VGA mode.

  13. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality can be used to improve color segmentation of the human body or to examine precious artifacts that must not be touched. We propose a technique for projecting a synthesized texture onto a real object without contact, with applications in medicine and archaeology. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points reconstructed. We determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals, and then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.
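As a concrete illustration of coded structured light, here is a minimal Gray-code sketch. The projector width, pattern count, and the sampled column are illustrative assumptions, not details from the paper.

```python
import numpy as np

W, BITS = 16, 4   # projector width (columns) and number of bit-plane patterns

# Each projector column gets a unique Gray code; one projected pattern per bit.
cols = np.arange(W)
gray = cols ^ (cols >> 1)
patterns = [((gray >> b) & 1) for b in range(BITS)]   # bit planes, LSB first

# "Capture": a camera pixel imaging projector column 9 observes this bit
# sequence across the pattern series (bright = 1, dark = 0).
g = sum(int(patterns[b][9]) << b for b in range(BITS))

# Gray-to-binary decoding recovers the column index, i.e. the camera-projector
# correspondence needed to triangulate a 3D point.
col, mask = g, g >> 1
while mask:
    col ^= mask
    mask >>= 1
print(col)   # recovered projector column
```

Gray codes are preferred over plain binary here because adjacent columns differ in only one bit, so a decoding error at a stripe boundary displaces the correspondence by at most one column.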

  14. Reducing Heating In High-Speed Cinematography

    NASA Technical Reports Server (NTRS)

    Slater, Howard A.

    1989-01-01

    Infrared-absorbing and infrared-reflecting glass filters provide a simple and effective means of reducing the rise in temperature during high-speed motion-picture photography. "Hot-mirror" and "cold-mirror" configurations, employed in the projection of images, help prevent excessive heating of scenes by the powerful lamps used in high-speed photography.

  15. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained by fusing infrared and low-light-level images, and they contain the information of both. Such fusion images help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved through efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets but also retain rich natural information about the scenes.

  16. Strategies for Buying and Maintaining Audio Visual Equipment.

    ERIC Educational Resources Information Center

    Kalmbach, John A.; Kruzel, Richard D.

    1989-01-01

    Presents guidelines for purchasing and maintaining audiovisual equipment most often used in the classroom. Highlights include selecting a vendor; purchasing associations; preventive maintenance; optical equipment, including overhead projectors, slide projectors, movie projectors, and filmstrip projectors; and electromagnetic equipment, including…

  17. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the locations and orientations of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as Silicon Graphics hardware. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  18. Scene-based nonuniformity corrections for optical and SWIR pushbroom sensors.

    PubMed

    Leathers, Robert; Downes, Trijntje; Priest, Richard

    2005-06-27

    We propose and evaluate several scene-based methods for computing nonuniformity corrections for visible or near-infrared pushbroom sensors. These methods can be used to compute new nonuniformity correction values or to repair or refine existing radiometric calibrations. For a given data set, the preferred method depends on the quality of the data, the type of scenes being imaged, and the existence and quality of a laboratory calibration. We demonstrate our methods with data from several different sensor systems and provide a generalized approach to be taken for any new data set.

  19. The Audio-Visual Equipment Directory. Seventeenth Edition.

    ERIC Educational Resources Information Center

    Herickes, Sally, Ed.

    The following types of audiovisual equipment are catalogued: 8 mm. and 16 mm. motion picture projectors, filmstrip and sound filmstrip projectors, slide projectors, random access projection equipment, opaque, overhead, and micro-projectors, record players, special purpose projection equipment, audio tape recorders and players, audio tape…

  20. How to Choose--and Use--Motion Picture Projectors

    ERIC Educational Resources Information Center

    Training, 1976

    1976-01-01

    Suggests techniques for selecting super 8 and 16mm movie projectors for various training and communication needs. Charts list various characteristics for 17 models of 8mm projectors with built-in screen, 7 models without screen, and 33 models of 16mm projectors. (WL)

  1. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments of facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D information may offer more detailed information than 2D images which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.

  2. A fast automatic target detection method for detecting ships in infrared scenes

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2016-05-01

    Automatic target detection in infrared scenes is a vital task for many application areas such as defense, security, and border surveillance. For anti-ship missiles, a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as clouds or small objects with relatively high intensity values. To deal with this drawback, a post-processing stage is introduced, consisting of two methods. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected using the Hough transform, and detection results located above the waterline by more than a small margin are rejected. After the post-processing stage, undesired holes may still remain, which can cause one object to be detected as multiple objects or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared data.
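The threshold, connected-component, and size-rejection stages of such a pipeline can be sketched as follows. This toy example substitutes a mean-plus-two-sigma rule for the paper's automatic thresholding and omits the morphological reconstruction and Hough waterline stages; the synthetic image, threshold rule, and area limit are all illustrative assumptions.

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling on a boolean mask (BFS flood fill)."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    n = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                q = deque([(i, j)])
                labels[i, j] = n
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        v, u = y + dy, x + dx
                        if 0 <= v < H and 0 <= u < W and mask[v, u] and labels[v, u] == 0:
                            labels[v, u] = n
                            q.append((v, u))
    return labels, n

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (32, 32))          # sea-background clutter
img[10:15, 5:15] += 1.0                       # extended "ship" return
img[25, 28] += 1.0                            # small bright outlier

thr = img.mean() + 2.0 * img.std()            # stand-in for automatic thresholding
labels, n = label_components(img > thr)

MIN_AREA = 10                                 # size-based outlier rejection
kept = [k for k in range(1, n + 1) if (labels == k).sum() >= MIN_AREA]
print(len(kept))
```

The single-pixel outlier survives thresholding but is discarded by the area test, which is the first of the two rejection rules the abstract describes.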

  3. When All Else Fails--KICK! Trouble Shooting, Preventive Maintenance, and Auxiliary Equipment.

    ERIC Educational Resources Information Center

    Beasley, Augie E.; Palmer, Carolyn G.

    The guidelines presented in this manual for the maintenance and repair of media equipment and materials provide information on optical systems, slide projectors, film projectors, overhead projectors, record players, cassette recorders, public address systems, opaque projectors, laminators, motion picture films, and cassette tapes. A list of…

  4. Using a Video Projector for Color-Mixing Demonstrations.

    ERIC Educational Resources Information Center

    Bartels, Richard A.

    1982-01-01

    Suggestions are provided for using color television projector systems to demonstrate color mixing. With such a projector, manipulation of the three primary colors can be done by simply covering and uncovering the three separate beams. In addition, projector systems serve as good examples in studying geometrical optics. (Author/JN)

  5. Media Manual (How to Use Media Equipment).

    ERIC Educational Resources Information Center

    Jones, Nancy

    Using a workbook format, this guide explains the use of seven types of audiovisual equipment: overhead projector, Bell and Howell 16mm motion picture projector, Dukane filmstrip projector, record player, Kodak slide projector, Wollensak 2552 tape recorder, and JVC videocassette color video system. An introductory section includes (1) a media…

  6. Spectral Variability among Rocks in Visible and Near Infrared Multispectral Pancam Data Collected at Gusev Crater: Examinations using Spectral Mixture Analysis and Related Techniques

    NASA Technical Reports Server (NTRS)

    Farrand, W. H.; Bell, J. F., III; Johnson, J. R.; Squyres, S. W.; Soderblom, J.; Ming, D. W.

    2006-01-01

    Visible and Near Infrared (VNIR) multispectral observations of rocks made by the Mars Exploration Rover Spirit's Panoramic camera (Pancam) have been analysed using a spectral mixture analysis (SMA) methodology. Scenes have been examined from the Gusev crater plains into the Columbia Hills. Most scenes on the plains and in the Columbia Hills could be modeled as three endmember mixtures of a bright material, rock, and shade. Scenes of rocks disturbed by the rover's Rock Abrasion Tool (RAT) required additional endmembers. In the Columbia Hills there were a number of scenes in which additional rock endmembers were required. The SMA methodology identified relatively dust-free areas on undisturbed rock surfaces, as well as spectrally unique areas on RAT abraded rocks. Spectral parameters from these areas were examined and six spectral classes were identified. These classes are named after a type rock or area and are: Adirondack, Lower West Spur, Clovis, Wishstone, Peace, and Watchtower. These classes are discriminable based, primarily, on near-infrared (NIR) spectral parameters. Clovis and Watchtower class rocks appear more oxidized than Wishstone class rocks and Adirondack basalts based on their having higher 535 nm band depths. Comparison of the spectral parameters of these Gusev crater rocks to parameters of glass-dominated basaltic tuffs indicates correspondence between measurements of Clovis and Watchtower classes, but divergence for the Wishstone class rocks which appear to have a higher fraction of crystalline ferrous iron-bearing phases. Despite a high sulfur content, the rock Peace has NIR properties resembling plains basalts.

  7. The review on infrared image restoration techniques

    NASA Astrophysics Data System (ADS)

    Li, Sijian; Fan, Xiang; Zhu, Bin Cheng; Zheng, Dong

    2016-11-01

    The goal of infrared image restoration is to reconstruct an original scene from a degraded observation. The restoration process at infrared wavelengths, however, still offers numerous research possibilities. In order to give readers a comprehensive view of infrared image restoration, the degradation factors are divided into two major categories, noise and blur, and many kinds of infrared image restoration methods are reviewed. The mathematical background and theoretical basis of infrared image restoration technology, as well as the limitations and shortcomings of existing methods, are discussed. Following the survey, directions and prospects for the future development of infrared image restoration technology are put forward.

  8. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. After that, the region surrounding the target area is segmented as the background region. Image fusion is then locally applied to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. A variance-ratio measure based on Linear Discriminant Analysis (LDA) is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
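The variance-ratio selection step can be sketched in isolation. This toy example scores candidate linear IR/visible combinations on labeled target and background pixels; the synthetic data, candidate weights, and the exact scatter formula are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic registered pair: the target is hot in IR but camouflaged in visible.
ir_t, ir_b = rng.normal(0.9, 0.05, 200), rng.normal(0.2, 0.05, 2000)
vis_t, vis_b = rng.normal(0.5, 0.1, 200), rng.normal(0.5, 0.1, 2000)

def variance_ratio(t, b):
    """LDA-style criterion: between-class scatter over within-class scatter."""
    return (t.mean() - b.mean()) ** 2 / (t.var() + b.var())

# Score each candidate linear fusion and keep the most discriminative one.
weights = np.linspace(0, 1, 11)        # candidate IR weights in the combination
scores = [variance_ratio(w * ir_t + (1 - w) * vis_t,
                         w * ir_b + (1 - w) * vis_b) for w in weights]
best = weights[int(np.argmax(scores))]
print(best)
```

Since only the IR channel separates target from background in this toy scene, the criterion favors a heavily IR-weighted combination, which mirrors how the scene-dependent selection is supposed to adapt.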

  9. Feasibility study of utilizing ultraportable projectors for endoscopic video display (with videos).

    PubMed

    Tang, Shou-Jiang; Fehring, Amanda; Mclemore, Mac; Griswold, Michael; Wang, Wanmei; Paine, Elizabeth R; Wu, Ruonan; To, Filip

    2014-10-01

    Modern endoscopy requires video display. Recent miniaturized, ultraportable projectors are affordable and durable and offer quality image display. We explored the feasibility of using ultraportable projectors in endoscopy through a prospective bench-top comparison and a clinical feasibility study: a masked comparison of images displayed via 2 Samsung ultraportable light-emitting diode projectors (pocket-sized SP-HO3; pico projector SP-P410M) and 1 Microvision Showwx-II Laser pico projector. BENCH-TOP FEASIBILITY STUDY: Prerecorded endoscopic video was streamed via computer. CLINICAL COMPARISON STUDY: Live high-definition endoscopy video was simultaneously displayed through each processor onto a standard liquid crystal display monitor and projected onto a portable, pull-down projection screen. Endoscopists, endoscopy nurses, and technicians rated the video images; ratings were analyzed by linear mixed-effects regression models with random intercepts. All projectors were easy to set up, adjust, focus, and operate, with no real-time lapse for any. Bench-top study outcomes: the Samsung pico was preferred to the Laser pico, with an overall rating 1.5 units higher (95% confidence interval [CI] = 0.7-2.4), P < .001; the Samsung pocket was preferred to the Laser pico, 3.3 units higher (95% CI = 2.4-4.1), P < .001; and the Samsung pocket was preferred to the Samsung pico, 1.7 units higher (95% CI = 0.9-2.5), P < .001. The clinical comparison study confirmed the Samsung pocket projector as best, with an overall rating 2.3 units higher (95% CI = 1.6-3.0), P < .001, than the Samsung pico. Low brightness currently limits pico projector use in clinical endoscopy. The pocket projector, with higher brightness (170 lumens), is clinically useful. Continued improvements to ultraportable projectors will fill a needed niche in endoscopy through portability, reduced cost, and equal or better image quality. © The Author(s) 2013.

  10. Experiment research on infrared targets signature in mid and long IR spectral bands

    NASA Astrophysics Data System (ADS)

    Wang, Chensheng; Hong, Pu; Lei, Bo; Yue, Song; Zhang, Zhijie; Ren, Tingting

    2013-09-01

    Since infrared imaging systems play a significant role in military self-defense and fire control systems, the radiation signature of IR targets has become an important topic in IR imaging application technology. IR target signatures can be applied to target identification, especially for small and dim targets, as well as to target IR thermal design. To research and analyze target IR signatures systematically, a practical experimental campaign was carried out under different backgrounds and conditions. An infrared radiation acquisition system based on a MWIR cooled thermal imager and an LWIR cooled thermal imager was developed to capture digital infrared images, and additional instruments were introduced to provide other parameters. From the original image data and the related parameters for a given scene, the IR signature of the target scene of interest can be calculated. Different backgrounds and targets were measured with this approach, and a comparative experimental analysis is presented in this paper as an example. This practical experiment demonstrates the validity of the approach, which is useful in detection performance evaluation and further target identification research.

  11. Infrared small target detection in heavy sky scene clutter based on sparse representation

    NASA Astrophysics Data System (ADS)

    Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei

    2017-09-01

    A novel infrared small target detection method based on sparse representation of sky clutter and targets is proposed in this paper to cope with the uncertainty in representing clutter and targets. The sky background clutter is described by a fractal random field, and it is perceived and eliminated via sparse representation over a fractal background over-complete dictionary (FBOD). The infrared small target signal is modeled by a generalized Gaussian intensity model and expressed by a generalized Gaussian target over-complete dictionary (GGTOD), which can describe small targets more efficiently than traditional structured dictionaries. The infrared image is decomposed over the union of the FBOD and GGTOD, and the sparse representation energies of target signals and background clutter decomposed over the GGTOD differ so distinctly that this energy is adopted to distinguish targets from clutter. Experiments were conducted, and the results show that the proposed approach improves small target detection performance, especially under heavy clutter, because background clutter can be efficiently perceived and suppressed by the FBOD while the varying target can be represented accurately by the GGTOD.
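The union-of-dictionaries decomposition can be illustrated with a toy 1-D sketch. Here low-frequency cosines stand in for the fractal background dictionary (FBOD), narrow Gaussians stand in for the generalized Gaussian target dictionary (GGTOD), and greedy matching pursuit stands in for whatever sparse solver the authors used; none of these choices are from the paper.

```python
import numpy as np

n = 64
x = np.arange(n)

# Toy stand-ins for the two over-complete dictionaries.
D_bg = np.stack([np.cos(2 * np.pi * f * x / n) for f in range(1, 6)], axis=1)
D_tg = np.stack([np.exp(-0.5 * ((x - c) / 1.5) ** 2)
                 for c in range(4, 60, 4)], axis=1)
D = np.hstack([D_bg, D_tg])
D = D / np.linalg.norm(D, axis=0)

# A "scan line": smooth clutter plus a dim point target at pixel 32.
signal = (2.0 * np.cos(2 * np.pi * 2 * x / n)
          + 1.5 * np.exp(-0.5 * ((x - 32) / 1.5) ** 2))

# Greedy matching pursuit over the union dictionary: pick the
# best-correlated atom, refit all chosen atoms by least squares, repeat.
idx, coef, r = [], np.array([]), signal.copy()
for _ in range(4):
    idx.append(int(np.argmax(np.abs(D.T @ r))))
    coef, *_ = np.linalg.lstsq(D[:, idx], signal, rcond=None)
    r = signal - D[:, idx] @ coef

# Energy split: atoms indexed past the background block belong to the
# target dictionary, so their coefficient energy flags the target.
n_bg = D_bg.shape[1]
bg_energy = sum(c ** 2 for i, c in zip(idx, coef) if i < n_bg)
tg_energy = sum(c ** 2 for i, c in zip(idx, coef) if i >= n_bg)
print(bg_energy > tg_energy > 0)
```

The clutter energy lands on background atoms and the target energy on a target atom, which is the separation the abstract's detection statistic relies on.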

  12. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods

    PubMed Central

    Hogervorst, Maarten A.; Pinkus, Alan R.

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7μm), near-infrared (NIR, 0.7–1.0μm) and long-wave infrared (LWIR, 8–14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. 
The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance. PMID:28036328

  13. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    PubMed

    Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. 
The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.

  14. Micromachined single-level nonplanar polycrystalline SiGe thermal microemitters for infrared dynamic scene projection

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Malyutenko, O. Yu.; Leonov, V.; Van Hoof, C.

    2009-05-01

    The technology for self-supported membraneless polycrystalline SiGe thermal microemitters, their design, and performance are presented. The 128-element arrays with a fill factor of 88% and a 2.5-μm-thick resonant cavity have been grown by low-pressure chemical vapor deposition and fabricated using surface micromachining technology. The 200-nm-thick 60×60 μm2 emitting pixels, reinforced with a U-shape profile pattern, demonstrate a thermal time constant of 2-7 ms and an apparent temperature of 700 K in the 3-5 and 8-12 μm atmospheric transparency windows. The application of the devices to infrared dynamic scene simulation and their benefit over conventional planar membrane-supported emitters are discussed.

  15. The structure of red-infrared scattergrams of semivegetated landscapes

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1988-01-01

    A physically based linear stochastic geometric canopy soil reflectance model is presented for characterizing spatial variability of semivegetated landscapes at subpixel and regional scales. Landscapes are conceptualized as stochastic geometric surfaces, incorporating not only the variability in geometric elements, but also the variability in vegetation and soil background reflectance which can be important in some scenes. The model is used to investigate several possible mechanisms which contribute to the often observed characteristic triangular shape of red-infrared scattergrams of semivegetated landscapes. Scattergrams of simulated and semivegetated scenes are analyzed with respect to the scales of the satellite pixel and subpixel components. Analysis of actual aerial radiometric data of a pecan orchard is presented in comparison with ground observations as preliminary confirmation of the theoretical results.

  16. The structure of red-infrared scattergrams of semivegetated landscapes

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1989-01-01

    A physically based linear stochastic geometric canopy soil reflectance model is presented for characterizing spatial variability of semivegetated landscapes at subpixel and regional scales. Landscapes are conceptualized as stochastic geometric surfaces, incorporating not only the variability in geometric elements, but also the variability in vegetation and soil background reflectance which can be important in some scenes. The model is used to investigate several possible mechanisms which contribute to the often observed characteristic triangular shape of red-infrared scattergrams of semivegetated landscapes. Scattergrams of simulated semivegetated scenes are analyzed with respect to the scales of the satellite pixel and subpixel components. Analysis of actual aerial radiometric data of a pecan orchard is presented in comparison with ground observations as preliminary confirmation of the theoretical results.

  17. The Wide-Field Imaging Interferometry Testbed (WIIT): Recent Progress in the Simulation and Synthesis of WIIT Data

    NASA Technical Reports Server (NTRS)

    Juanola Parramon, Roser; Leisawitz, David T.; Bolcar, Matthew R.; Maher, Stephen F.; Rinehart, Stephen A.; Iacchetta, Alex; Savini, Giorgio

    2016-01-01

    The Wide-field Imaging Interferometry Testbed (WIIT) is a double Fourier (DF) interferometer operating at optical wavelengths, and provides data that are highly representative of those from a space-based far-infrared interferometer like SPIRIT. This testbed has been used to measure both a geometrically simple test scene and an astronomically representative test scene. Here we present the simulation of recent WIIT measurements using FIInS (the Far-infrared Interferometer Instrument Simulator), the main goal of which is to simulate both the input and the output of a DF system. FIInS has been modified to perform calculations at optical wavelengths and to include an extended field of view due to the presence of a detector array.

  18. Spectral variability among rocks in visible and near-infrared mustispectral Pancam data collected at Gusev crater: Examinations using spectral mixture analysis and related techniques

    USGS Publications Warehouse

    Farrand, W. H.; Bell, J.F.; Johnson, J. R.; Squyres, S. W.; Soderblom, J.; Ming, D. W.

    2006-01-01

    Visible and near-infrared (VNIR) multispectral observations of rocks made by the Mars Exploration Rover Spirit's Panoramic camera (Pancam) have been analyzed using a spectral mixture analysis (SMA) methodology. Scenes have been examined from the Gusev crater plains into the Columbia Hills. Most scenes on the plains and in the Columbia Hills could be modeled as three end-member mixtures of a bright material, rock, and shade. Scenes of rocks disturbed by the rover's Rock Abrasion Tool (RAT) required additional end-members. In the Columbia Hills, there were a number of scenes in which additional rock end-members were required. The SMA methodology identified relatively dust-free areas on undisturbed rock surfaces as well as spectrally unique areas on RAT abraded rocks. Spectral parameters from these areas were examined, and six spectral classes were identified. These classes are named after a type rock or area and are Adirondack, Lower West Spur, Clovis, Wishstone, Peace, and Watchtower. These classes are discriminable based, primarily, on near-infrared (NIR) spectral parameters. Clovis and Watchtower class rocks appear more oxidized than Wishstone class rocks and Adirondack basalts based on their having higher 535 nm band depths. Comparison of the spectral parameters of these Gusev crater rocks to parameters of glass-dominated basaltic tuffs indicates correspondence between measurements of Clovis and Watchtower classes but divergence for the Wishstone class rocks, which appear to have a higher fraction of crystalline ferrous iron-bearing phases. Despite a high sulfur content, the rock Peace has NIR properties resembling plains basalts. Copyright 2006 by the American Geophysical Union.

  19. Real-time maritime scene simulation for ladar sensors

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  20. Multi- and hyperspectral scene modeling

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use the public-domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyperspectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also supports semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single- and multiple-reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
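
    The scripted canopy generation described above can be sketched by emitting a POV-Ray scene file from a short program. The Python snippet below (layout, colors, and leaf count are invented for illustration; it is not the authors' script) writes a toy "canopy" of tilted rectangles over a ground plane:

```python
# Generate a toy POV-Ray "canopy" scene file from a script: a ground plane,
# a light source, and a grid of tilted rectangles standing in for leaves.
# (Illustrative only; reflectance values and layout are made up.)
import random

random.seed(42)
lines = [
    '#include "colors.inc"',
    "camera { location <0, 4, -8> look_at <0, 1, 0> }",
    "light_source { <10, 20, -10> color White }",
    "plane { y, 0 pigment { color rgb <0.3, 0.2, 0.1> } }",
]
for i in range(25):
    x, z = (i % 5) - 2, (i // 5) - 2
    tilt = random.uniform(-30, 30)
    lines.append(
        f"box {{ <-0.4, 0, -0.02>, <0.4, 0.6, 0.02> "
        f"rotate <{tilt:.1f}, 0, 0> translate <{x}, 1, {z}> "
        f"pigment {{ color rgb <0.1, 0.5, 0.1> }} }}"
    )
scene = "\n".join(lines)
# open("canopy.pov", "w").write(scene)  # render with: povray canopy.pov
```

    Changing the `pigment` values per band from the script is what makes a multispectral rendering loop possible.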

  1. Senegalese land surface change analysis and biophysical parameter estimation using NOAA AVHRR spectral data

    NASA Technical Reports Server (NTRS)

    Vukovich, Fred M.; Toll, David L.; Kennard, Ruth L.

    1989-01-01

    Surface biophysical estimates were derived from analysis of NOAA Advanced Very High Resolution Radiometer (AVHRR) spectral data of the Senegalese area of west Africa. The parameters derived were solar albedo, spectral visible and near-infrared band reflectance, spectral vegetative index, and ground temperature. Wet and dry linked AVHRR scenes from 1981 through 1985 in Senegal were analyzed for a semi-wet southerly site near Tambacounda and a predominantly dry northerly site near Podor. Related problems in converting satellite-derived radiance to biophysical estimates of the land surface were studied, associated with sensor miscalibration, atmospheric and aerosol spatial variability, surface anisotropy of reflected radiation, narrow satellite band reflectance to broad solar band conversion, and ground emissivity correction. The middle-infrared reflectance was approximated with a visible AVHRR reflectance to improve solar albedo estimates. In addition, the spectral composition of solar irradiance (direct and diffuse radiation) between major spectral regions (i.e., ultraviolet, visible, near-infrared, and middle-infrared) was found to be insensitive to changes in the clear-sky atmospheric optical depth in the narrow-band to solar-band conversion procedure. Derived solar albedo estimates for both sites were not found to change markedly with significant antecedent precipitation events or, correspondingly, with increases in green leaf vegetation density. The bright soil/substrate contributed to a high albedo for the dry scenes, whereas the high internal leaf reflectance of green vegetation canopies in the near-infrared contributed to a high solar albedo for the wet scenes. The relationship between solar albedo and ground temperature was poor, indicating that solar albedo has little control over the ground temperature.
The normalized difference vegetation index (NDVI) and the derived visible reflectance were more sensitive to antecedent rainfall amounts and green vegetation changes than were near-infrared changes. The information in the NDVI related to green leaf density changes primarily was from the visible reflectance. The contribution of the near-infrared reflectance to explaining green vegetation is largely reduced when there is a bright substrate.
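
    The NDVI referred to above is a simple normalized band ratio of near-infrared and visible reflectance; a minimal sketch (the reflectance values are illustrative, not AVHRR measurements):

```python
import numpy as np

def ndvi(nir, vis):
    """Normalized difference vegetation index from near-infrared
    and visible (red) reflectance."""
    nir = np.asarray(nir, dtype=float)
    vis = np.asarray(vis, dtype=float)
    return (nir - vis) / (nir + vis)

# Dense green vegetation: high NIR, low visible reflectance -> NDVI near 1.
print(ndvi(0.50, 0.08))
# Bright dry soil: NIR and visible reflectance similar -> NDVI near 0.
print(ndvi(0.30, 0.25))
```

    The study's observation that NDVI information comes primarily from the visible band corresponds to the numerator being dominated by changes in `vis` when the substrate is bright.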

  2. Color Infrared view of Houston, TX, USA

    NASA Image and Video Library

    1991-09-18

    This color infrared view of Houston (29.5N, 95.0W) was taken with a dual camera mount. Compare this scene with STS048-78-034 for an analysis of the unique properties of each film type. Comparative tests such as this aid in determining the kinds of information unique to each film system and in evaluating and comparing photography taken through hazy atmospheres. Infrared film is best at penetrating haze, detecting vegetation, and producing sharp images.

  3. Reconstruction of crimes by infrared photography.

    PubMed

    Sterzik, V; Bohnert, M

    2016-09-01

    Whenever blunt or sharp force is used in a crime, analysis of the bloodstain pattern distribution may provide important information for reconstructing the course of events. Attention should be paid both to the crime scene and to the clothes of everyone involved in the crime. On dark textiles, though, it is difficult or even impossible for the human eye to detect bloodstains because of the low contrast with the background. In the near-infrared wavelength range, however, the contrast is considerably higher. Many textiles reflect light beyond a wavelength of 830 nm and thus appear light-colored, whereas blood absorbs the light and appears dark. In our studies, a NIKON D7000 reflex camera modified for infrared photography produced high-resolution photographs visualizing even very small spatter stains on dark textiles. The equipment can be used at any crime scene or lab and provides immediately available and interpretable images. Thus, important findings can be obtained at an early stage of police investigations, as two examples (homicide and attempted homicide) illustrate.

  4. Scene-based nonuniformity correction for airborne point target detection systems.

    PubMed

    Zhou, Dabiao; Wang, Dejiang; Huo, Lijun; Liu, Rang; Jia, Ping

    2017-06-26

    Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively based on the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method maintains the characteristics of traditional methods in calibrating the whole electro-optics chain, in compensating for temporal drifts, and in not preserving the radiometric accuracy of the system. Moreover, the proposed method is robust since the frame number is the only variant, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.
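
    The core idea above — estimating per-pixel gains recursively from median ratios of adjacent pixels — can be illustrated on simulated data. The following is a 1-D sketch of the general principle, not the authors' exact algorithm (frame counts, gain ranges, and scene statistics are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 1-D line of 64 detectors with fixed-pattern gain nonuniformity,
# viewing 5000 frames of a moving scene (modeled as independent radiances).
true_gain = rng.uniform(0.8, 1.2, size=64)
frames = rng.uniform(10.0, 100.0, size=(5000, 64)) * true_gain

# Under sufficient scene motion the radiance ratio of adjacent pixels has
# median 1, so the median observed ratio estimates the gain ratio g[i+1]/g[i].
ratios = np.median(frames[:, 1:] / frames[:, :-1], axis=0)

# Recursive accumulation: fix the first pixel's gain and chain the ratios.
gain_est = np.concatenate(([1.0], np.cumprod(ratios)))
corrected = frames / gain_est   # NUC output, up to one global scale factor
```

    The recursion is what makes the method cheap (one median and one cumulative product), at the cost of error propagation along the chain, which is what the paper's error-propagation model characterizes.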

  5. Scene-based nonuniformity correction for focal plane arrays by the method of the inverse covariance form.

    PubMed

    Torres, Sergio N; Pezoa, Jorge E; Hayat, Majeed M

    2003-10-10

    What is to our knowledge a new scene-based algorithm for nonuniformity correction in infrared focal-plane array sensors has been developed. The technique is based on the inverse covariance form of the Kalman filter (KF), which has been reported previously and used in estimating the gain and bias of each detector in the array from scene data. The gain and the bias of each detector in the focal-plane array are assumed constant within a given sequence of frames, corresponding to a certain time and operational conditions, but they are allowed to randomly drift from one sequence to another following a discrete-time Gauss-Markov process. The inverse covariance form filter estimates the gain and the bias of each detector in the focal-plane array and optimally updates them as they drift in time. The estimation is performed with considerably higher computational efficiency than the equivalent KF. The ability of the algorithm in compensating for fixed-pattern noise in infrared imagery and in reducing the computational complexity is demonstrated by use of both simulated and real data.

  6. Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System.

    PubMed

    Zhang, Jiarui; Zhang, Yingjie; Chen, Bo

    2017-12-20

    The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields. The measurement accuracy is mainly determined by the calibration accuracy of the out-of-focus projector. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally demonstrates the principle that the projector has noticeable distortions outside its focus plane. In terms of this principle, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final accurate parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.

  7. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Depth information has been used in many fields since the Microsoft Kinect was released, because of its low cost and easy availability. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on either side of the laser projector, to obtain higher spatial resolution depth information. We apply the block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of the matching blocks, but smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method can obtain higher spatial resolution depth without loss of range-image quality, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system supports a resolution of 1280 × 960 at up to 60 frames per second for depth image sequences. PMID:28397759
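
    The block-matching disparity estimation at the heart of such systems can be sketched as a sum-of-absolute-differences (SAD) search along a scanline. This toy example on a synthetic image pair illustrates the principle only, not the authors' hardware pipeline (block size and disparity range are invented):

```python
import numpy as np

def block_match_row(left, right, y, x, block=5, max_disp=16):
    """Return the disparity minimizing SAD between a block in the left
    image and candidate blocks shifted leftward in the right image."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        if x - d - h < 0:
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = np.abs(ref - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic pair: the right image is the left image shifted by 3 pixels.
rng = np.random.default_rng(2)
left = rng.uniform(0, 255, size=(32, 64))
right = np.roll(left, -3, axis=1)
```

    Shrinking `block` raises spatial resolution but makes `cost` noisier, which is exactly the precision trade-off the paper's dual-mode matching addresses.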

  8. Home theater projectors: the next big thing?

    NASA Astrophysics Data System (ADS)

    Chinnock, Christopher B.

    2002-04-01

    The business presentation market has traditionally been the mainstay of the projection business, but as these users find the projectors work well at showing movies at home, interest in the home entertainment market is heating up. The idea of creating a theater environment in the home, complete with big screen projector and quality audio system, is not new. Wealthy patrons have been doing it for years. But can the concept be extended to ordinary living rooms? Many think so. Already pioneers like Sony, InFocus, Toshiba and Plus Vision are offering first generation products - and others will follow. But this market will require projectors that have different performance characteristics than those designed for data projection. In this paper, we will discuss how the requirements for a home theater projector differ from those of a data projector. We will provide updated information on who is doing what in this segment and give some insight into the growth potential.

  9. Realizing the increased potential of an open-system high-definition digital projector design

    NASA Astrophysics Data System (ADS)

    Daniels, Reginald

    1999-05-01

    Modern video projectors are becoming more compact and capable. Various display technologies are highly competitive and are delivering higher-performance, more compact projectors to market at an ever-quickening pace. However, end users are often left with the daunting task of integrating 'off-the-shelf' projectors into a previously existing system. As digital projector technology matures, there will be a series of designs, each restricted by the state of the art at the time of manufacture. In order to allow the most growth and performance for a given price, many design decisions will be made and revisited over a period of years or decades. A modular, open digital system design concept is a major challenge for future high-definition digital displays in all applications.

  10. Diode Lasers and Light Emitting Diodes Operating at Room Temperature with Wavelengths Above 3 Micrometers

    DTIC Science & Technology

    2011-11-29

    as an active region of mid-infrared LEDs. It should be noted that active region based on interband transition is equally useful for both laser and...IR LED technology for infrared scene projectors”, Dr. E. Golden, Air Force Research Laboratory, Eglin Air Force Base. “A stable mid-IR, GaSb...multimode lasers. Single spatial mode 3-3.2 μm diode lasers were developed. LEDs operate at wavelength above 4 μm at RT. Dual color mid-infrared

  11. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    DTIC Science & Technology

    2015-03-01

    SWIR Short Wave Infrared VisualSFM Visual Structure from Motion WPAFB Wright Patterson Air Force Base xi ON THE INTEGRATION OF MEDIUM WAVE INFRARED...Structure from Motion Visual Structure from Motion (VisualSFM) is an application that performs incremental SfM using images fed into it of a scene [20...too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit the old one. These new

  12. A simple and low-cost structured illumination microscopy using a pico-projector

    NASA Astrophysics Data System (ADS)

    Özgürün, Baturay

    2018-02-01

    Here, the development of a low-cost structured illumination microscopy (SIM) system based on a pico-projector is presented. The pico-projector consists of independent red, green, and blue LEDs that remove the need for an external illumination source. Moreover, the display element of the pico-projector serves as a pattern-generating spatial light modulator. A simple lens group is employed to couple light from the projector into an epi-illumination port of a commercial microscope system. 2D sub-SIM images are acquired and synthesized to surpass the diffraction limit using a 40x (0.75 NA) objective. The resolution of the reconstructed SIM images is verified with a dyed object and a fixed cell sample.

  13. Scene-based nonuniformity correction with video sequences and registration.

    PubMed

    Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B

    2000-03-10

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
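
    The final curve-fitting step — regressing each detector's observed output against the averaged scene estimates — reduces, for a linear detector model, to an ordinary least-squares line fit. A minimal simulation of one detector (all parameter values invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)

# One detector with unknown gain and bias (values invented for the demo).
gain, bias = 1.3, 5.0

# "True scene" estimates: in the algorithm these come from averaging groups
# of registered detectors that saw the same irradiance; simulated directly here.
true_scene = rng.uniform(0.0, 100.0, size=200)
observed = gain * true_scene + bias + rng.normal(0.0, 0.5, size=200)

# Least-squares line fit of observed output vs. scene estimate recovers the
# detector's response parameters, which are then inverted to correct frames.
g_est, b_est = np.polyfit(true_scene, observed, 1)
corrected = (observed - b_est) / g_est
```

    One such fit per detector is the source of the algorithm's low computational cost noted above.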

  14. Research on techniques for computer three-dimensional simulation of satellites and night sky

    NASA Astrophysics Data System (ADS)

    Yan, Guangwei; Hu, Haitao

    2007-11-01

    To study space attack-defense technology, a simulation of satellites is needed. We design and implement a 3D simulation system of satellites, rendered against a night sky background. The system structure is as follows: one computer is used to simulate the orbits of the satellites, and the other computers are used to render the 3D simulation scene. To get a realistic effect, a three-channel multi-projector display system is constructed. We use MultiGen Creator to construct satellite and star models, and MultiGen Distributed Vega to render the three-channel scene. There are one master and three slaves; the master controls the three slaves to render the three channels separately. To get the satellites' positions and attitudes, the master communicates with the satellite orbit simulator based on the TCP/IP protocol. It then calculates the observer's position, the satellites' positions, and the moon's and the sun's positions, and transmits the data to the slaves. To get a smooth orbit for the target satellites, an orbit prediction method is used. Because the target satellite data packets and the attack satellite data packets cannot keep synchronization in the network, a target satellite dithering phenomenon occurs when the scene is rendered. To resolve this problem, an anti-dithering algorithm is designed. To render the night sky background, a file which stores the stars' positions and brightness data is used. According to its brightness, each star is assigned a magnitude, and the star model is scaled according to that magnitude. All the stars are distributed on a celestial sphere. Experiments show that the whole system runs correctly, and the frame rate can reach 30 Hz. The system can be used in the space attack-defense simulation field.

  15. Multi-Octave Spectral Imaging in the Infrared - A Newly Emerging Approach

    DTIC Science & Technology

    2002-01-01

    as a function of wavelength, that arises from an example scene, and compare this with total noise (also as a function of wavelength). The signal...0.9 emissivity, for the purpose of this estimate of baseline performance. Total noise (in rms electrons) is estimated as a function of wavelength (or...spectral pixel number following the correspondence in Figure 2) from photon noise arising from both scene and optics emission, dark current noise , and

  16. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.

  17. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
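
    For context, homographies of the kind discussed above are classically estimated from point correspondences with the direct linear transform (DLT). The sketch below is the generic textbook method, not the paper's noniterative projector-specific algorithm (the test homography and points are invented):

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: estimate the 3x3 homography mapping src -> dst
    from >= 4 point correspondences, via the nullspace of the DLT system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)          # right-singular vector of smallest sigma
    return H / H[2, 2]                # fix the projective scale

# Recover a known homography from exact correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.05, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.3, 0.7]], dtype=float)
proj = np.c_[src, np.ones(len(src))] @ H_true.T
dst = proj[:, :2] / proj[:, 2:]
H_est = dlt_homography(src, dst)
```

    The four-degree-of-freedom ambiguity described in the abstract corresponds to the freedom in choosing the screen's 2D coordinate frame, which the DLT does not resolve on its own.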

  18. 76 FR 66750 - Certain Projectors With Controlled-Angle Optical Retarders, Components Thereof, and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-27

    ... INTERNATIONAL TRADE COMMISSION [DN 2849] Certain Projectors With Controlled-Angle Optical... Re Certain Projectors with Controlled-Angle Optical Retarders, Components Thereof, And Products... complaint. FOR FURTHER INFORMATION CONTACT: James R. Holbein, Secretary to the Commission, U.S...

  19. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    Polarization control for 3D Imaging with the Sony SRX-R105 Digital Cinema Projectors 3.4 HDMAX Camera and Sony SRX-R105 Projector Configuration for 3D...HDMAX Camera Pair Figure 3.2 Sony SRX-R105 Digital Cinema Projector Figure 3.3 Effect of camera rotation on projected overlay image. Figure 3.4...system that combines a pair of FAU’s HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection

  20. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. We then present the convergence speed factor, which can adaptively change the convergence speed in response to changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The nonuniformity space-relativity characteristic was summarized from a large body of experimental statistical data and used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.

  1. Phoenix, AZ, USA

    NASA Image and Video Library

    1973-06-22

    SL2-03-200 (22 June 1973) --- The city of Phoenix, AZ (33.5N, 112.0W) can be seen in good detail in this color infrared scene. The city is situated among truck-crop agricultural fields, which the color infrared photo depicts as shades of red, making the agriculture stand out in this desert environment. To the east, Theodore Roosevelt Lake and its dam can be easily seen. Photo credit: NASA

  2. Development of a portable multispectral thermal infrared camera

    NASA Technical Reports Server (NTRS)

    Osterwisch, Frederick G.

    1991-01-01

    The purpose of this research and development effort was to design and build a prototype instrument designated the 'Thermal Infrared Multispectral Camera' (TIRC). The Phase 2 effort was a continuation of the Phase 1 feasibility study and preliminary design for such an instrument. The completed instrument, designated AA465, has application in the field of geologic remote sensing and exploration. The AA465 Thermal Infrared Camera (TIRC) System is a field-portable multispectral thermal infrared camera operating over the 8.0 - 13.0 micron wavelength range. Its primary function is to acquire two-dimensional thermal infrared images of user-selected scenes. Thermal infrared energy emitted by the scene is collected, dispersed into ten 0.5-micron-wide channels, and then measured and recorded by the AA465 System. This multispectral information is presented in real time on a color display to be used by the operator to identify spectral and spatial variations in the scene's emissivity and/or irradiance. This fundamental instrument capability has a wide variety of commercial and research applications. While ideally suited for two-man operation in the field, the AA465 System can be transported and operated effectively by a single user. Functionally, the instrument operates as if it were a single-exposure camera. System measurement sensitivity requirements dictate relatively long (several minutes) instrument exposure times. As such, the instrument is not suited for recording time-variant information. The AA465 was fabricated, assembled, tested, and documented during this Phase 2 work period. The detailed design and fabrication of the instrument was performed during the period of June 1989 to July 1990. The software development effort and instrument integration/test extended from July 1990 to February 1991. Software development included an operator interface/menu structure, instrument internal control functions, DSP image processing code, and a display algorithm coding program.
The instrument was delivered to NASA in March 1991. The primary commercial and research use for this instrument is as a field geologist's exploration tool. Other applications have been suggested but not investigated in depth; these include process-control measurements in commercial materials processing and quality-control functions that require information on surface heterogeneity.

  3. Evaluating the effect of spatial subsetting on subpixel unmixing methodology applied to ASTER over a hydrothermally altered terrain

    NASA Astrophysics Data System (ADS)

    Ayoobi, Iman; Tangestani, Majid H.

    2017-10-01

    This study investigates the effect of spatial subsets of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) L1B visible-near infrared and shortwave-infrared (VNIR-SWIR) data on matched filtering results in the central part of the Kerman magmatic arc, where abundant porphyry copper deposits exist. The matched filtering (MF) procedure was run separately at sites containing hydrothermal minerals such as sericite, kaolinite, chlorite, and jarosite to map the abundances of these minerals on spatial subsets containing 100, 75, 50, and 25 percent of the original scene. Results were evaluated by comparing the matched filtering scores with the mineral abundances obtained by semi-quantitative XRD analysis of corresponding field samples. It was concluded that the MF method should be applied to the whole scene prior to any data subsetting.
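
    The textbook matched filter behind such scores projects mean-removed spectra onto the background-whitened target spectrum, normalized so a pure target pixel scores 1. A minimal sketch on synthetic data (band count, background statistics, and target spectrum are invented; this is not the ASTER processing chain of the study):

```python
import numpy as np

rng = np.random.default_rng(4)

def matched_filter(pixels, target):
    """Textbook matched filter: whiten by the background covariance and
    normalize so a pure target pixel scores exactly 1."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    w = np.linalg.solve(cov, target - mu)     # background-whitened direction
    w = w / ((target - mu) @ w)               # unit score for a pure target
    return w, mu

# Synthetic 6-band background and an invented target spectrum.
background = rng.normal(loc=2.0, scale=1.0, size=(500, 6))
target = np.full(6, 5.0)

w, mu = matched_filter(background, target)
scores = (background - mu) @ w   # near 0 for background pixels
```

    Because `mu` and `cov` are estimated from the scene itself, spatial subsetting changes the filter, which is the sensitivity the study quantifies.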

  4. 3D modeling of satellite spectral images, radiation budget and energy budget of urban landscapes

    NASA Astrophysics Data System (ADS)

    Gastellu-Etchegorry, J. P.

    2008-12-01

    DART EB is a model that is being developed for simulating the 3D (3 dimensional) energy budget of urban and natural scenes, possibly with topography and atmosphere. It simulates all non radiative energy mechanisms (heat conduction, turbulent momentum and heat fluxes, water reservoir evolution, etc.). It uses DART model (Discrete Anisotropic Radiative Transfer) for simulating radiative mechanisms: 3D radiative budget of 3D scenes and their remote sensing images expressed in terms of reflectance or brightness temperature values, for any atmosphere, wavelength, sun/view direction, altitude and spatial resolution. It uses an innovative multispectral approach (ray tracing, exact kernel, discrete ordinate techniques) over the whole optical domain. This paper presents two major and recent improvements of DART for adapting it to urban canopies. (1) Simulation of the geometry and optical characteristics of urban elements (houses, etc.). (2) Modeling of thermal infrared emission by vegetation and urban elements. The new DART version was used in the context of the CAPITOUL project. For that, districts of the Toulouse urban data base (Autocad format) were translated into DART scenes. This allowed us to simulate visible, near infrared and thermal infrared satellite images of Toulouse districts. Moreover, the 3D radiation budget was used by DARTEB for simulating the time evolution of a number of geophysical quantities of various surface elements (roads, walls, roofs). Results were successfully compared with ground measurements of the CAPITOUL project.

  5. AgRISTARS: Early warning and crop condition assessment. Plant cover, soil temperature, freeze, water stress, and evapotranspiration conditions

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L. (Principal Investigator); Nixon, P. R.; Gausman, H. W.; Namken, L. N.; Leamer, R. W.; Richardson, A. J.

    1981-01-01

    Emissive (10.5 to 12.5 microns) and reflective (0.55 to 1.1 microns) data for ten daytime scenes and infrared data for six nighttime scenes of southern Texas were analyzed for plant cover, soil temperature, freeze, water stress, and evapotranspiration. Heat Capacity Mapping Mission radiometric temperatures were within 2 C of dewpoint temperatures, significantly correlated with variables important in evapotranspiration, and related to freeze severity and planting-depth soil temperatures.

  6. Growing Crystals on the Ceiling.

    ERIC Educational Resources Information Center

    Christman, Robert A.

    1980-01-01

    Described is a method of studying growing crystals in a classroom utilizing a carousel projector standing vertically. A saturated salt solution is placed on a slide on the lens of the projector, and the heat from the projector causes the water to evaporate and the salt to crystallize. (Author/DS)

  7. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, Donald P.

    1998-01-01

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.

  8. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, D.P.

    1998-05-12

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.

  9. Spatial and temporal variability of hyperspectral signatures of terrain

    NASA Astrophysics Data System (ADS)

    Jones, K. F.; Perovich, D. K.; Koenig, G. G.

    2008-04-01

    Electromagnetic signatures of terrain exhibit significant spatial heterogeneity on a range of scales as well as considerable temporal variability. A statistical characterization of the spatial heterogeneity and spatial scaling algorithms of terrain electromagnetic signatures are required to extrapolate measurements to larger scales. Basic terrain elements including bare soil, grass, deciduous, and coniferous trees were studied in a quasi-laboratory setting using instrumented test sites in Hanover, NH and Yuma, AZ. Observations were made using a visible and near infrared spectroradiometer (350 - 2500 nm) and hyperspectral camera (400 - 1100 nm). Results are reported illustrating: i) several difference scenes; ii) a terrain scene time series sampled over an annual cycle; and iii) the detection of artifacts in scenes. A principal component analysis indicated that the first three principal components typically explained between 90 and 99% of the variance of the 30 to 40-channel hyperspectral images. Higher order principal components of hyperspectral images are useful for detecting artifacts in scenes.
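The principal-component statistic quoted above (the first three components explaining 90-99% of the variance of a 30-to-40-channel image) can be reproduced on synthetic data; the two-endmember "scene" below is an assumption for illustration, not the authors' measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 32-band cube: two broad spectral endmembers mixed per pixel,
# plus noise, loosely mimicking a grass/soil scene.
bands, npix = 32, 5000
wl = np.linspace(0, 1, bands)
endmembers = np.stack([np.exp(-((wl - 0.3) / 0.15) ** 2),
                       np.exp(-((wl - 0.7) / 0.2) ** 2)])
abund = rng.random((npix, 2))
cube = abund @ endmembers + 0.01 * rng.standard_normal((npix, bands))

# Explained variance from eigenvalues of the band-to-band covariance.
cov = np.cov(cube, rowvar=False)         # bands x bands
evals = np.linalg.eigvalsh(cov)[::-1]    # eigenvalues, descending
explained = evals.cumsum() / evals.sum()
print(explained[2])                      # fraction captured by first 3 PCs
```

Because the synthetic scene is essentially rank two plus noise, the leading components dominate, which is the same structure that makes the higher-order components sensitive to artifacts.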

  10. LCD Projectors: An Evaluation of Features and Utilization for Educators.

    ERIC Educational Resources Information Center

    Fawson, Curtis E.

    1990-01-01

    Describes liquid crystal display (LCD) projectors and discusses their use in educational settings. Highlights include rear screen projection; LCD projectors currently available and the number of pixel elements in each; and examples of instructional applications, including portable setups, and use with videocassette recorders (VCRs), computers, and…

  11. Digital Projectors Demystified

    ERIC Educational Resources Information Center

    Careless, James

    2007-01-01

    Digital projectors are becoming a common sight in U.S. schools. A projector can serve many roles, from letting teachers give tours of educational Web sites to having students present their projects to the entire class. With this trend come questions: Which projection technology is the most cost-effective? Which requires the least maintenance? How…

  12. Market trends in the projection display industry

    NASA Astrophysics Data System (ADS)

    Dash, Sweta

    2001-03-01

The projection display industry represents a multibillion-dollar market that includes four distinct technologies. High-volume consumer products and high-value business products drive the market, with different technologies being used in different application markets. The consumer market is dominated by rear CRT technology, especially in the projection TV segment. Rear LCD (liquid crystal display), MEMS/DLP (Digital Light Processing™), and LCOS (liquid crystal on silicon) TVs are slowly emerging as future competitors to rear CRT projectors. Front CRT projectors are also facing challenges from LCD and DLP technology in the home theater market, while the business market is completely dominated by front LCD and DLP technology. Three-chip DLP projectors have replaced liquid crystal light valves in large-venue applications, where projectors have higher light output requirements. In recent years, front LCD and LCOS projectors have been increasingly competing with three-chip DLP projectors, especially at the low end of the large-venue application market. Within the next five years the projection market will experience very fast growth. Sales and presentation applications, which are the fastest growing applications in the business market, will continue to be the major driving force for the growth of front projectors, and the shift in the consumer market to digital and HDTV products will drive the rear projection market.

  13. A study to explore the use of orbital remote sensing to determine native arid plant distribution. [Arizona

    NASA Technical Reports Server (NTRS)

    Mcginnies, W. G.; Haase, E. F. (Principal Investigator); Musick, H. B. (Compiler)

    1973-01-01

The author has identified the following significant results. Ground truth spectral signature data for various types of scenes, including ground with and without annuals, and various shrubs, were collected. When these signature data are plotted with infrared (MSS band 6 or 7) reflectivity on one axis and red (MSS band 5) reflectivity on the other axis, clusters of data from the various types of scenes are distinct. This method of expressing spectral signature data appears to be more useful for distinguishing types of scenes than a simple infrared to red reflectivity ratio. Large areas of varnished desert pavement are visible and mappable on ERTS-1 and high altitude aircraft imagery. A large scale vegetation pattern was found to be correlated with the presence of the desert pavement. The large scale correlation was used in mapping the vegetation of the area. It was found that a distinctive soil type was associated with the presence of the varnished desert pavement. The high salinity and exchangeable sodium percentage of this soil type provide a basis for the explanation of both the large scale and small scale vegetation pattern.

  14. Quantitative image fusion in infrared radiometry

    NASA Astrophysics Data System (ADS)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
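The multi-exposure fusion described above can be illustrated with a deliberately simplified model: a linear camera, a known bias, and inverse-variance-style weights, combined in the 'subtract-then-fuse' ordering. The paper itself estimates the bias by self-calibration and fuses via per-pixel nonlinear weighted least squares, so this is a sketch of the idea rather than the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(0.1, 5.0, size=1000)     # "true" photoquantity map
bias = 0.2                                 # detector bias (assumed known here)
times = np.array([0.05, 0.2, 0.8, 3.2])    # exposure times
full_well = 4.0                            # saturation level

est_num = np.zeros_like(phi)
est_den = np.zeros_like(phi)
for t in times:
    # Each exposure records y = clip(t*phi + bias + noise, 0, full_well).
    y = np.clip(t * phi + bias + 0.01 * rng.standard_normal(phi.shape),
                0.0, full_well)
    valid = y < full_well                  # drop saturated samples
    w = valid * t ** 2                     # longer exposure -> higher SNR
    est_num += w * (y - bias) / t          # subtract bias, then fuse
    est_den += w
phi_hat = est_num / est_den

print(np.abs(phi_hat - phi).max())         # worst-case recovery error
```

The shortest exposure never saturates here, so every pixel has at least one valid sample; real data additionally requires the nonlinearity and bias-frame handling the paper develops.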

  15. A HWIL test facility of infrared imaging laser radar using direct signal injection

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Lu, Wei; Wang, Chunhui; Wang, Qi

    2005-01-01

Laser radar has been widely used in recent years, and hardware-in-the-loop (HWIL) testing of laser radar has become important because of its low cost and high fidelity compared with on-the-fly testing and all-digital simulation, respectively. Scene generation and projection are two key technologies of hardware-in-the-loop testing of laser radar, and they are complicated because the 3D images must account for time delay. The scene generation process begins with the definition of the target geometry, reflectivity, and range. The real-time 3D scene generation computer is PC-based hardware, and the 3D target models were modeled using 3dsMAX. The scene generation software was written in C and OpenGL and is executed to extract the Z-buffer from the bit planes to main memory as a range image. These pixels contain each target position x, y, z and its respective intensity and range value. Work on expensive optical injection technologies of scene projection, such as LDP arrays, VCSEL arrays, and DMDs, and the associated scene generation is ongoing. But optical scene projection is complicated and often unaffordable. In this paper a cheaper test facility is described that uses direct electronic injection to provide range images for laser radar testing. The electronic delay and pulse shaping circuits inject the scenes directly into the seeker's signal processing unit.
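One concrete step in the pipeline above, extracting the Z-buffer as a range image, requires undoing the nonlinear depth encoding of the graphics pipeline. Below is the standard OpenGL depth linearization; the near/far clip-plane values are illustrative, not taken from the paper.

```python
import numpy as np

def zbuffer_to_range(z_buf, near, far):
    """Convert window-space depth in [0, 1] to metric eye-space range."""
    z_ndc = 2.0 * z_buf - 1.0                    # window -> NDC depth
    return 2.0 * near * far / (far + near - z_ndc * (far - near))

near, far = 1.0, 1000.0                          # assumed clip planes, meters

# Round-trip check: forward-project a known range, then invert it.
true_range = 250.0
z_ndc = (far + near - 2.0 * near * far / true_range) / (far - near)
z_buf = 0.5 * (z_ndc + 1.0)
r = zbuffer_to_range(z_buf, near, far)
print(r)                                         # recovers ~250.0
```

Because most depth precision is packed near the near plane, the far-field range values a ladar simulation needs are exactly where the raw Z-buffer is least linear, which is why this conversion matters.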

  16. Maximum Likelihood Detection of Electro-Optic Moving Targets

    DTIC Science & Technology

    1992-01-16

indicates intensity. The Infrared Measurements Sensor (IRMS) is a scanning sensor that collects both long-wavelength infrared (LWIR, 8 to 12 μm)...moving clutter. Nonstationary spatial statistics correspond to the nonuniform intensity of the background scene. An equivalent viewpoint is to...Figure 6 compares theory and experiment for 10 frames of the Longjump LWIR data obtained from the IRMS scanning sensor, which is looking at a background

  17. Modelling Middle Infrared Thermal Imagery from Observed or Simulated Active Fire

    NASA Astrophysics Data System (ADS)

    Paugam, R.; Gastellu-Etchegorry, J. P.; Mell, W.; Johnston, J.; Filippi, J. B.

    2016-12-01

The Fire Radiative Power (FRP) is used in the atmospheric and fire communities to estimate fire emissions. For example, the current version of the emission inventory GFAS uses FRP observations from the MODIS sensors to derive the daily global distribution of fire emissions. Although the FRP product is widely accepted, most of its theoretical justifications are still based on small-scale burns. When up-scaling to large fires, the effects of view angle, canopy cover, or smoke absorption are still unknown. To address those questions, we are building a system based on the DART radiative transfer model to simulate the middle infrared radiance emitted by a propagating fire front and transferred through the surrounding scene made of ambient vegetation and plume aerosols. The current version of the system was applied to fires ranging from 1 m2 to 7 ha. The 3D fire scene used as input to DART is made of the flame, the vegetation (burnt and unburnt), and the plume. It can be set up either from [i] a 3D physics-based model scene (i.e. WFDS, mainly applicable to small-scale burns), [ii] coupled 2D fire spread-atmospheric model outputs (e.g. ForeFire-MesoNH), or [iii] thermal imagery observations (here plume effects are not considered). In the last two cases, as the complexity of the physical processes occurring in the flame (in particular soot formation and emission) is not resolved, the flame structures are parameterized with (a) temperature and soot concentration based on empirically derived profiles and (b) a 3D triangular-shaped hull interpolated at the fire front location. Once the 3D fire scene is set up, DART is used to render thermal imagery in the middle infrared. Using data collected from burns conducted at different scales, the modelled thermal imagery is compared against observations, and the effects of view angle are discussed.

  18. Active modulation of laser coded systems using near infrared video projection system based on digital micromirror device (DMD)

    NASA Astrophysics Data System (ADS)

    Khalifa, Aly A.; Aly, Hussein A.; El-Sherif, Ashraf F.

    2016-02-01

    Near infrared (NIR) dynamic scene projection systems are used to perform hardware in-the-loop (HWIL) testing of a unit under test operating in the NIR band. The common and complex requirement of a class of these units is a dynamic scene that is spatio-temporal variant. In this paper we apply and investigate active external modulation of NIR laser in different ranges of temporal frequencies. We use digital micromirror devices (DMDs) integrated as the core of a NIR projection system to generate these dynamic scenes. We deploy the spatial pattern to the DMD controller to simultaneously yield the required amplitude by pulse width modulation (PWM) of the mirror elements as well as the spatio-temporal pattern. Desired modulation and coding of high stable, high power visible (Red laser at 640 nm) and NIR (Diode laser at 976 nm) using the combination of different optical masks based on DMD were achieved. These spatial versatile active coding strategies for both low and high frequencies in the range of kHz for irradiance of different targets were generated by our system and recorded using VIS-NIR fast cameras. The temporally-modulated laser pulse traces were measured using array of fast response photodetectors. Finally using a high resolution spectrometer, we evaluated the NIR dynamic scene projection system response in terms of preserving the wavelength and band spread of the NIR source after projection.
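The pulse-width-modulation amplitude control mentioned above can be sketched as binary-weighted bit-planes: each mirror is only on or off, so gray levels come from the fraction of time a mirror spends on. The decomposition below is a conceptual model; the timing of a real DMD controller is device-specific.

```python
import numpy as np

def pwm_bitplanes(image8, bits=8):
    """Return (bitplanes, durations): bitplanes[k] is shown for durations[k]."""
    planes = [(image8 >> k) & 1 for k in range(bits)]          # LSB first
    durations = np.array([2 ** k for k in range(bits)], float)  # binary weights
    return np.stack(planes), durations

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)  # toy 2x2 amplitude map
planes, durations = pwm_bitplanes(img)

# Time-averaged intensity seen by a detector slower than the PWM cycle:
avg = np.tensordot(durations, planes, axes=(0, 0)) / durations.sum()
print(avg * 255.0)                                     # reproduces gray levels
```

A fast photodetector, by contrast, resolves the individual binary time slices, which is why the temporal coding strategies in the paper interact with the PWM scheme.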

  19. The Physics of the Data Projector

    ERIC Educational Resources Information Center

    Reid, Alastair

    2008-01-01

    Data projectors have become a common sight in school classrooms, often in conjunction with an interactive whiteboard. Long periods of continuous use coupled with the transfer of a large amount of thermal energy from the projector's bulb means that they frequently break down, often in such a manner that they become uneconomic to repair. In this…

  20. Overhead Projector Spectrum of Polymethine Dye: A Physical Chemistry Demonstration

    NASA Astrophysics Data System (ADS)

    Solomon, Sally; Hur, Chinhyu

    1995-08-01

The position of the predominant peak of 1,1'-diethyl-4,4'-cyanine iodide is measured in class using an overhead projector spectrometer, then predicted using the model of a particle in a one-dimensional box. The calculated wavelength is in excellent agreement with the wavelength estimated from the overhead projector spectroscopy experiment.
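The particle-in-a-box prediction used in this demonstration follows the free-electron model: for N pi electrons in a box of length L, the HOMO-to-LUMO absorption falls at lambda = 8 m_e c L^2 / (h (N + 1)). The electron count and box length below are illustrative assumptions, not the measured values for this dye.

```python
# Free-electron (particle-in-a-box) estimate of a dye absorption wavelength.
h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

N = 6              # assumed number of pi electrons in the box (hypothetical)
L = 1.1e-9         # assumed box length, m (hypothetical, ~11 angstroms)

lam = 8 * m_e * c * L ** 2 / (h * (N + 1))
print(lam * 1e9)   # predicted HOMO->LUMO absorption wavelength, nm
```

With these assumed parameters the prediction lands in the visible, which is the qualitative point of the classroom exercise; the actual agreement reported in the record uses the dye's own conjugation length.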

  1. Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.

    PubMed

    Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick

    2009-08-17

In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods including other LMS and constant statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts, but has a slightly longer convergence time. (c) 2009 Optical Society of America
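A 1-D toy version of a gated LMS nonuniformity correction conveys the idea: each detector's offset is nudged toward agreement with a spatially smoothed "desired" image, but the update is gated off wherever the frame-to-frame change is small, since updating on a static scene is what burns ghosts into the correction. The gate, step size, and smoothing kernel below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nframes = 256, 400
bias = rng.normal(0.0, 0.5, npix)      # fixed-pattern offset to remove
offset = np.zeros(npix)                # running correction estimate
mu, gate_thresh = 0.05, 0.05           # LMS step size and gate threshold

prev = None
for _ in range(nframes):
    # Smooth scene with a random phase each frame, i.e. scene motion.
    scene = np.sin(np.linspace(0, 6, npix) + rng.uniform(0, 6))
    raw = scene + bias
    corrected = raw - offset
    desired = np.convolve(corrected, np.ones(9) / 9, mode="same")
    if prev is not None:
        gate = np.abs(raw - prev) > gate_thresh  # update only where motion
        offset += mu * gate * (corrected - desired)
    prev = raw

# Residual fixed-pattern error after correction (up to a global constant).
err = np.abs((offset - offset.mean()) - (bias - bias.mean())).mean()
print(err)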

  2. Recent Experiments Conducted with the Wide-Field Imaging Interferometry Testbed (WIIT)

    NASA Technical Reports Server (NTRS)

    Leisawitz, David T.; Juanola-Parramon, Roser; Bolcar, Matthew; Iacchetta, Alexander S.; Maher, Stephen F.; Rinehart, Stephen A.

    2016-01-01

    The Wide-field Imaging Interferometry Testbed (WIIT) was developed at NASA's Goddard Space Flight Center to demonstrate and explore the practical limitations inherent in wide field-of-view double Fourier (spatio-spectral) interferometry. The testbed delivers high-quality interferometric data and is capable of observing spatially and spectrally complex hyperspectral test scenes. Although WIIT operates at visible wavelengths, by design the data are representative of those from a space-based far-infrared observatory. We used WIIT to observe a calibrated, independently characterized test scene of modest spatial and spectral complexity, and an astronomically realistic test scene of much greater spatial and spectral complexity. This paper describes the experimental setup, summarizes the performance of the testbed, and presents representative data.

  3. On the regularized fermionic projector of the vacuum

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  4. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
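The algebraic core of the method can be illustrated in 1-D with a known integer shift (the paper estimates subpixel shifts and uses a linear interpolation model on 2-D imagery, so this is a simplification): two observations of a translated scene pin down all bias differences along the shift direction, and a cumulative sum recovers the bias up to one unknown constant.

```python
import numpy as np

rng = np.random.default_rng(3)
npix, d = 128, 1                       # array length and known shift (pixels)
z = np.cumsum(rng.normal(0, 1, npix + d))  # smooth-ish true scene profile
bias = rng.normal(0, 0.5, npix)        # fixed-pattern bias to recover

# y1[i] = z[i] + b[i];  y2[i] = z[i + d] + b[i]  (same detector, shifted scene)
y1 = z[:npix] + bias
y2 = z[d:npix + d] + bias

# Scene cancels:  y1[i + d] - y2[i] = b[i + d] - b[i]
diff = y1[d:] - y2[:-d]
b_est = np.concatenate(([0.0], np.cumsum(diff)))   # bias up to a constant

err = np.abs((b_est - b_est[0]) - (bias - bias[0])).max()
print(err)                             # exact in this noiseless sketch
```

With noise and subpixel motion the differences become an overdetermined linear system, which is where the algebraic least-squares machinery of the paper comes in.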

  5. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array.

    PubMed

    Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun

    2016-11-10

In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work is based on the use of two well-known scene-based methods, namely an adaptive method and an interframe registration-based method that exploits a pure-translation motion model between frames. The two approaches have their benefits and drawbacks, which make them extremely effective in certain conditions and not adapted to others. Following on from that, we developed a method robust to the various conditions that may slow or affect the correction process, by elaborating a decision criterion that switches the process to the most effective technique to ensure fast and reliable correction. In addition, problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared to the two state-of-the-art techniques cited above.

  6. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array

    PubMed Central

    Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun

    2016-01-01

In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work is based on the use of two well-known scene-based methods, namely an adaptive method and an interframe registration-based method that exploits a pure-translation motion model between frames. The two approaches have their benefits and drawbacks, which make them extremely effective in certain conditions and not adapted to others. Following on from that, we developed a method robust to the various conditions that may slow or affect the correction process, by elaborating a decision criterion that switches the process to the most effective technique to ensure fast and reliable correction. In addition, problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared to the two state-of-the-art techniques cited above. PMID:27834893

  7. Infrared polarimetric sensing of oil on water

    NASA Astrophysics Data System (ADS)

    Chenault, David B.; Vaden, Justin P.; Mitchell, Douglas A.; DeMicco, Erik D.

    2016-10-01

    Infrared polarimetry is an emerging sensing modality that offers the potential for significantly enhanced contrast in situations where conventional thermal imaging falls short. Polarimetric imagery leverages the different polarization signatures that result from material differences, surface roughness quality, and geometry that are frequently different from those features that lead to thermal signatures. Imaging of the polarization in a scene can lead to enhanced understanding, particularly when materials in a scene are at thermal equilibrium. Polaris Sensor Technologies has measured the polarization signatures of oil on water in a number of different scenarios and has shown significant improvement in detection through the contrast improvement offered by polarimetry. The sensing improvement offers the promise of automated detection of oil spills and leaks for routine monitoring and accidents with the added benefit of being able to continue monitoring at night. In this paper, we describe the instrumentation, and the results of several measurement exercises in both controlled and uncontrolled conditions.
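The standard polarimetric reduction behind this kind of sensing can be sketched as follows (the specific Polaris instrument design is not described in the record, so this is the generic division-of-time scheme): four intensity images through a polarizer at 0/45/90/135 degrees give the linear Stokes parameters, and the degree of linear polarization (DoLP) supplies the contrast channel that pure thermal imaging lacks.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and DoLP from four analyzer angles."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / s0
    return s0, s1, s2, dolp

# Synthetic check: fully polarized light at 30 deg, I(theta) = cos^2(theta - 30).
ang = np.deg2rad(np.array([0.0, 45.0, 90.0, 135.0]))
i = np.cos(ang - np.deg2rad(30.0)) ** 2
s0, s1, s2, dolp = linear_stokes(*i)
print(dolp)                              # ~1.0 for fully polarized light
```

An oil slick and the surrounding water at thermal equilibrium can have nearly identical s0 (the thermal image) while differing in DoLP, which is the contrast mechanism the paper exploits.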

  8. Development of an Infrared Remote Sensing System for Continuous Monitoring of Stromboli Volcano

    NASA Astrophysics Data System (ADS)

    Harig, R.; Burton, M.; Rausch, P.; Jordan, M.; Gorgas, J.; Gerhard, J.

    2009-04-01

In order to monitor gases emitted by Stromboli volcano in the Eolian archipelago, Italy, a remote sensing system based on Fourier-transform infrared spectroscopy has been developed and installed on the summit of Stromboli volcano. Hot rocks and lava are used as sources of infrared radiation. The system is based on an interferometer with a single detector element in combination with an azimuth-elevation scanning mirror system. The mirror system is used to align the field of view of the instrument. In addition, the system is equipped with an infrared camera. Two basic modes of operation have been implemented: the user may use the infrared image to align the system to a vent that is to be examined. In addition, the scanning system may be used for (hyperspectral) imaging of the scene. In this mode, the scanning mirror is set to move sequentially to all positions within a region of interest, which is defined by the operator using the image generated from the infrared camera. The spectral range used for the measurements is 1600 - 4200 cm-1, allowing the quantification of many gases such as CO, CO2, SO2, and HCl. The spectral resolution is 0.5 cm-1. In order to protect the optical, mechanical, and electrical parts of the system from the volcanic gases, all components are contained in a gas-tight aluminium housing. The system is controlled via TCP/IP (data transfer by WLAN), allowing the user to operate it from a remote PC. The infrared image of the scene and measured spectra are transferred to and displayed by a remote PC at INGV or TUHH in real time. However, the system is capable of autonomous operation on the volcano once a measurement has been started. Measurements are stored by an internal embedded PC.

  9. 77 FR 34967 - Notice of Issuance of Final Determination Concerning Digital Projectors

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

... six function tests in the "post test", the EDID firmware is programmed into the digital projectors... tests and consists of at least 97 steps taking approximately 137.8 minutes. After the whole projector is... includes 11 kinds of function tests and consists of at least 97 steps which will take approximately 11...

  10. 76 FR 30180 - Notice of Issuance of Final Determination Concerning Pocket Projectors

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-24

    ..., and adhering by electrostatic means. The finished projector will undergo a series of tests in Taiwan: A pre-test, a run-in test, and a function test. The pre-test consists of: ensuring that the... the projector is turned on (developed in Taiwan), (2) test patterns that are projected on the screen...

  11. The Development of a Computer Controlled Super 8 Motion Picture Projector.

    ERIC Educational Resources Information Center

    Reynolds, Eldon J.

    Instructors in Child Development at the University of Texas at Austin selected sound motion pictures as the most effective medium to simulate the observation of children in nursery laboratories. A computer controlled projector was designed for this purpose. An interface and control unit controls the Super 8 projector from a time-sharing computer…

  12. Bulletin Boards and 3-D Showcases That Capture Them with Pizzazz. Volume 2.

    ERIC Educational Resources Information Center

    Hawthorne, Karen; Gibson, Jane E.

    This book features bulletin boards and showcases, designed to motivate readers to use the library, that have been the favorites of students in grades 7-12. Chapter 1 covers getting started, including principles of design, tools of design, background, borders, letters, leaves and flowers, opaque projector/overhead projector/document projector,…

  13. Principle component analyses of questionnaires measuring individual differences in synaesthetic phenomenology.

    PubMed

    Anderson, Hazel P; Ward, Jamie

    2015-05-01

Questionnaires have been developed for categorising grapheme-colour synaesthetes into two sub-types based on phenomenology: associators and projectors. The general approach has been to assume a priori the existence of two sub-types on a single dimension (with projector and associator as endpoints) rather than explore, in a data-driven fashion, other possible models. We collected responses from 175 grapheme-colour synaesthetes on two questionnaires, the Illustrated Synaesthetic Experience Questionnaire (Skelton, Ludwig, & Mohr, 2009) and Rouw and Scholte's (2007) Projector-Associator Questionnaire. After principal component analysis, both questionnaires comprised two factors which coincide with the projector/associator distinction. This suggests that projectors and associators are not opposites of each other, but separate dimensions of experience (e.g. some synaesthetes claim to be both, others claim to be neither). The revised questionnaires provide a useful tool for researchers and insights into the phenomenology of synaesthesia. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Multi-projector auto-calibration and placement optimization for non-planar surfaces

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong

    2015-10-01

    Non-planar projection has been widely applied in virtual reality and digital entertainment and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, a non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. This method corrects the geometric calibration error caused by the screen's manufactured imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demand, this paper presents the overall performance evaluation criteria for the multi-projector system. According to these criteria, we determined the optimal placement for the projectors. This method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, and demonstrates a broad applicability.

  15. Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model.

    PubMed

    Li, Jing; Zhang, Fangbing; Wei, Lisong; Yang, Tao; Lu, Zhaoyang

    2017-10-16

    Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost.

  16. Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model

    PubMed Central

    Li, Jing; Zhang, Fangbing; Wei, Lisong; Lu, Zhaoyang

    2017-01-01

    Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost. PMID:29035295

  17. Use of LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils

    NASA Technical Reports Server (NTRS)

    Eagleson, Peter S.; Jasinski, Michael F.

    1988-01-01

    This work focuses on the characterization of natural, spatially variable, semivegetated landscapes using a linear, stochastic, canopy-soil reflectance model. A first application of the model was the investigation of the effects of subpixel and regional variability of scenes on the shape and structure of red-infrared scattergrams. Additionally, the model was used to investigate the inverse problem, the estimation of subpixel vegetation cover, given only the scattergrams of simulated satellite scale multispectral scenes. The major aspects of that work, including recent field investigations, are summarized.

  18. Demonstration of KHILS two-color IR projection capability

    NASA Astrophysics Data System (ADS)

    Jones, Lawrence E.; Coker, Jason S.; Garbo, Dennis L.; Olson, Eric M.; Murrer, Robert Lee, Jr.; Bergin, Thomas P.; Goldsmith, George C., II; Crow, Dennis R.; Guertin, Andrew W.; Dougherty, Michael; Marler, Thomas M.; Timms, Virgil G.

    1998-07-01

    For more than a decade, there has been considerable discussion about using different IR bands for the detection of low contrast military targets. Theory predicts that a target can have little to no contrast against the background in one IR band while having a discernible signature in another IR band. A significant amount of effort has been invested towards establishing hardware that is capable of simultaneously imaging in two IR bands to take advantage of this phenomenon. Focal plane arrays (FPA) are starting to materialize with this simultaneous two-color imaging capability. The Kinetic Kill Vehicle Hardware-in-the-loop Simulator (KHILS) team of the Air Force Research Laboratory and the Guided Weapons Evaluation Facility (GWEF), both at Eglin AFB, FL, have spent the last 10 years developing the ability to project dynamic IR scenes to imaging IR seekers. Through the Wideband Infrared Scene Projector (WISP) program, the capability to project two simultaneous IR scenes to a dual color seeker has been established at KHILS. WISP utilizes resistor arrays to produce the IR energy. Resistor arrays are not ideal blackbodies. The projection of two IR colors with resistor arrays, therefore, requires two optically coupled arrays. This paper documents the first demonstration of two-color simultaneous projection at KHILS. Agema cameras were used for the measurements. The Agema's HgCdTe detector has responsivity from 4 to 14 microns. A blackbody and two IR filters (MWIR = 4.2 to 7.4 microns, LWIR = 7.7 to 13 microns) were used to calibrate the Agema in two bands. Each filter was placed in front of the blackbody one at a time, and the temperature of the blackbody was stepped up in incremental amounts. The output counts from the Agema were recorded at each temperature. This calibration process established the radiance to Agema output count curves for the two bands. The WISP optical system utilizes a dichroic beam combiner to optically couple the two resistor arrays.
The transmission path of the beam combiner provided the LWIR (6.75 to 12 microns), while the reflective path produced the MWIR (3 to 6.5 microns). Each resistor array was individually projected into the Agema through the beam combiner at incremental output levels. Once again the Agema's output counts were recorded at each resistor array output level. These projections established the resistor array output to Agema count curves for the MWIR and LWIR resistor arrays. Using the radiance to Agema counts curves, the MWIR and LWIR resistor array output to radiance curves were established. With the calibration curves established, a two-color movie was projected and compared to the generated movie radiance values. By taking care to correctly account for the spectral qualities of the Agema camera, the calibration filters, and the dichroic beam combiner, the projections matched the theoretical calculations. In the near future, a Lockheed-Martin Multiple Quantum Well camera with true two-color IR capability will be tested.
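
    The calibration chain described in this abstract (blackbody radiance to camera counts, then resistor-array drive level to radiance via the same counts curve) can be sketched numerically. A minimal sketch, assuming a top-hat filter response; the blackbody setpoints and camera counts below are hypothetical:

```python
import numpy as np

# Physical constants (SI units)
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wl_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    return 2.0 * H * C**2 / wl_m**5 / np.expm1(H * C / (wl_m * KB * temp_k))

def band_radiance(temp_k, lo_um, hi_um, n=200):
    """Blackbody radiance integrated over a filter band (top-hat filter assumed)."""
    wl = np.linspace(lo_um * 1e-6, hi_um * 1e-6, n)
    return planck_radiance(wl, temp_k).mean() * (wl[-1] - wl[0])

# Hypothetical calibration run: blackbody setpoints and recorded camera counts
temps = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
counts = np.array([1200.0, 2900.0, 5600.0, 9400.0, 14300.0])

# Radiance-to-counts curve for the MWIR filter band (4.2 to 7.4 microns)
radiances = np.array([band_radiance(t, 4.2, 7.4) for t in temps])

def counts_to_radiance(c):
    """Invert the calibration curve by interpolation (counts must be monotonic)."""
    return np.interp(c, counts, radiances)
```

    Applying the same counts-to-radiance curve to the camera's response at each resistor-array drive level yields the array-output-to-radiance curves described above.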

  19. The Use of a Computer-Controlled Random Access Slide Projector for Rapid Information Display.

    ERIC Educational Resources Information Center

    Muller, Mark T.

    A 35mm random access slide projector operated in conjunction with a computer terminal was adapted to meet the need for a more rapid and complex graphic display mechanism than is currently available with teletypewriter terminals. The model projector can be operated manually to provide for a maintenance checkout of the electromechanical system.…

  20. Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.

    PubMed

    Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban

    2015-07-20

    In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.
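
    As a rough illustration of the kind of joint estimation this abstract describes, the sketch below runs gradient descent on a sum of squared data-fit terms plus a simple Laplacian smoothness penalty (a stand-in for the paper's nonlocal-means regularizer). The circular-shift observation model, pooling factor, and step sizes are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def downsample(x, f=2):
    """Average-pool by factor f (the assumed camera sampling model)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def estimate(frames, shifts, f=2, lam=0.01, iters=200, lr=0.5):
    """Jointly estimate a high-resolution image x and a fixed-pattern offset b
    by gradient descent on sum_k ||downsample(shift_k(x)) + b - y_k||^2 / 2
    plus a Laplacian smoothness penalty on x. Circular shifts for simplicity."""
    h, w = frames[0].shape
    x = np.zeros((h * f, w * f))
    b = np.zeros((h, w))
    for _ in range(iters):
        gx = np.zeros_like(x)
        gb = np.zeros_like(b)
        for (dy, dx), y in zip(shifts, frames):
            shifted = np.roll(x, (dy, dx), axis=(0, 1))
            r = downsample(shifted, f) + b - y            # per-frame residual
            gb += r
            up = np.kron(r, np.ones((f, f))) / (f * f)    # adjoint of downsample
            gx += np.roll(up, (-dy, -dx), axis=(0, 1))
        lap = (-4 * x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x -= lr * (gx / len(frames) - lam * lap)
        b -= lr * gb / len(frames)
    return x, b
```

    The descent jointly drives the fixed-pattern term and the high-resolution estimate toward a model that reproduces every observed frame.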

  1. Spectral Analysis of the Primary Flight Focal Plane Arrays for the Thermal Infrared Sensor

    NASA Technical Reports Server (NTRS)

    Montanaro, Matthew; Reuter, Dennis C.; Markham, Brian L.; Thome, Kurtis J.; Lunsford, Allen W.; Jhabvala, Murzy D.; Rohrbach, Scott O.; Gerace, Aaron D.

    2011-01-01

    Thermal Infrared Sensor (TIRS) is a (1) New longwave infrared (10 - 12 micron) sensor for the Landsat Data Continuity Mission, (2) 185 km ground swath; 100 meter pixel size on ground, (3) Pushbroom sensor configuration. Issues of calibration are: (1) Single detector -- only one calibration, (2) Multiple detectors -- unique calibration for each detector -- leads to pixel-to-pixel artifacts. Objectives are: (1) Predict extent of residual striping when viewing a uniform blackbody target through various atmospheres, (2) Determine how different spectral shapes affect the derived surface temperature in a realistic synthetic scene.

  2. Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2

    NASA Astrophysics Data System (ADS)

    Makar, Robert J.; O'Toole, Brian E.

    1998-07-01

    An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc.® (SGI) Onyx2™ has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality®, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.

  3. Contrast performance modeling of broadband reflective imaging systems with hypothetical tunable filter fore-optics

    NASA Astrophysics Data System (ADS)

    Hodgkin, Van A.

    2015-05-01

    Most mass-produced, commercially available and fielded military reflective imaging systems operate across broad swaths of the visible, near infrared (NIR), and shortwave infrared (SWIR) wavebands without any spectral selectivity within those wavebands. In applications that employ these systems, it is not uncommon to be imaging a scene in which the image contrasts between the objects of interest, i.e., the targets, and the objects of little or no interest, i.e., the backgrounds, are sufficiently low to make target discrimination difficult or uncertain. This can occur even when the spectral distribution of the target and background reflectivity across the given waveband differ significantly from each other, because the fundamental components of broadband image contrast are the spectral integrals of the target and background signatures. Spectral integration by the detectors tends to smooth out any differences. Hyperspectral imaging is one approach to preserving, and thus highlighting, spectral differences across the scene, even when the waveband integrated signatures would be about the same, but it is an expensive, complex, noncompact, and untimely solution. This paper documents a study of how the capability to selectively customize the spectral width and center wavelength with a hypothetical tunable fore-optic filter would allow a broadband reflective imaging sensor to optimize image contrast as a function of scene content and ambient illumination.
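
    The core point (band-integrated signatures can wash out spectral differences that a tunable narrowband filter can recover) is easy to sketch. Below, two hypothetical reflectance spectra have nearly identical broadband integrals but strong narrowband differences, and a grid search stands in for tuning the filter's center and width; all spectra and numbers are made up for illustration:

```python
import numpy as np

wl = np.linspace(0.9, 1.7, 400)   # SWIR wavelengths, microns (hypothetical)

# Hypothetical target/background reflectances: similar broadband integrals,
# strongly different spectral shapes
target = 0.3 + 0.15 * np.sin(8 * wl)
background = 0.3 - 0.15 * np.sin(8 * wl)

def band_contrast(center, width):
    """Michelson contrast of the band-integrated signatures seen through a
    top-hat filter with the given center wavelength and width (microns)."""
    sel = (wl > center - width / 2) & (wl < center + width / 2)
    t, b = target[sel].sum(), background[sel].sum()
    return abs(t - b) / (t + b)

# Grid-search the tunable filter settings for maximum image contrast
best = max((band_contrast(c, w), c, w)
           for c in np.linspace(1.0, 1.6, 25)
           for w in np.linspace(0.05, 0.6, 12))
```

    Here the full-band contrast is near zero while a well-placed narrow band recovers strong contrast, which is the effect the hypothetical tunable fore-optic exploits.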

  4. Reduction of Radiometric Miscalibration—Applications to Pushbroom Sensors

    PubMed Central

    Rogaß, Christian; Spengler, Daniel; Bochow, Mathias; Segl, Karl; Lausch, Angela; Doktor, Daniel; Roessner, Sigrid; Behling, Robert; Wetzel, Hans-Ulrich; Kaufmann, Hermann

    2011-01-01

    The analysis of hyperspectral images is an important task in Remote Sensing. Foregoing radiometric calibration results in the assignment of incident electromagnetic radiation to digital numbers and reduces the striping caused by slightly different responses of the pixel detectors. However, due to uncertainties in the calibration some striping remains. This publication presents a new reduction framework that efficiently reduces linear and nonlinear miscalibrations by an image-driven, radiometric recalibration and rescaling. The proposed framework—Reduction Of Miscalibration Effects (ROME)—considering spectral and spatial probability distributions, is constrained by specific minimisation and maximisation principles and incorporates image processing techniques such as Minkowski metrics and convolution. To objectively evaluate the performance of the new approach, the technique was applied to a variety of commonly used image examples and to one simulated and miscalibrated EnMAP (Environmental Mapping and Analysis Program) scene. Other examples consist of miscalibrated AISA/Eagle VNIR (Visible and Near Infrared) and Hawk SWIR (Short Wave Infrared) scenes of rural areas of the region Fichtwald in Germany and Hyperion scenes of the Jalal-Abad district in Southern Kyrgyzstan. Recovery rates of approximately 97% for linear and approximately 94% for nonlinear miscalibrated data were achieved, clearly demonstrating the benefits of the new approach and its potential for broad applicability to miscalibrated pushbroom sensor data. PMID:22163960

  5. Laboratory Measurement of the Brighter-fatter Effect in an H2RG Infrared Detector

    NASA Astrophysics Data System (ADS)

    Plazas, A. A.; Shapiro, C.; Smith, R.; Huff, E.; Rhodes, J.

    2018-06-01

    The “brighter-fatter” (BF) effect is a phenomenon—originally discovered in charge coupled devices—in which the size of the detector point-spread function (PSF) increases with brightness. We present, for the first time, laboratory measurements demonstrating the existence of the effect in a Hawaii-2RG HgCdTe near-infrared (NIR) detector. We use JPL’s Precision Projector Laboratory, a facility for emulating astronomical observations with UV/VIS/NIR detectors, to project about 17,000 point sources onto the detector to stimulate the effect. After calibrating the detector for nonlinearity with flat-fields, we find evidence that charge is nonlinearly shifted from bright pixels to neighboring pixels during exposures of point sources, consistent with the existence of a BF-type effect. NASA's Wide Field Infrared Survey Telescope (WFIRST) will use similar detectors to measure weak gravitational lensing from the shapes of hundreds of millions of galaxies in the NIR. The WFIRST PSF size must be calibrated to ≈0.1% to avoid biased inferences of dark matter and dark energy parameters; therefore further study and calibration of the BF effect in realistic images will be crucial.

  6. Real-time interactive projection system based on infrared structured-light method

    NASA Astrophysics Data System (ADS)

    Qiao, Xiaorui; Zhou, Qian; Ni, Kai; He, Liang; Wu, Guanhao; Mao, Leshan; Cheng, Xuemin; Ma, Jianshe

    2012-11-01

    Interactive technologies have developed greatly in recent years, especially in the projection field. At present, however, most interactive projection systems rely on specially designed interactive pens or whiteboards, which is inconvenient and limits the user experience. In this paper, we introduce our recent progress on theoretically modeling a real-time interactive projection system. The system permits the user to operate or draw on the projection screen directly with the fingers, without any other auxiliary equipment. The projector projects infrared stripe patterns onto the screen, and the CCD captures the deformed image. We resolve the finger's position and track its movement by processing the deformed image in real time. A new criterion for determining whether the finger touches the screen is proposed: at contact, the first deformed fringe on the fingertip and the first fringe in the finger's shadow are the same stripe. With this correspondence established, the location parameters can be determined by triangulation. Simulation results are given, and errors are analyzed.
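
    The touch test and the triangulation step can be sketched in a few lines; the rectified camera-projector geometry and all numbers are illustrative assumptions, not the paper's calibration:

```python
def touches_screen(first_fringe_on_finger, first_fringe_in_shadow):
    """Touch criterion from the paper: at contact, the first deformed fringe
    on the fingertip and the first fringe in the finger's shadow coincide."""
    return first_fringe_on_finger == first_fringe_in_shadow

def fringe_depth(baseline_mm, focal_px, x_observed_px, x_reference_px):
    """Triangulated range from the pixel shift of a fringe relative to its
    reference (flat-screen) position, assuming a rectified geometry."""
    disparity = x_reference_px - x_observed_px
    if disparity == 0:
        return 0.0  # fringe undisplaced: the point lies on the screen plane
    return baseline_mm * focal_px / disparity
```

    For example, a 20-pixel fringe shift with a 100 mm baseline and an 800-pixel focal length triangulates to a range of 4000 mm in this simplified model.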

  7. A portable low-cost 3D point cloud acquiring method based on structure light

    NASA Astrophysics Data System (ADS)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can solve the problems of lack of texture information and low efficiency of acquiring point cloud data with only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for the random encoding pattern: a coding pattern is projected onto the target surface to form texture information that is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After optimization of the global algorithm and multi-kernel parallel development with the fusion of hardware and software, a fast acquisition system for point-cloud data is accomplished. Evaluation of point cloud accuracy shows that point clouds acquired by the proposed method have higher precision. What's more, the scanning speed meets the demands of dynamic scenes and has good practical application value.

  8. Classification of high dimensional multispectral image data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1993-01-01

    A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for materials without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada in 1992, and the results are compared to an existing geologic map. This technique performed well despite noisy data and the fact that some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data classified. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method, as compared to the geology of the Cuprite scene.

  9. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that for each band there will be ground objects with similar reflectance to form uniform regions when a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm, and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method, and results show that stripes in the scenes have been well corrected without any significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared to two other regular methods, and they are evaluated based on their adaptability to the various scenes, non-uniformity, roughness and spectral fidelity. It turns out that the proposed method shows strong adaptability, high accuracy and efficiency.
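
    A one-coefficient-per-detector version of this idea can be sketched as follows: take the scan lines with the lowest cross-track variance as the "uniform regions", then scale each detector column to a common reference level. This is only a minimal stand-in for the paper's sorting-based extraction:

```python
import numpy as np

def scene_nuc(scan, n_uniform=50):
    """Scene-based NUC sketch for one band of a push-broom sensor.
    scan: (n_lines, n_detectors). The least-variable scan lines are taken
    as uniform ground, and each detector is scaled to the common mean."""
    variances = scan.var(axis=1)
    uniform = scan[np.argsort(variances)[:n_uniform]]   # most-uniform lines
    col_mean = uniform.mean(axis=0)                     # per-detector response
    gain = col_mean.mean() / col_mean                   # NUC coefficients
    return scan * gain, gain
```

    On lines that really are uniform, the correction removes the along-track stripes entirely; structured lines are corrected with the same per-detector gains.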

  10. New technologies for HWIL testing of WFOV, large-format FPA sensor systems

    NASA Astrophysics Data System (ADS)

    Fink, Christopher

    2016-05-01

    Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced the current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high resolution MWIR Camera deployed in some UAV reconnaissance systems features 16MP resolution at 60Hz, while the current upper limit of IR emitter arrays is ~1MP, and single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1600 pixels at 60Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large format FPAs, including the size and spatial detail required for very large area terrains, and multi-channel low-latency synchronization to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, where digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity physics-based IR scene simulator in conjunction with a novel image composition hardware unit, to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.

  11. Relative spectral response corrected calibration inter-comparison of S-NPP VIIRS and Aqua MODIS thermal emissive bands

    NASA Astrophysics Data System (ADS)

    Efremova, Boryana; Wu, Aisheng; Xiong, Xiaoxiong

    2014-09-01

    The S-NPP Visible Infrared Imaging Radiometer Suite (VIIRS) instrument is built with strong heritage from EOS MODIS, and has a very similar thermal emissive bands (TEB) calibration algorithm and on-board calibrating source, a V-grooved blackbody. The calibration of the two instruments can be assessed by comparing the brightness temperatures retrieved from VIIRS and Aqua MODIS simultaneous nadir observations (SNO) from their spectrally matched TEB. However, even though the VIIRS and MODIS bands are similar, there are still relative spectral response (RSR) differences, and thus some differences in the retrieved brightness temperatures are expected. The differences depend on both the type and the temperature of the observed scene, and contribute to the bias and the scatter of the comparison. In this paper we use S-NPP Cross-track Infrared Sounder (CrIS) data taken simultaneously with the VIIRS data to derive a correction for the slightly different spectral coverage of VIIRS and MODIS TEB bands. An attempt to correct for RSR differences is also made using MODTRAN models, computed with physical parameters appropriate for each scene, and compared to the value derived from actual CrIS spectra. After applying the CrIS-based correction for RSR differences we see an excellent agreement between the VIIRS and Aqua MODIS measurements in the studied band pairs M13-B23, M15-B31, and M16-B32. The agreement is better than the VIIRS uncertainty at cold scenes, and improves with increasing scene temperature up to about 290K.
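
    The RSR-driven bias the abstract describes can be illustrated by inverting band-averaged Planck radiance to a brightness temperature. The two Gaussian RSRs below are hypothetical stand-ins for a matched VIIRS/MODIS band pair, not the real spectral responses:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wl_m, t_k):
    """Blackbody spectral radiance."""
    return 2 * H * C**2 / wl_m**5 / np.expm1(H * C / (wl_m * KB * t_k))

def band_radiance(t_k, wl_um, rsr):
    """RSR-weighted band-averaged radiance at scene temperature t_k."""
    wl = wl_um * 1e-6
    return np.sum(planck(wl, t_k) * rsr) / np.sum(rsr)

def brightness_temp(radiance, wl_um, rsr, t_lo=150.0, t_hi=400.0):
    """Invert the band radiance to a brightness temperature by bisection."""
    for _ in range(60):
        mid = 0.5 * (t_lo + t_hi)
        if band_radiance(mid, wl_um, rsr) < radiance:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

# Hypothetical Gaussian RSRs standing in for a matched LWIR band pair
wl_um = np.linspace(10.0, 12.0, 200)
rsr_a = np.exp(-0.5 * ((wl_um - 10.8) / 0.30) ** 2)
rsr_b = np.exp(-0.5 * ((wl_um - 11.0) / 0.35) ** 2)
```

    Interpreting a scene radiance measured through one RSR with the other band's inversion produces a small brightness-temperature bias, which is exactly the kind of difference the CrIS-based correction removes.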

  12. Uncooled long-wave infrared hyperspectral imaging

    NASA Technical Reports Server (NTRS)

    Lucey, Paul G. (Inventor)

    2006-01-01

    A long-wave infrared hyperspectral sensor device employs a combination of an interferometer with an uncooled microbolometer array camera to produce hyperspectral images without the use of bulky, power-hungry motorized components, making it suitable for UAV vehicles, small mobile platforms, or in extraterrestrial environments. The sensor device can provide signal-to-noise ratios near 200 for ambient temperature scenes with 33 wavenumber resolution at a frame rate of 50 Hz, with higher results indicated by ongoing component improvements.

  13. ASTER Waves

    NASA Image and Video Library

    2000-10-06

    The pattern on the right half of this image of the Bay of Bengal is the result of two opposing wave trains colliding. This ASTER sub-scene, acquired on March 29, 2000, covers an area 18 kilometers (13 miles) wide and 15 kilometers (9 miles) long in three bands of the reflected visible and infrared wavelength region. The visible and near-infrared bands highlight surface waves due to specular reflection of sunlight off of the wave faces. http://photojournal.jpl.nasa.gov/catalog/PIA02662

  14. Orthogonal polynomial projectors for the Projector Augmented Wave (PAW) formalism.

    NASA Astrophysics Data System (ADS)

    Holzwarth, N. A. W.; Matthews, G. E.; Tackett, A. R.; Dunning, R. B.

    1998-03-01

    The PAW method for density functional electronic structure calculations developed by Blöchl (Phys. Rev. B 50, 17953 (1994)) and also used by our group (Phys. Rev. B 55, 2005 (1997)) has the numerical advantages of a pseudopotential technique while retaining the physics of an all-electron formalism. We describe a new method for generating the necessary set of atom-centered projector and basis functions, based on choosing the projector functions from a set of orthogonal polynomials multiplied by a localizing weight factor. Numerical benefits of the new scheme result from having direct control of the shape of the projector functions and from the use of a simple repulsive local potential term to eliminate "ghost state" problems, which can haunt calculations of this kind. We demonstrate the method by calculating the cohesive energies of CaF2 and Mo and the density of states of CaMoO4, which shows detailed agreement with LAPW results over a 66 eV range of energy including upper core, valence, and conduction band states.
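
    The ansatz (orthogonal polynomials times a localizing weight that vanishes at the augmentation radius) can be sketched numerically. The particular weight function below is an illustrative choice, not the paper's:

```python
import numpy as np

def projector_functions(rc, nmax, r):
    """Sketch of the generation scheme: projector functions chosen as
    orthogonal (here Legendre) polynomials on [0, rc], multiplied by a
    localizing weight that vanishes smoothly at the augmentation radius rc."""
    x = 2.0 * r / rc - 1.0                               # map [0, rc] onto [-1, 1]
    weight = np.where(r < rc, (1.0 - (r / rc) ** 2) ** 2, 0.0)
    return [np.polynomial.legendre.Legendre.basis(n)(x) * weight
            for n in range(nmax)]
```

    Because the polynomial degree and the weight are chosen directly, the shape of each projector function is under explicit control, which is the numerical benefit the abstract highlights.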

  15. Non-moving Hadamard matrix diffusers for speckle reduction in laser pico-projectors

    NASA Astrophysics Data System (ADS)

    Thomas, Weston; Middlebrook, Christopher

    2014-12-01

    Personal electronic devices such as cell phones and tablets continue to decrease in size while the number of features and add-ons keep increasing. One particular feature of great interest is an integrated projector system. Laser pico-projectors have been considered, but the technology has not been developed enough to warrant integration. With new advancements in diode technology and MEMS devices, laser-based projection is currently being advanced for pico-projectors. A primary problem encountered when using a pico-projector is coherent interference known as speckle. Laser speckle can lead to eye irritation and headaches after prolonged viewing. Diffractive optical elements known as diffusers have been examined as a means to lower speckle contrast. This paper presents a binary diffuser known as a Hadamard matrix diffuser. Using two static in-line Hadamard diffusers eliminates the need for rotation or vibration of the diffuser for temporal averaging. Two Hadamard diffusers were fabricated and contrast values measured showing good agreement with theory and simulated values.

  16. Cross-sensor comparisons between Landsat 5 TM and IRS-P6 AWiFS and disturbance detection using integrated Landsat and AWiFS time-series images

    USGS Publications Warehouse

    Chen, Xuexia; Vogelmann, James E.; Chander, Gyanesh; Ji, Lei; Tolk, Brian; Huang, Chengquan; Rollins, Matthew

    2013-01-01

    Routine acquisition of Landsat 5 Thematic Mapper (TM) data was discontinued recently, and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) has an ongoing problem with the scan line corrector (SLC), which creates spatial gaps in the images it acquires. Since temporal and spatial discontinuities of Landsat data are now imminent, it is important to investigate other satellite data that could potentially replace Landsat data. We thus cross-compared two near-simultaneous images obtained from Landsat 5 TM and the Indian Remote Sensing (IRS)-P6 Advanced Wide Field Sensor (AWiFS), both captured on 29 May 2007 over Los Angeles, CA. TM and AWiFS reflectances were compared for the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands, as well as the normalized difference vegetation index (NDVI), based on manually selected polygons in homogeneous areas. All R2 values of linear regressions were found to be higher than 0.99. The temporally invariant cluster (TIC) method was used to calculate the NDVI correlation between the TM and AWiFS images. The NDVI regression line derived from selected polygons passed through several invariant cluster centres of the TIC density maps and demonstrated that both the scene-dependent polygon regression method and the TIC method can generate accurate radiometric normalization. A scene-independent normalization method was also used to normalize the AWiFS data. Image agreement assessment demonstrated that the scene-dependent normalization using homogeneous polygons provided slightly higher accuracy values than those obtained by the scene-independent method. Finally, the non-normalized and relatively normalized ‘Landsat-like’ AWiFS 2007 images were integrated into 1984 to 2010 Landsat time-series stacks (LTSS) for disturbance detection using the Vegetation Change Tracker (VCT) model.
Both scene-dependent and scene-independent normalized AWiFS data sets could generate disturbance maps similar to those generated using the LTSS data set, and their kappa coefficients were higher than 0.97. These results indicate that AWiFS can be used instead of Landsat data to detect multitemporal disturbance in the event of Landsat data discontinuity.
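
    The polygon-based, scene-dependent normalization amounts to a per-band linear fit. A minimal sketch with hypothetical polygon means; the near-1:1 relation reported above (R2 > 0.99) is built in here by construction:

```python
import numpy as np

def fit_normalization(awifs_vals, tm_vals):
    """Least-squares gain/offset mapping AWiFS reflectance to TM-like values,
    fit from the means of manually selected homogeneous polygons."""
    slope, intercept = np.polyfit(awifs_vals, tm_vals, 1)
    return slope, intercept

# Hypothetical polygon mean reflectances for one band (e.g. NIR)
awifs = np.array([0.08, 0.15, 0.22, 0.31, 0.42, 0.55])
tm = 0.97 * awifs + 0.012   # assumed near-1:1 cross-sensor relation

gain, offset = fit_normalization(awifs, tm)
normalized = gain * awifs + offset   # 'Landsat-like' AWiFS values
```

    Applying the fitted gain and offset to a full AWiFS band produces the relatively normalized, Landsat-like image that is then stacked into the time series.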

  17. Design and characterization of an ultraresolution seamlessly tiled display for data visualization

    NASA Astrophysics Data System (ADS)

    Bordes, Nicole; Bleha, William P.; Pailthorpe, Bernard

    2003-09-01

    The demand for more pixels in digital displays is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers one solution to augment the pixel capacity of a display. However, problems of color and illumination uniformity across projectors need to be addressed, as well as the computer software required to drive such devices. In this paper we present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. The composite image on a 3 x 1 array is 3840 by 1024 pixels with a resolution of about 80 dpi. The system preserves desktop resolution, is compact and can fit in a normal room or laboratory. A fiber optic beam splitting system and a single set of red, green and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted individually to set or change characteristics such as contrast, brightness or gamma curves. The projectors were matched carefully and photometric variations were corrected, leading to a seamless tiled image. Photometric measurements were performed to characterize the display and losses through the optical paths, and are reported here. This system is driven by a small PC computer cluster fitted with graphics cards and is running Linux. The Chromium API can be used for tiling graphics tiles across the display and interfacing to users' software applications. There is potential for scaling the design to accommodate larger arrays, up to 4x5 projectors, increasing display system capacity to 50 Megapixels. Further increases, beyond 100 Megapixels can be anticipated with new generation D-ILA chips capable of projecting QXGA (2k x 1.5k), with ongoing evolution as QUXGA (4k x 2k) becomes available.

  18. MONET: multidimensional radiative cloud scene model

    NASA Astrophysics Data System (ADS)

    Chervet, Patrick

    1999-12-01

    All cloud fields exhibit variable structures (bulges) and heterogeneities in water distributions. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for: MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM -- Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM -- Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for several scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types)... For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or the ground are taken into account. This code is useful for studying heterogeneity effects on satellite data for various cloud types and spatial resolutions, and for determining specifications of new imaging sensors.

  19. Texture generation for use in synthetic infrared scenes

    NASA Astrophysics Data System (ADS)

    Ota, Clem Z.; Rollins, John M.; Bleiweiss, Max P.

    1996-06-01

    In the process of creating synthetic scenes for use in simulations/visualizations, texture is used as a surrogate for 'high' spatial definition. For example, if one were to measure the location of every blade of grass in a lawn and all of the characteristics of each blade, then a scene composed of that lawn would be expected to appear 'real'; however, because this process is excruciatingly laborious, various techniques have been devised to place the required details in the scene through texturing. Experience gained during the recent Smart Weapons Operability Enhancement Joint Test and Evaluation (SWOE JT&E) has shown the need for higher-fidelity texturing algorithms and a better parameterization of those in use. In this study, four aspects of the problem have been analyzed: texture extraction, texture insertion, texture metrics, and texture creation algorithms. The results of extracting real texture from an image, measuring it with a variety of metrics, and generating similar texture with three different algorithms are presented. These same metrics can be used to define clutter and to compare 'real' and synthetic (or artificial) scenes in an objective manner.
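    The specific metrics used in the study are not listed in this abstract. As a hedged illustration of the general idea, a few generic first- and second-order texture statistics can be computed for a patch and differenced between a real and a synthetic patch:

```python
import numpy as np

def texture_metrics(patch):
    """A few simple texture statistics for an image patch.
    These are generic illustrative metrics, not the SWOE study's."""
    p = patch.astype(float)
    gy, gx = np.gradient(p)               # local intensity gradients
    return {
        "mean": p.mean(),
        "std": p.std(),                   # first-order roughness
        "grad_energy": np.mean(gx**2 + gy**2),  # high-frequency content
    }

def compare_textures(real, synthetic):
    """Euclidean distance between the metric vectors of two patches;
    smaller values mean the synthetic texture better matches the real one."""
    a, b = texture_metrics(real), texture_metrics(synthetic)
    return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5
```

    A perfectly flat patch scores zero on both roughness metrics, so its distance to itself is zero, while any added structure increases the distance.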

  20. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian

    2018-02-01

    For the uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation the focal plane array (FPA) receives is a crucial factor that affects image quality. Ambient temperature fluctuation as well as system power consumption can change the FPA temperature and the radiation characteristics inside the IR camera, further degrading imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity derived from the variation of ambient temperature. Our method combines a calibration-based method with the properties of a scene-based method to obtain correction parameters under different ambient temperature conditions, so that IR camera performance is less influenced by ambient temperature fluctuation or system power consumption. The calibration process is carried out in a temperature chamber with slowly changing ambient temperature and a blackbody as a uniform radiation source. A sufficient number of uniform images are captured and the gain coefficients are calculated during this period. In practical application, the offset parameters are then calculated via the least squares method from the gain coefficients, the captured uniform images, and the actual scene, yielding a corrected output through the gain coefficients and offset parameters. The performance of the proposed method is evaluated on realistic IR images and compared with two existing methods. The images used in the experiments were obtained by a 384 × 288 pixel uncooled LWIR camera. Results show that the proposed method can adaptively update correction parameters as the actual target scene changes and is more stable under temperature fluctuation than the other two methods.
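    A minimal sketch of the two-stage idea described above: per-pixel gains derived once from uniform blackbody frames, and offsets re-estimated from captured frames afterwards. The arithmetic below is a simplified stand-in for the paper's least-squares formulation, not a reproduction of it, and all array values are illustrative:

```python
import numpy as np

def calibrate_gain(cold, hot):
    """Per-pixel gain from two uniform blackbody frames: normalize each
    pixel's response slope to the array-average slope (two-point style)."""
    slope = hot - cold                 # per-pixel response difference
    return slope.mean() / slope       # gain that equalizes responses

def update_offset(raw_frames, gain):
    """Offset update given fixed gains: choose the offset that makes the
    gain-corrected temporal mean of each pixel match the global mean."""
    mean_resp = np.mean([gain * f for f in raw_frames], axis=0)
    return mean_resp.mean() - mean_resp

def correct(raw, gain, offset):
    """Standard linear NUC model: corrected = gain * raw + offset."""
    return gain * raw + offset
```

    With a simulated FPA whose pixels have random gains and offsets, a uniform scene comes out uniform after correction, which is the behavior the calibration is meant to guarantee.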

  1. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing forest damage in Central Europe has created demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set, and processing this multisource data set leads to multiple interpretation results for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  2. The visible, near-infrared and short wave infrared channels of the EarthCARE multi-spectral imager

    NASA Astrophysics Data System (ADS)

    Doornink, J.; de Goeij, B.; Marinescu, O.; Meijer, E.; Vink, R.; van Werkhoven, W.; van't Hof, A.

    2017-11-01

    The EarthCARE satellite mission objective is the observation of clouds and aerosols from low Earth orbit. The payload includes two active remote sensing instruments, the W-band Cloud Profiling Radar (CPR) and the ATLID LIDAR. These are supported by the passive instruments, the Broadband Radiometer (BBR) and the Multispectral Imager (MSI), which provide the radiometric and spatial context of the ground scene being probed. The MSI will form Earth images over a swath width of 150 km, imaging the Earth's atmosphere in 7 spectral bands. The MSI instrument consists of two parts: the Visible, Near infrared and Short wave infrared (VNS) unit, and the Thermal InfraRed (TIR) unit. The subject of this paper is the VNS unit. In the VNS optical unit, the ground scene is imaged in four spectral bands onto four linear detectors via separate optical channels. Driving requirements for the VNS instrument performance are the spectral sensitivity including out-of-band rejection, the MTF, co-registration, and the inter-channel radiometric accuracy. The radiometric accuracy of the VNS is supported by in-orbit calibration, in which direct solar radiation is fed into the instrument via a set of quasi volume diffusers. The compact optical concept with challenging stability requirements, together with the strict thermal constraints, has led to a sophisticated opto-mechanical design. This paper, the second of a sequence of two on the Multispectral Imager, describes the VNS instrument concept chosen to fulfil the performance requirements within the resource and accommodation constraints.

  3. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate the 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply maximum min-SW gray code to reduce the indirect illumination effects of the specular surface. We also analyze the errors of the maximum min-SW gray code compared with the conventional gray code, which shows that the maximum min-SW gray code is significantly superior in reducing indirect illumination effects. To achieve sub-pixel accuracy, we simultaneously project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, high-frequency patterns are susceptible to decoding errors, and incorrect decoding of high-frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low-frequency maximum min-SW gray code with the high-frequency phase shifting code, which achieves dense 3D reconstruction of specular surfaces. Our contributions include: (i) a complete setup of a structured light based 3D scanning system; (ii) a novel combination technique of the maximum min-SW gray code and phase shifting code, in which phase shifting is first decoded with sub-pixel accuracy and the maximum min-SW gray code is then used to resolve the phase ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high-quality dense reconstruction of scenes from a small number of images. Qualitative and quantitative comparisons are performed to demonstrate the advantages of our new combined coding method.
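    The combination of the two codes can be illustrated with conventional building blocks: a standard 4-step phase-shifting decoder for the sub-pixel wrapped phase, and ordinary reflected-binary gray decoding for the period index that resolves the 2π ambiguity. The authors' maximum min-SW variant differs in its codeword ordering, which this sketch does not reproduce:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """4-step phase shifting: intensities captured at phase shifts of
    0, 90, 180, and 270 degrees give the wrapped phase in (-pi, pi]."""
    return np.arctan2(I3 - I1, I0 - I2)

def gray_to_index(bits):
    """Decode a reflected-binary gray-code bit vector (MSB first)
    into the integer period index of the sinusoidal pattern."""
    idx = 0
    for b in bits:
        idx = (idx << 1) | (b ^ (idx & 1))  # binary bit = gray bit XOR previous bit
    return idx

def unwrap(phi_wrapped, period_index):
    """Absolute phase: the gray-code period index supplies the missing
    integer number of 2*pi cycles."""
    return phi_wrapped + 2 * np.pi * period_index
```

    For intensities I_k = A + B·cos(φ + k·π/2), the arctangent recovers φ exactly, and the decoded gray-code index shifts it to the correct fringe period.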

  4. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    NASA Astrophysics Data System (ADS)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus presented due to the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-laplacian pattern index, which was specifically developed for the present work.

  5. Overhead Projector Demonstrations.

    ERIC Educational Resources Information Center

    Kolb, Doris, Ed.

    1987-01-01

    Describes several chemistry demonstrations that use an overhead projector. Some of the demonstrations deal with electrochemistry, and another deals with the reactions of nonvolatile immiscible liquids in water. (TW)

  6. The s-Ordered Fock Space Projectors Gained by the General Ordering Theorem

    NASA Astrophysics Data System (ADS)

    Farid, Shähandeh; Mohammad, Reza Bazrafkan; Mahmoud, Ashrafi

    2012-09-01

    Employing the general ordering theorem (GOT), operational methods and incomplete 2-D Hermite polynomials, we derive the t-ordered expansion of Fock space projectors. Using the result, the general ordered form of the coherent state projectors is obtained. This indeed gives a new integration formula regarding incomplete 2-D Hermite polynomials. In addition, the orthogonality relation of the incomplete 2-D Hermite polynomials is derived to resolve Dattoli's failure.

  7. New Planetariums For Old

    NASA Astrophysics Data System (ADS)

    Peterson, David

    2005-11-01

    The audio and visual capabilities of the planetarium at Francis Marion University were upgraded in Fall 2004 to incorporate three Barco CRT projectors and surround sound. Controlled by the Astro-FX media manager system developed by Bowen Technovation, the projectors project onto the 33-foot dome installed in 1978 for the Spitz 512 Star projector. The significant additional capabilities of the new combined systems will be presented together with a review of the planetarium renovation procedure.

  8. Tampa Bay, St. Petersburg, Florida, USA

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This photo of Tampa Bay and St. Petersburg, Florida (28.0N, 82.5W) is one of a pair (see STS049-92-017) comparing the differences between color film and color infrared film. In the color image above, the scene appears as it would to the human eye. The city of St. Petersburg can be seen even though atmospheric haze obscures the image. Color infrared film filters out the haze and portrays vegetation as shades of red or pink.

  9. Tampa Bay, St. Petersburg, Florida, USA

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This photo of Tampa Bay and St. Petersburg, Florida (28.0N, 82.5W) is one of a pair (see STS049-97-020) comparing the differences between color film and color infrared film. In the color image above, the scene appears as it would to the human eye. The city of St. Petersburg can be seen even though atmospheric haze obscures the image. Color infrared film filters out the haze and portrays vegetation as shades of red or pink.

  10. Infrared Cephalic-Vein to Assist Blood Extraction Tasks: Automatic Projection and Recognition

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; Gesto, M.; Riveiro, B.; González-Aguilera, D.

    2017-05-01

    The thermal infrared band is not commonly used in photogrammetric and computer vision algorithms, mainly due to the low spatial resolution of this type of imagery. However, this band captures sub-superficial information, extending the capabilities of the visible bands to new applications. This fact is especially important in biomedicine and biometrics, allowing the geometric characterization of interior organs and pathologies with photogrammetric principles, as well as automatic identification and labelling using computer vision algorithms. This paper presents advances in close-range photogrammetry and computer vision applied to thermal infrared imagery, with the final application of Augmented Reality in order to widen its use in the biomedical field. In this case, a thermal infrared image of the arm is acquired and simultaneously projected back onto the arm, together with the identification label of the cephalic vein. This way, blood analysts are assisted in finding the vein for blood extraction, especially in those cases where identification by the human eye is a complex task. Vein recognition is performed based on the Gaussian temperature distribution in the area of the vein, while the calibration between projector and thermographic camera is developed through feature extraction and pattern recognition. The method is validated through its application to a set of volunteers of different ages and genders, in such a way that different conditions of body temperature and vein depth are covered to establish the applicability and reproducibility of the method.
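    As an illustration of the Gaussian-temperature-distribution idea (the paper's actual recognition pipeline is not reproduced here, and the template width `sigma` is an assumed value), a vein position along a 1-D thermal profile can be located by zero-mean Gaussian template matching:

```python
import numpy as np

def detect_vein(profile, sigma=3.0):
    """Locate a vein along a 1-D thermal profile as the position where
    the profile best matches a zero-mean Gaussian bump.
    sigma (in pixels) is an assumed vein half-width, not a paper value."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    template = np.exp(-x**2 / (2 * sigma**2))
    template -= template.mean()            # make template insensitive to baseline
    score = np.correlate(profile - profile.mean(), template, mode="same")
    return int(np.argmax(score))           # pixel index of best match
```

    On a synthetic skin-temperature profile with a Gaussian warm bump over the vein, the detector returns the bump's center.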

  11. ASTER First Views of Red Sea, Ethiopia - Thermal-Infrared TIR Image monochrome

    NASA Image and Video Library

    2000-03-11

    ASTER succeeded in acquiring this image at night, which is something Visible/Near Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne TIR multi-band sensor in history. Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m. http://photojournal.jpl.nasa.gov/catalog/PIA02452

  12. Detection of latent bloodstains beneath painted surfaces using reflected infrared photography.

    PubMed

    Farrar, Andrew; Porter, Glenn; Renshaw, Adrian

    2012-09-01

    Bloodstain evidence is a highly valued form of physical evidence commonly found at scenes involving violent crimes. However, painting over bloodstains will often conceal this type of evidence. There is limited research in the scientific literature that describes methods of detecting painted-over bloodstains. This project employed a modified digital single-lens reflex camera to investigate the effectiveness of infrared (IR) photography in detecting latent bloodstain evidence beneath a layer or multiple layers of paint. A qualitative evaluation was completed by comparing images taken of a series of samples using both IR and standard (visible light) photography. Further quantitative image analysis was used to verify the findings. Results from this project indicate that bloodstain evidence can be detected beneath up to six layers of paint using reflected IR; however, the results vary depending on the characteristics of the paint. This technique provides crime scene specialists with a new field method to assist in locating, visualizing, and documenting painted-over bloodstain evidence. © 2012 American Academy of Forensic Sciences.

  13. A novel design of membrane mirror with small deformation and imaging performance analysis in infrared system

    NASA Astrophysics Data System (ADS)

    Zhang, Shuqing; Wang, Yongquan; Zhi, Xiyang

    2017-05-01

    A method of diminishing the shape error of a membrane mirror is proposed in this paper. The inner inflating pressure is considerably decreased by adopting a pre-shaped membrane, and small deformation of the membrane mirror with greatly reduced shape error is thereby achieved. First, a finite element model of the pre-shaped membrane is built on the basis of its mechanical properties. Accurate shape data under different pressures can then be acquired by iteratively calculating the node displacements of the model. The shape data are used to build deformed reflecting surfaces for simulative analysis in ZEMAX. Finally, ground-based imaging experiments on 4-bar targets and a natural scene are conducted. Experimental results indicate that the MTF of the infrared system can reach 0.3 at a high spatial resolution of 10 lp/mm, and texture details of the natural scene are well presented. The method can provide theoretical basis and technical support for applications in lightweight optical components with ultra-large apertures.

  14. High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.

    2012-07-01

    The recording of high resolution point clouds with sub-mm resolution is a demanding and cost intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach, where techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industry cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus an efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect, whose strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high resolution point cloud with high accuracy for each shot. For the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.

  15. A simple and high-resolution stereolithography-based 3D bioprinting system using visible light crosslinkable bioinks.

    PubMed

    Wang, Zongjie; Abdulla, Raafa; Parker, Benjamin; Samanipour, Roya; Ghosh, Sanjoy; Kim, Keekyoung

    2015-12-22

    Bioprinting is a rapidly developing technique for biofabrication. Because of its high resolution and the ability to print living cells, bioprinting has been widely used in artificial tissue and organ generation as well as microscale living cell deposition. In this paper, we present a low-cost stereolithography-based bioprinting system that uses visible light crosslinkable bioinks. This low-cost stereolithography system was built around a commercial projector with a simple water filter to block harmful infrared radiation from the projector. Visible light crosslinking was achieved by using a mixture of polyethylene glycol diacrylate (PEGDA) and gelatin methacrylate (GelMA) hydrogel with an eosin Y based photoinitiator. Three different concentrations of hydrogel mixtures (10% PEG, 5% PEG + 5% GelMA, and 2.5% PEG + 7.5% GelMA, all w/v) were studied with the presented system. The mechanical properties and microstructure of the developed bioink were measured and discussed in detail. Several cell-free hydrogel patterns were generated to demonstrate the resolution of the system. Experimental results with NIH 3T3 fibroblast cells show that this system can produce a highly vertical 3D structure with 50 μm resolution and 85% cell viability for at least five days. The developed system provides a low-cost visible light stereolithography solution and has the potential to be widely used in tissue engineering and bioengineering for microscale cell patterning.

  16. Motion detection and compensation in infrared retinal image sequences.

    PubMed

    Scharcanski, J; Schardosim, L R; Santos, D; Stuchi, A

    2013-01-01

    Infrared image data captured by non-mydriatic digital retinography systems are often used in the diagnosis and treatment of diabetic macular edema (DME). Infrared illumination is less aggressive to the patient's retina, and retinal studies can be carried out without pupil dilation. However, sequences of infrared eye fundus images of static scenes tend to present pixel intensity fluctuations in time, and noise and background illumination changes pose a challenge to most motion detection methods proposed in the literature. In this paper, we present a retinal motion detection method that is adaptive to background noise and illumination changes. Our experimental results indicate that this method is suitable for detecting retinal motion in infrared image sequences and for compensating the detected motion, which is relevant in retinal laser treatment systems for DME. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Autocalibration of multiprojector CAVE-like immersive environments.

    PubMed

    Sajadi, Behzad; Majumder, Aditi

    2012-03-01

    In this paper, we present the first method for the geometric autocalibration of multiple projectors on a set of CAVE-like immersive display surfaces including truncated domes and 4- or 5-wall CAVEs (three side walls, floor, and/or ceiling). All such surfaces can be categorized as swept surfaces, and multiple projectors can be registered on them using a single uncalibrated camera without using any physical markers on the surface. Our method can also handle nonlinear distortion in the projectors, common in compact setups where a short throw lens is mounted on each projector. Further, when the whole swept surface is not visible from a single camera view, we can register the projectors using multiple panned and tilted views of the same camera. Thus, our method scales well with different sizes and resolutions of the display. Since we recover the 3D shape of the display, we can achieve registration that is correct from any arbitrary viewpoint, appropriate for head-tracked single-user virtual reality systems. We can also achieve wallpapered registration, more appropriate for multiuser collaborative explorations. Though much more immersive than common surfaces like planes and cylinders, general swept surfaces are used today only for niche display environments. Even the more popular 4- or 5-wall CAVE is treated as a piecewise planar surface for calibration purposes, and hence projectors are not allowed to overlap across the corners. Our method opens up the possibility of using such swept surfaces to create more immersive VR systems without compromising the simplicity of having a completely automatic calibration technique. Such calibration allows completely arbitrary positioning of the projectors in a 5-wall CAVE, without respecting the corners.

  18. Aberration analysis of the putative projector for Lorenzo Lotto's Husband and wife: image analysis through computer ray-tracing

    NASA Astrophysics Data System (ADS)

    Robinson, Dirk; Stork, David G.

    2008-02-01

    A recent theory claims that the late-Italian Renaissance painter Lorenzo Lotto secretly built a concave-mirror projector to project an image of a carpet onto his canvas and trace it during the execution of Husband and wife (c. 1543). Key evidence adduced to support this claim includes "perspective anomalies" and changes in "magnification" that the theory's proponents ascribe to Lotto refocusing his projector to overcome its limitations in depth of field. We find, though, that there are important geometrical constraints upon such a putative optical projector not incorporated into the proponents' analyses, and that when properly included, the argument for the use of optics loses its force. We used Zemax optical design software to create a simple model of Lotto's studio and putative projector, and incorporated the optical properties proponents inferred from geometrical properties of the depicted carpet. Our central contribution derives from including the 116-cm-wide canvas screen; we found that this screen forces the incident light to strike the concave mirror at large angles (>= 15°) and that this, in turn, means that the projected image would reveal severe off-axis aberrations, particularly astigmatism. Such aberrations are roughly as severe as the defocus blur claimed to have led Lotto to refocus the projector. In short, we find that the projected images would not have gone in and out of focus in the way claimed by proponents, a result that undercuts their claim that Lotto used a projector for this painting. We speculate on the value of further uses of sophisticated ray-tracing analyses in the study of fine arts.

  19. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference capability, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a digital scene generation method. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated. The final dynamic scenes are highly realistic and run in real time, at frame rates up to 100 Hz. By saving all of the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.
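    Radiation-physics-based scene generation of this kind starts from blackbody radiance. A minimal sketch of Planck's law and in-band integration for, e.g., MWIR (3-5 μm) and LWIR (8-12 μm) bands follows; it is a generic physics illustration, not the paper's scene model:

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance in W sr^-1 m^-2 um^-1 for a
    wavelength in micrometres and a temperature in kelvin."""
    lam = wavelength_um * 1e-6
    B = (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))
    return B * 1e-6            # convert from per metre to per micrometre

def band_radiance(lo_um, hi_um, temp_k, n=200):
    """In-band radiance by trapezoidal integration over the band."""
    lam = np.linspace(lo_um, hi_um, n)
    b = planck_radiance(lam, temp_k)
    return float(np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(lam)))
```

    At 300 K the LWIR band carries far more radiance than the MWIR band, and the spectral peak sits near the Wien-law wavelength of about 9.7 μm, which is why both bands matter for scene simulation of ambient-temperature backgrounds.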

  20. Overhead Projector Demonstrations.

    ERIC Educational Resources Information Center

    Kolb, Doris, Ed.

    1989-01-01

    Described are demonstrations of the optical activity of two sugar solutions, and the effects of various substituents on acid strength using an overhead projector. Materials and procedures for each demonstration are discussed. (CW)

  1. Bloodstain detection and discrimination impacted by spectral shift when using an interference filter-based visible and near-infrared multispectral crime scene imaging system

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.

    2018-03-01

    Bloodstain detection and discrimination from nonblood substances on various substrates are critical in forensic science, as bloodstains are a crucial source for confirmatory DNA tests. Conventional bloodstain detection methods often involve time-consuming sample preparation, a chance of harm to investigators, the possibility of destruction of blood samples, and acquisition of too little data at crime scenes, whether in the field or in the laboratory. An imaging method has the advantages of being nondestructive, noncontact, and real-time, and of covering a large field of view. The abundant spectral information provided by multispectral imaging makes it a potential presumptive bloodstain detection and discrimination method. This article proposes an interference filter (IF) based, area-scanning, three-spectral-band crime scene imaging system for forensic bloodstain detection and discrimination. The impact of large angles of view on the spectral shift of the calibrated IFs is determined for both detecting and discriminating bloodstains from visually similar substances on multiple substrates. Spectral features in the visible and near-infrared portion of the spectrum are employed via the relative band depth method. This study shows that 1 ml bloodstains on black felt, gray felt, red felt, white cotton, white polyester, and raw wood can be detected, and that bloodstains on these substrates can be discriminated from cola, coffee, ketchup, orange juice, red wine, and green tea.
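    The relative band depth measure referred to above is commonly computed against a linear continuum drawn between two shoulder wavelengths on either side of an absorption feature. A sketch with placeholder wavelengths (not those of the cited system):

```python
import numpy as np

def relative_band_depth(wl, refl, shoulder_lo, center, shoulder_hi):
    """Relative band depth at an absorption feature:
    1 - R_center / R_continuum, where the continuum is the straight
    line between the reflectances at the two shoulder wavelengths.
    The wavelength choices passed in are illustrative placeholders."""
    r = np.interp([shoulder_lo, center, shoulder_hi], wl, refl)
    t = (center - shoulder_lo) / (shoulder_hi - shoulder_lo)
    continuum = (1 - t) * r[0] + t * r[2]     # continuum value at center
    return 1.0 - r[1] / continuum
```

    A spectrum with a dip at the center band yields a positive depth, while a flat spectrum yields zero, which is what lets the measure separate an absorbing substance from a visually similar non-absorbing one.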

  2. The algebraic theory of latent projectors in lambda matrices

    NASA Technical Reports Server (NTRS)

    Denman, E. D.; Leyva-Ramos, J.; Jeon, G. J.

    1981-01-01

    Multivariable systems such as a finite-element model of vibrating structures, control systems, and large-scale systems are often formulated in terms of differential equations which give rise to lambda matrices. The present investigation is concerned with the formulation of the algebraic theory of lambda matrices and the relationship of latent roots, latent vectors, and latent projectors to the eigenvalues, eigenvectors, and eigenprojectors of the companion form. The chain rule for latent projectors and eigenprojectors for the repeated latent root or eigenvalues is given.

  3. 40. View of dual projector system located in MWOC facility ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    40. View of dual projector system located in MWOC facility in transmitter building no. 102 by Bessler Company. System used to project images in MWOC on backlit screen system with fiber optic electro/mechanical system linked to computer output to indicate information on screen linked with display from projector system. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  4. Landsat-8: Status and on-orbit performance

    USGS Publications Warehouse

    Markham, Brian L; Barsi, Julia A.; Morfitt, Ron; Choate, Michael J.; Montanaro, Matthew; Arvidson, Terry; Irons, James R.

    2015-01-01

    Landsat 8 and its two Earth imaging sensors, the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) have been operating on-orbit for 2 ½ years. Landsat 8 has been acquiring substantially more images than initially planned, typically around 700 scenes per day versus a 400 scenes per day requirement, acquiring nearly all land scenes. Both the TIRS and OLI instruments are exceeding their SNR requirements by at least a factor of 2 and are very stable, degrading by at most 1% in responsivity over the mission to date. Both instruments have 100% operable detectors covering their cross track field of view using the redundant detectors as necessary. The geometric performance is excellent, meeting or exceeding all performance requirements. One anomaly occurred with the TIRS Scene Select Mirror (SSM) encoder that affected its operation, though by switching to the side B electronics, this was fully recovered. The one challenge is with the TIRS stray light, which affects the flat fielding and absolute calibration of the TIRS data. The error introduced is smaller in TIRS band 10. Band 11 should not currently be used in science applications.

  5. Clutter characterization within segmented hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Kacenjar, Steve T.; Hoffberg, Michael; North, Patrick

    2007-10-01

    Use of a Mean Class Propagation Model (MCPM) has been shown to be an effective approach for the expedient propagation of hyperspectral data scenes through the atmosphere. In this approach, real scene data are spatially subdivided into regions of common spectral properties. Each sub-region, which we call a class, possesses two important attributes: (1) the mean spectral radiance and (2) the spectral covariance. Use of these attributes can significantly improve the throughput performance of computing systems over conventional pixel-based methods. However, this approach assumes that background clutter can be approximated as having multivariate Gaussian distributions. Under such conditions, covariance propagations can be effectively performed from the ground through the atmosphere. This paper explores this basic assumption using real-scene Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data and examines how partitioning the scene into smaller and smaller segments influences local clutter characterization. It also presents a clutter characterization metric that helps explain the migration of the magnitude of statistical clutter from the parent class to the child sub-class populations. It is shown that such a metric can be directly related to an approximate invariant between the parent class and its child classes.

  6. Use of an Infrared Thermometer with Laser Targeting in Morphological Scene Change Detection for Fire Detection

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Singh, Harjap; Grindley, Josef E.

    2013-06-01

    Morphological Scene Change Detection (MSCD) is a process typically tasked with detecting relevant changes in a guarded environment for security applications. It can be implemented on a Field Programmable Gate Array (FPGA) as a combination of binary differences based around exclusive-OR (XOR) gates, mathematical morphology, and a crucial threshold setting. The technique is robust and can be applied to many areas, from leak detection to movement tracking, and can be further augmented to perform additional functions such as watermarking and facial detection. Fire is a severe problem, and in areas where traditional fire alarm systems are not installed or feasible, it may not be detected until it is too late. Shown here is a way of augmenting the traditional Morphological Scene Change Detector (MSCD) with a temperature sensor: if both the temperature sensor and the scene change detector are triggered, there is a high likelihood that fire is present. Such a system could be integrated into autonomous mobile robots so that they could undertake not only security patrols but also fire detection.
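
    The XOR-difference-plus-morphology pipeline described above can be sketched in a few lines of Python. The thresholds, the 3×3 structuring element, and the combined fire-decision rule below are illustrative assumptions, not the paper's FPGA implementation.

```python
import numpy as np
from scipy import ndimage

def mscd_fire_alert(frame_a, frame_b, temp_c,
                    diff_thresh=40, temp_thresh=60.0, pixel_frac=0.01):
    """Flag likely fire when both the scene-change and temperature tests fire.

    frame_a, frame_b : 2-D uint8 grayscale frames
    temp_c           : reading from the IR thermometer (deg C)
    All threshold values are illustrative assumptions.
    """
    # Binarise each frame, then take the exclusive-OR of the binary images
    a = frame_a > diff_thresh
    b = frame_b > diff_thresh
    diff = np.logical_xor(a, b)

    # Morphological opening suppresses isolated single-pixel noise
    cleaned = ndimage.binary_opening(diff, structure=np.ones((3, 3)))

    scene_changed = cleaned.mean() > pixel_frac   # the crucial threshold setting
    hot = temp_c > temp_thresh
    return bool(scene_changed) and bool(hot)
```

    In a robot patrol loop, `frame_a` and `frame_b` would be consecutive camera frames and `temp_c` the laser-targeted thermometer reading for the changed region.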

  7. A neighboring structure reconstructed matching algorithm based on LARK features

    NASA Astrophysics Data System (ADS)

    Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa

    2015-11-01

    To address the low contrast and high noise of infrared images, and the randomness and ambient occlusion of the objects within them, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of each local window are modeled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body dataset, indicating a lower false detection rate than conventional methods in complex natural scenes.

  8. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)

    1982-01-01

    Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
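
    The 3.5-standard-deviation screening rule can be expressed directly. This is a minimal sketch that assumes Ch 3 digital counts decrease with decreasing scene temperature, so subvisible cirrus falls in the low-count tail; it omits the authors' noise-correction step.

```python
import numpy as np

def flag_sci_pixels(mir_counts, k=3.5):
    """Flag pixels whose mid-IR digital counts lie more than k standard
    deviations on the cold side of the scene mean (a sketch of the
    screening rule, not the full noise-corrected procedure)."""
    counts = np.asarray(mir_counts, dtype=float)
    mean, std = counts.mean(), counts.std()
    # Cold side = low-count tail, assuming counts increase with temperature.
    return counts < mean - k * std
```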

  9. Overhead Projector Demonstrations.

    ERIC Educational Resources Information Center

    Kolb, Doris, Ed.

    1989-01-01

    Included are demonstrations using the overhead projector to show change in optical rotation with wavelength and aromatic pi cloud availability, and formation of colored charge-transfer complexes. Instructional techniques unique to these topics are discussed. (CW)

  10. Cloud tolerance of remote sensing technologies to measure land surface temperature

    USDA-ARS?s Scientific Manuscript database

    Conventional means to estimate land surface temperature (LST) from space relies on the thermal infrared (TIR) spectral window and is limited to cloud-free scenes. To also provide LST estimates during periods with clouds, a new method was developed to estimate LST based on passive microwave (MW) obse...

  11. Dynamic thermal signature prediction for real-time scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.; Swierkowski, Leszek

    2013-05-01

    At DSTO, a real-time scene generation framework, VIRSuite, has been developed in recent years, within which trials data are predominantly used for modelling the radiometric properties of the simulated objects. Since in many cases the data are insufficient, a physics-based simulator capable of predicting the infrared signatures of objects and their backgrounds has been developed as a new VIRSuite module. It includes transient heat conduction within the materials, and boundary conditions that take into account the heat fluxes due to solar radiation, wind convection and radiative transfer. In this paper, an overview is presented, covering both the steady-state and transient performance.
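
    The transient heat-conduction model with solar, convective and radiative surface fluxes can be illustrated with a one-dimensional explicit finite-difference step. The node layout, the lumped surface balance, and all parameter names below are illustrative assumptions, not the VIRSuite implementation.

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def step_wall_temperature(T, dt, dx, alpha, k,
                          q_solar, h_wind, T_air, eps):
    """Advance a 1-D wall temperature profile (surface node first) by one
    explicit step of the transient heat conduction equation.

    alpha : thermal diffusivity (m^2/s); k : conductivity (W/m/K).
    The deep-wall node T[-1] is held fixed as the boundary condition.
    """
    rho_c = k / alpha          # volumetric heat capacity, J m^-3 K^-1
    Tn = T.copy()
    # Interior nodes: dT/dt = alpha * d2T/dx2
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    # Surface node: absorbed solar flux + wind convection - radiative loss,
    # plus conduction into the wall, lumped over one cell of thickness dx.
    q_net = q_solar + h_wind * (T_air - T[0]) - eps * SIGMA * T[0] ** 4
    Tn[0] = T[0] + dt * (q_net + k * (T[1] - T[0]) / dx) / (rho_c * dx)
    return Tn
```

    Stepping this forward over a diurnal solar-flux profile yields the kind of dynamic thermal signature the abstract describes, provided `dt` satisfies the explicit stability limit `dt <= dx**2 / (2 * alpha)`.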

  12. High resolution satellite observations of mesoscale oceanography in the Tasman Sea, 1978 - 1979

    NASA Technical Reports Server (NTRS)

    Nilsson, C. S.; Andrews, J. C.; Hornibrook, M.; Latham, A. R.; Speechley, G. C.; Scully-Power, P. (Principal Investigator)

    1982-01-01

    Of the nearly 1000 standard infrared photographic images received, 273 images were on computer-compatible tape. It proved necessary to digitally enhance the scene contrast so that the photographic grey scale covered only the select few degrees kelvin appropriate to the scene-specific range of sea surface temperature (SST). Some 178 images were so enhanced. Comparison with sea truth shows that SST, as seen by satellite, provides a good guide to the ocean currents and eddies off East Australia, both in summer and winter. This is in contrast, particularly in summer, to SST mapped by surface survey, which usually lacks the necessary spatial resolution.

  13. Landsat Thematic Mapper Image Mosaic of Colorado

    USGS Publications Warehouse

    Cole, Christopher J.; Noble, Suzanne M.; Blauer, Steven L.; Friesen, Beverly A.; Bauer, Mark A.

    2010-01-01

    The U.S. Geological Survey (USGS) Rocky Mountain Geographic Science Center (RMGSC) produced a seamless, cloud-minimized remotely-sensed image spanning the State of Colorado. Multiple orthorectified Landsat 5 Thematic Mapper (TM) scenes collected during 2006-2008 were spectrally normalized via reflectance transformation and linear regression based upon pseudo-invariant features (PIFS) following the removal of clouds. Individual Landsat scenes were then mosaicked to form a six-band image composite spanning the visible to shortwave infrared spectrum. This image mosaic, presented here, will also be used to create a conifer health classification for Colorado in Scientific Investigations Map 3103. An archive of past and current Landsat imagery exists and is available to the scientific community (http://glovis.usgs.gov/), but significant pre-processing was required to produce a statewide mosaic from this information. Much of the data contained perennial cloud cover that complicated analysis and classification efforts. Existing Landsat mosaic products, typically three band image composites, did not include the full suite of multispectral information necessary to produce this assessment, and were derived using data collected in 2001 or earlier. A six-band image mosaic covering Colorado was produced. This mosaic includes blue (band 1), green (band 2), red (band 3), near infrared (band 4), and shortwave infrared information (bands 5 and 7). The image composite shown here displays three of the Landsat bands (7, 4, and 2), which are sensitive to the shortwave infrared, near infrared, and green ranges of the electromagnetic spectrum. Vegetation appears green in this image, while water looks black, and unforested areas appear pink. The lines that may be visible in the on-screen version of the PDF are an artifact of the export methods used to create this file. The file should be viewed at 150 percent zoom or greater for optimum viewing.
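
    The pseudo-invariant-feature normalization step can be sketched as a per-band linear regression. The function below is a minimal illustration under stated assumptions (a boolean PIF mask and a least-squares fit via `np.polyfit`); it is not the USGS production workflow.

```python
import numpy as np

def normalize_band(subject_band, reference_band, pif_mask):
    """Normalize one band of a subject Landsat scene to a reference scene.

    A gain and offset are fitted by least squares over the pixels of
    pseudo-invariant features (PIFs), then applied to the whole band.
    """
    x = subject_band[pif_mask].astype(float)
    y = reference_band[pif_mask].astype(float)
    gain, offset = np.polyfit(x, y, deg=1)   # y ~ gain * x + offset
    return gain * subject_band.astype(float) + offset
```

    Repeating this per band for each scene before mosaicking removes scene-to-scene radiometric offsets at the seams.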

  14. Projectors get personal

    NASA Astrophysics Data System (ADS)

    Graham-Rowe, Duncan

    2007-12-01

    As the size of handheld gadgets decreases, their displays become harder to view. The solution could lie with integrated projectors that can project crisp, large images from mobile devices onto any chosen surface. Duncan Graham-Rowe reports.

  15. On the use of video projectors for three-dimensional scanning

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.; Robledo-Sanchez, Carlos; Diaz-Gonzalez, Gerardo

    2017-08-01

    Structured light projection is one of the most useful methods for accurate three-dimensional scanning. Video projectors are typically used as the illumination source. However, because video projectors are not designed for structured light systems, some considerations such as gamma calibration must be taken into account. In this work, we present a simple method for gamma calibration of video projectors. First, the experimental fringe patterns are normalized. Then, the samples of the fringe patterns are sorted in ascending order. The sample sorting leads to a simple three-parameter sine curve that is fitted using the Gauss-Newton algorithm. The novelty of this method is that the sorting process removes the effect of the unknown phase. Thus, the resulting gamma calibration algorithm is significantly simplified. The feasibility of the proposed method is illustrated in a three-dimensional scanning experiment.
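
    The sorting trick can be demonstrated compactly: because gamma distortion is monotone, sorting the normalized fringe samples removes the unknown phase and leaves a half-period cosine raised to the power gamma. The sketch below substitutes SciPy's `curve_fit` (Levenberg-Marquardt) for the authors' Gauss-Newton solver and assumes a perfectly normalized fringe, so only the single gamma parameter remains to fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_gamma(observed_fringe):
    """Estimate projector gamma from one normalized fringe image in [0, 1],
    captured with an unknown phase offset."""
    s = np.sort(observed_fringe.ravel())   # sorting removes the unknown phase
    n = np.linspace(0.0, 1.0, s.size)      # normalized sample index

    # Sorted samples of an ideal normalized fringe follow a half-period
    # cosine in the index; gamma distortion raises it to a power.
    def model(n, gamma):
        return (0.5 - 0.5 * np.cos(np.pi * n)) ** gamma

    # Trim the endpoints, which are degenerate under the power model.
    (gamma_hat,), _ = curve_fit(model, n[1:-1], s[1:-1], p0=[1.0])
    return gamma_hat
```

    Once gamma is known, the fringe patterns can be pre-distorted with the inverse power before projection.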

  16. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  17. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer users immersion, interaction and realistic images. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projector setups, calibrated manually or with a camera, to reduce the cost of VR systems without a significant decrease in the visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, increasing costs. We propose a low-cost, scalable, flexible and mobile solution for building complex VR systems that project images onto a variety of arbitrary surfaces, such as planar, cylindrical and spherical screens. This approach combines three key aspects: (1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density on arbitrary surfaces without additional optics; (2) LED lighting technology for energy efficiency and light control; (3) a small physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a few pixels of overlap among them. We present results with eight 7-lumen LED pico-projectors based on the DLP 0.17 HVGA chipset.

  18. Thermal-depth matching in dynamic scene based on affine projection and feature registration

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Jia, Tong; Wu, Chengdong; Li, Yongqiang

    2018-03-01

    This paper studies the construction of a 3D temperature distribution reconstruction system based on depth and thermal infrared information. A traditional calibration method cannot be used directly, because depth and thermal infrared cameras are not sensitive to a color calibration board; this paper therefore designs a calibration board suited to both cameras to complete their calibration. A local feature descriptor for thermal and depth images is also proposed, and a belief propagation matching algorithm is investigated based on spatial affine transformation matching and local feature matching. The 3D temperature distribution model is built by matching the 3D point cloud with the 2D thermal infrared information. Experimental results show that the method can accurately construct the 3D temperature distribution model and has strong robustness.

  19. Detection range enhancement using circularly polarized light in scattering environments for infrared wavelengths

    DOE PAGES

    van der Laan, J. D.; Sandia National Lab.; Scrymgeour, D. A.; ...

    2015-03-13

    We find that for infrared wavelengths there are broad ranges of particle sizes and refractive indices, representative of fog and rain, for which circular polarization persists to longer ranges than linear polarization. Using polarization-tracking Monte Carlo simulations for varying particle size, wavelength, and refractive index, we show that for specific scene parameters circular polarization outperforms linear polarization in maintaining the intended polarization state over large optical depths. This enhancement can be exploited to improve range and target detection in obscurant environments that are important in many critical sensing applications. Specifically, circular polarization persists better than linear for radiation fog in the short-wave infrared, for advection fog in the short-wave infrared and the long-wave infrared, and for the large particle sizes of Sahara dust around the 4 micron wavelength.

  20. Modern Approaches to the Computation of the Probability of Target Detection in Cluttered Environments

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.

    The field of computer vision interacts with fields such as psychology, vision research, machine vision, psychophysics, mathematics, physics, and computer science. The focus of this thesis is new algorithms and methods for the computation of the probability of detection (Pd) of a target in a cluttered scene. The scene can be either a natural visual scene such as one sees with the naked eye (visual), or a scene displayed on a monitor with the help of infrared sensors. The relative clutter and the temperature difference between the target and background (ΔT) are defined and then used to calculate a relative signal-to-clutter ratio (SCR) from which the Pd is calculated for a target in a cluttered scene. It is shown how this definition can include many previous definitions of clutter and ΔT. Next, fuzzy and neural-fuzzy techniques are used to calculate the Pd and it is shown how these methods can give results that have a good correlation with experiment. The experimental design for actually measuring the Pd of a target by observers is described. Finally, wavelets are applied to the calculation of clutter and it is shown how this new definition of clutter based on wavelets can be used to compute the Pd of a target.

  1. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  2. Scanning laser beam displays based on a 2D MEMS

    NASA Astrophysics Data System (ADS)

    Niesten, Maarten; Masood, Taha; Miller, Josh; Tauscher, Jason

    2010-05-01

    The combination of laser light sources and MEMS technology enables a range of display systems such as ultra small projectors for mobile devices, head-up displays for vehicles, wearable near-eye displays and projection systems for 3D imaging. Images are created by scanning red, green and blue lasers horizontally and vertically with a single two-dimensional MEMS. Due to the excellent beam quality of laser beams, the optical designs are efficient and compact. In addition, the laser illumination enables saturated display colors that are desirable for augmented reality applications where a virtual image is used. With this technology, the smallest projector engine for high volume manufacturing to date has been developed. This projector module has a height of 7 mm and a volume of 5 cc. The resolution of this projector is WVGA. No additional projection optics is required, resulting in an infinite focus depth. Unlike with micro-display projection displays, an increase in resolution will not lead to an increase in size or a decrease in efficiency. Therefore future projectors can be developed that combine a higher resolution in an even smaller and thinner form factor with increased efficiencies that will lead to lower power consumption.

  3. Speckle reduction methods in laser-based picture projectors

    NASA Astrophysics Data System (ADS)

    Akram, M. Nadeem; Chen, Xuyuan

    2016-02-01

    Laser sources have long been promised as better projector light sources than traditional lamps or light-emitting diodes (LEDs): they enable projectors with a wide colour gamut for vivid images, high brightness and contrast for the best picture quality, a long, maintenance-free lifetime, mercury-free construction, and low power consumption. A major technological obstacle to using lasers for projection has been the speckle noise caused by the coherent nature of laser light. For speckle reduction, current state-of-the-art solutions rely on moving parts with large physical space demands. Solutions beyond the state of the art, such as integrated optical components, hybrid MOEMS devices, and active phase modulators, need to be developed for compact speckle reduction. In this article, the major methods reported in the literature for speckle reduction in laser projectors are presented and explained. With advances in semiconductor lasers at greatly reduced cost for the red, green and blue primary colours, and with these speckle-reduction methods, it is hoped that lasers will be widely used in projector applications in the near future.

  4. Market trends in the projection display industry

    NASA Astrophysics Data System (ADS)

    Dash, Sweta

    2000-04-01

    The projection display industry represents a multibillion-dollar market that includes four distinct technologies. High-volume consumer products and high-value business products drive the market, with different technologies being used in different application markets. The consumer market is dominated by rear CRT technology, especially in the projection television segment. But rear LCD (liquid crystal display) and rear reflective (DLP, or Digital Light Processing™) televisions are slowly emerging as future competitors to rear CRT projectors. Front CRT projectors are still popular in the high-end home theater market. Front LCD technology and front DLP technology dominate the business market. Traditional light valve technology was the only solution for applications requiring high light outputs, but new three-chip DLP projectors meet the higher light output requirements at a lower price. In the last few years the strongest growth has been in the business market for multimedia presentation applications. This growth was due to the continued increase in display pixel formats, the continued reduction in projector weight, and the improved price/performance ratio. The projection display market will grow at a significant rate during the next five years, driven by the growth in ultraportable (< 10 pound) projectors and the shift in the consumer market to digital and HDTV products.

  5. Grapheme-color synesthesia subtypes: Stable individual differences reflected in posterior alpha-band oscillations.

    PubMed

    Cohen, Michael X; Weidacker, Kathrin; Tankink, Judith; Scholte, H Steven; Rouw, Romke

    2015-01-01

    Grapheme-color synesthesia is a condition in which seeing letters and numbers produces sensations of colors (e.g., the letter R may elicit a sky-blue percept). Recent evidence implicates posterior parietal areas, in addition to lower-level sensory processing regions, in the neurobiological mechanisms involved in synesthesia. Furthermore, these mechanisms seem to differ for "projectors" (synesthetes who report seeing the color "out there in the real world") versus "associators" (synesthetes who report the color to be only an internal experience). Relatively little is known about possible electrophysiological characteristics of grapheme-color synesthesia. Here we used EEG to investigate functional oscillatory differences among associators, projectors, and non-synesthetes. Projectors had stronger stimulus-related alpha-band (~10 Hz) power over posterior parietal electrodes, compared to both associators and non-synesthetes. Posterior alpha activity was not statistically significantly different between associators and non-synesthetes. We also performed a test-retest assessment of the projector-associator score and found strong retest reliability, as evidenced by a correlation coefficient of .85. These findings demonstrate that the projector-associator distinction is highly reliable over time and is related to neural oscillations in the alpha band.

  6. Projector Center: Using an Overhead Projector to Initiate Discussion of Life and Non-Life.

    ERIC Educational Resources Information Center

    Barman, Charles R.

    1982-01-01

    Describes a demonstration and role-playing activity focusing on differences between living/dead, definitions of living/dead, and situations in which active euthanasia could be morally right. (Author/JN)

  7. Overhead Projector Demonstrations: Tilted TOPS: Inclined Plane Projection.

    ERIC Educational Resources Information Center

    Alyea, Hubert N.

    1989-01-01

    The construction and uses of a device to facilitate the use of an overhead projector to show chemical reactions is presented. Materials and instructions for construction as well as reactor vessels are discussed. (CW)

  8. The ship-borne infrared searching and tracking system based on the inertial platform

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhang, Haibo

    2011-08-01

    In modern electronic warfare, when a radar system is jammed or operating in a semi-silent state, guidance precision degrades badly, so weapons that depend on electronic guidance may fail to strike incoming targets accurately. Electro-optical equipment must then compensate for these shortcomings. However, while the radar is providing cueing under interference, ship roll, pitch and yaw can keep the target outside the field of view of the electro-optical device for long periods, so the infrared equipment cannot exploit its advantages, nor can the weapon-control system use "reverse bring" to direct the missile against the incoming target. A conventional ship-borne infrared system is therefore unable to acquire and track incoming targets quickly, and its electro-optical countermeasure capability declines heavily. Here we provide a new control algorithm for semi-automatic searching and infrared tracking based on an inertial navigation platform, which is performing well in our XX infrared electro-optical searching and tracking system. The algorithm has two main steps. First, manual mode switches to automatic search when the radar cueing deviation exceeds the current field of view. Second, when the contrast of the target in the search field satisfies the image-pickup threshold, the rate computed by a constant-acceleration (CA) model least-squares method is fed back to the rate loop, and the infrared measurements are combined to close the tracking loop of the infrared electro-optical system. The algorithm has been verified experimentally: with the algorithm, the target acquisition range under large cueing deviation is 22.3 kilometers, whereas without it the acquisition range falls by 12 kilometers. By combining semi-automatic search with reliable acquisition and tracking, the algorithm improves infrared electro-optical countermeasure capability and reduces target acquisition time when the radar cueing deviation is large.

  9. NCAP projection displays

    NASA Astrophysics Data System (ADS)

    Havens, John R.; Ishioka, J.; Jones, Philip J.; Lau, Aldrich; Tomita, Akira; Asano, A.; Konuma, Nobuhiro; Sato, Kazuhiko; Takemoto, Iwao

    1997-05-01

    Projectors based on polymer-eNCAPsulated liquid crystals can provide bright displays suitable for use in conference rooms with normal lighting. Contrast is generated by light scattering among the droplets, rather than by light absorption with crossed polarizers. We have demonstrated a full-color, compact projector showing 1200 ANSI lumens with 200 watts of lamp power - a light efficiency of 6 lumens/watt. This projector is based on low-voltage NCAP material, highly reflective CMOS die, and matched illumination and projection optics. We will review each of these areas and discuss the integrated system performance.

  10. Predicting top-of-atmosphere radiance for arbitrary viewing geometries from the visible to thermal infrared: generalization to arbitrary average scene temperatures

    NASA Astrophysics Data System (ADS)

    Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.

    2010-08-01

    In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.

  11. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences, intended for hyper-spectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and bandwidths, hyper-spectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, are simulated. The final dynamic scenes are both real-time and realistic, with frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyper-spectral images are consistent with theoretical analysis.
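The grayscale of each simulated band ultimately follows the Planck radiance of the scene materials. A minimal sketch of sampling blackbody spectral radiance at a chosen spectral resolution (the function names and band limits are illustrative, not the paper's code):

```python
import math

C1 = 1.191042972e8   # first radiation constant for radiance, W um^4 m^-2 sr^-1
C2 = 1.438777e4      # second radiation constant, um K

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance in W m^-2 sr^-1 um^-1."""
    return C1 / (wavelength_um**5 * (math.exp(C2 / (wavelength_um * temp_k)) - 1.0))

def band_samples(lo_um, hi_um, resolution_um, temp_k):
    """Sample a spectral band at a fixed resolution, as in the simulated
    scenes (resolutions of 0.01/0.05/0.1 um in the paper)."""
    n = int(round((hi_um - lo_um) / resolution_um)) + 1
    return [(lo_um + i * resolution_um,
             planck_radiance(lo_um + i * resolution_um, temp_k))
            for i in range(n)]

# MWIR band at 300 K sampled at 0.05 um resolution
samples = band_samples(3.0, 5.0, 0.05, 300.0)
```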

  12. Overhead Projector Demonstrations: A Classroom Demonstration of Aliphatic Substitution.

    ERIC Educational Resources Information Center

    Perina, Ivo; Mihanovic, Branka

    1989-01-01

    Presents a halogen substitution of an alkane using a compartmentalized Petri dish or Conway dish on an overhead projector. Provides methodology and several modifications for different reactions. Uses hexane, methyl orange, bromine, and silver nitrate. (MVL)

  13. Modular design of the LED vehicle projector headlamp system.

    PubMed

    Hsieh, Chi-Chang; Li, Yan-Huei; Hung, Chih-Ching

    2013-07-20

A well designed headlamp for a vehicle lighting system is very important, as it provides drivers with safe and comfortable driving conditions at night or in dark places. With advances in semiconductor technology, the LED has become the fourth-generation lighting source in the auto industry. In this study, we propose an LED vehicle projector headlamp system. The headlamp system contains several LED headlamp modules, and each module includes four components: focused LEDs, asymmetric metal-based plates, freeform surfaces, and condenser lenses. By optimizing the number of LED headlamp modules, the proposed system requires only five of them. It not only provides the low-beam cutoff without a shield, but also meets the requirements of the ECE R112 regulation. Finally, a prototype of the LED vehicle projector headlamp system was assembled and fabricated to create the correct light pattern.

  14. The causal perturbation expansion revisited: Rescaling the interacting Dirac sea

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Grotz, Andreas

    2010-07-01

    The causal perturbation expansion defines the Dirac sea in the presence of a time-dependent external field. It yields an operator whose image generalizes the vacuum solutions of negative energy and thus gives a canonical splitting of the solution space into two subspaces. After giving a self-contained introduction to the ideas and techniques, we show that this operator is, in general, not idempotent. We modify the standard construction by a rescaling procedure giving a projector on the generalized negative-energy subspace. The resulting rescaled causal perturbation expansion uniquely defines the fermionic projector in terms of a series of distributional solutions of the Dirac equation. The technical core of the paper is to work out the combinatorics of the expansion in detail. It is also shown that the fermionic projector with interaction can be obtained from the free projector by a unitary transformation. We finally analyze the consequences of the rescaling procedure on the light-cone expansion.
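A finite-dimensional toy analogue (emphatically not the paper's infinite-dimensional construction) illustrates the idea of turning a non-idempotent symmetric operator into an exact projector by a spectral "rescaling", i.e. mapping eigenvalues near 1 to exactly 1:

```python
import numpy as np

# Build a symmetric operator A that is only approximately idempotent:
# random orthogonal basis, eigenvalues clustered near 1 and near 0.
rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((6, 6)))[0]
eigs = np.array([1.05, 0.97, 1.02, 0.03, -0.02, 0.01])
A = Q @ np.diag(eigs) @ Q.T            # symmetric, but A @ A != A

# Spectral calculus: keep the eigenspaces with eigenvalue above 1/2,
# replacing those eigenvalues by exactly 1. The result is a true projector.
w, V = np.linalg.eigh(A)
P = V @ np.diag((w > 0.5).astype(float)) @ V.T

assert not np.allclose(A @ A, A)       # A fails idempotency
assert np.allclose(P @ P, P)           # P is idempotent
assert np.allclose(P, P.T)             # and symmetric
```

The rank of P (here 3) is the dimension of the distinguished subspace, playing the role of the generalized negative-energy subspace in the toy picture.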

  15. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be applied directly. With the aid of spatially varying point spread functions and a local frontal plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
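The phase maps that such corrections operate on are typically recovered with a standard N-step phase-shifting formula; a small sketch (this is the textbook formula, not the paper's blur-correction method):

```python
import math

def wrapped_phase(intensities):
    """Recover the wrapped phase from N equally phase-shifted fringe samples
    I_n = A + B*cos(phi + 2*pi*n/N), N >= 3 (standard N-step formula)."""
    n_steps = len(intensities)
    s = sum(i * math.sin(2 * math.pi * n / n_steps) for n, i in enumerate(intensities))
    c = sum(i * math.cos(2 * math.pi * n / n_steps) for n, i in enumerate(intensities))
    return math.atan2(-s, c)

# Synthesize a 4-step measurement with a known phase and recover it.
true_phi = 0.7
imgs = [5.0 + 2.0 * math.cos(true_phi + 2 * math.pi * n / 4) for n in range(4)]
recovered = wrapped_phase(imgs)
```

Local blur perturbs the sampled intensities and hence the recovered phase, which is what the paper's compensation step then corrects.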

  16. A spatially augmented reality sketching interface for architectural daylighting design.

    PubMed

    Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara

    2011-01-01

We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation.

  17. High-accuracy 3D measurement system based on multi-view and structured light

    NASA Astrophysics Data System (ADS)

    Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun

    2013-12-01

3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR). Structured light offers a simple and rapid way to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray code and phase shift. We use a camera and a light projector that casts structured light patterns onto the objects. In this system, a single camera photographs the left and right sides of the object in turn. In addition, we use VisualSFM to recover the relationships between the viewpoints, so camera calibration can be omitted and the positions in which to place the camera are no longer limited. We also set an appropriate exposure time to make the scenes covered by Gray-code patterns more recognizable. All of these points make the reconstruction more precise. We conducted experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
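The Gray-code half of a Gray-code/phase-shift system can be sketched compactly; the pattern layout and function names here are illustrative:

```python
def gray_code(i):
    """Binary-reflected Gray code of integer i (adjacent codes differ in one bit)."""
    return i ^ (i >> 1)

def gray_patterns(width, n_bits):
    """Column-coding patterns as used in Gray-code structured light: pattern b
    carries one bit (MSB first) of each projector column's Gray code."""
    return [[(gray_code(x) >> (n_bits - 1 - b)) & 1 for x in range(width)]
            for b in range(n_bits)]

def decode_column(bits):
    """Invert the Gray code read off one camera pixel (bits MSB first)
    to recover the projector column index."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    x = 0
    while g:        # standard Gray-to-binary prefix XOR
        x ^= g
        g >>= 1
    return x

pats = gray_patterns(8, 3)
col = decode_column([pats[b][5] for b in range(3)])   # reads back column 5
```

The phase-shift patterns then refine the coarse column index to sub-pixel precision.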

  18. Application of infrared spectroscopy and pyrolysis gas chromatography for characterisation of adhesive tapes

    NASA Astrophysics Data System (ADS)

    Zięba-Palus, Janina; Nowińska, Sabina; Kowalski, Rafał

    2016-12-01

Infrared spectroscopy and pyrolysis GC/MS were applied in the comparative analysis of adhesive tapes. By providing information about the polymer composition, it was possible to classify both backings and adhesives of tapes into defined chemical classes. It was found that samples of the same type (of backings and adhesives) and with similar infrared spectra can in most cases be effectively differentiated using Py-GC/MS, sometimes based only on the presence of peaks of very low intensity originating from minor components. The results obtained enabled us to draw the conclusion that Py-GC/MS appears to be a valuable analytical technique for examining tapes, which is complementary to infrared spectroscopy. Identification of pyrolysis products enables discrimination of samples. Both methods also provide crucial information that is useful for identification of adhesive tapes found at the crime scene.

  19. IR in Norway

    NASA Astrophysics Data System (ADS)

    Haakenaasen, Randi; Lovold, Stian

    2003-01-01

Infrared technology in Norway started at the Norwegian Defense Research Establishment (FFI) in the 1960s, and has since spread to universities, other research institutes and industry. FFI has a large, integrated IR activity that includes research and development in IR detectors, optics design, optical coatings, advanced dewar design, modelling/simulation of IR scenes, and image analysis. Part of the integrated activity is a laboratory for more basic research in materials science and semiconductor physics, in which thin films of CdHgTe are grown by molecular beam epitaxy and processed into IR detectors by various techniques. FFI also has extensive experience in the research and development of tunable infrared lasers for various applications. Norwegian industrial activities include production of infrared homing anti-ship missiles, laser rangefinders, various infrared gas sensors, hyperspectral cameras, and fiberoptic sensor systems for structural health monitoring and offshore oil well diagnostics.

  20. A Forensic Experiment: The Case of the Crime at the Cinema

    ERIC Educational Resources Information Center

    Valente Nabais, J. M.; Costa, Sara D.

    2017-01-01

    This paper reports an experimental activity where students have to carefully analyze the evidence collected at the crime scene, namely fibers and lipstick traces. The fibers are analyzed by infrared spectroscopy, solubility tests, and optical microscopy, while in turn the lipstick traces are investigated by thin layer chromatography. Students also…

  1. Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture.

    PubMed

    Narayanan, Balaji; Hardie, Russell C; Muse, Robert A

    2005-06-10

Spatial fixed-pattern noise is a common and major problem in modern infrared imagers owing to the nonuniform response of the photodiodes in the focal-plane array of the imaging system. In addition, the nonuniform response of the readout and digitization electronics, which multiplex the signals from the photodiodes, causes further nonuniformity. We describe a novel scene-based nonuniformity correction algorithm that treats the aggregate nonuniformity in separate stages. First, the nonuniformity from the readout amplifiers is corrected by use of knowledge of the readout architecture of the imaging system. Second, the nonuniformity resulting from the individual detectors is corrected with a nonlinear filter-based method. We demonstrate the performance of the proposed algorithm by applying it to simulated imagery and real infrared data. Quantitative results in terms of the mean absolute error and the signal-to-noise ratio are also presented to demonstrate the efficacy of the proposed algorithm. One advantage of the proposed algorithm is that it requires only a few frames to obtain high-quality corrections.
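As a much-simplified illustration of scene-based nonuniformity correction (offset-only, and not the paper's two-stage readout-aware algorithm), one can average many frames so that moving scene content blurs out, then take each pixel's residual against a local spatial mean as its fixed-pattern offset:

```python
import numpy as np

def estimate_offsets(frames, box=5):
    """Estimate per-pixel fixed-pattern offsets: temporal mean of the sequence
    minus a box-filtered version of that mean (a crude nonlinearity-free sketch)."""
    temporal_mean = frames.mean(axis=0)
    pad = box // 2
    padded = np.pad(temporal_mean, pad, mode="edge")
    smooth = np.zeros_like(temporal_mean)
    h, w = temporal_mean.shape
    for y in range(h):
        for x in range(w):
            smooth[y, x] = padded[y:y + box, x:x + box].mean()
    return temporal_mean - smooth

# Simulate a moving "scene" plus additive fixed-pattern noise, then correct it.
rng = np.random.default_rng(1)
offsets_true = rng.normal(0.0, 5.0, (16, 16))                 # fixed-pattern noise
frames = rng.uniform(80, 120, (200, 16, 16)) + offsets_true   # scene + FPN
corrected = frames - estimate_offsets(frames)
```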

  2. Passive Fourier-transform infrared spectroscopy of chemical plumes: an algorithm for quantitative interpretation and real-time background removal

    NASA Astrophysics Data System (ADS)

    Polak, Mark L.; Hall, Jeffrey L.; Herr, Kenneth C.

    1995-08-01

We present a ratioing algorithm for quantitative analysis of the passive Fourier-transform infrared spectrum of a chemical plume. We show that the transmission of a near-field plume is given by τ_plume = (L_obsd - L_bb,plume)/(L_bkgd - L_bb,plume), where τ_plume is the frequency-dependent transmission of the plume, L_obsd is the spectral radiance of the scene that contains the plume, L_bkgd is the spectral radiance of the same scene without the plume, and L_bb,plume is the spectral radiance of a blackbody at the plume temperature. The algorithm simultaneously achieves background removal, elimination of the spectrometer internal signature, and quantification of the plume spectral transmission. It has applications to both real-time processing for plume visualization and quantitative measurement of plume column densities. The plume temperature, which sets L_bb,plume and is not always precisely known, can have a profound effect on the quantitative interpretation of the algorithm and is discussed in detail. Finally, we provide an illustrative example of the use of the algorithm on a trichloroethylene and acetone plume.
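The ratioing formula is direct to implement once the Planck function supplies the plume blackbody term. A sketch with a built-in consistency check (synthetic radiances, not real FTIR data):

```python
import numpy as np

C1 = 1.191042972e8  # first radiation constant for radiance, W um^4 m^-2 sr^-1
C2 = 1.438777e4     # second radiation constant, um K

def planck(wl_um, t_k):
    """Blackbody spectral radiance, W m^-2 sr^-1 um^-1."""
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * t_k)) - 1.0))

def plume_transmission(l_obsd, l_bkgd, wl_um, t_plume_k):
    """tau_plume = (L_obsd - L_bb,plume) / (L_bkgd - L_bb,plume),
    the ratioing algorithm stated in the abstract."""
    l_bb = planck(wl_um, t_plume_k)
    return (l_obsd - l_bb) / (l_bkgd - l_bb)

# Consistency check: synthesize L_obsd from a known transmission, recover it.
wl = np.linspace(8.0, 12.0, 5)
tau_true = 0.8
l_bkgd = planck(wl, 310.0)                                        # warm background
l_obsd = tau_true * l_bkgd + (1 - tau_true) * planck(wl, 295.0)   # plume at 295 K
tau = plume_transmission(l_obsd, l_bkgd, wl, 295.0)
```

Repeating the check with a wrong plume temperature in the last call shows how sensitive the retrieved transmission is to the L_bb,plume term, as the abstract notes.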

  3. An Automatic Multi-Target Independent Analysis Framework for Non-Planar Infrared-Visible Registration.

    PubMed

    Sun, Xinglong; Xu, Tingfa; Zhang, Jizhou; Zhao, Zishu; Li, Yuankun

    2017-07-26

In this paper, we propose a novel automatic multi-target registration framework for non-planar infrared-visible videos. Previous approaches usually analyzed multiple targets together and then estimated a global homography for the whole scene; however, this cannot achieve precise multi-target registration when the scene is non-planar. Our framework solves the problem using feature matching and multi-target tracking. The key idea is to analyze and register each target independently. We present a fast and robust feature matching strategy in which only the features on corresponding foreground pairs are matched. In addition, new reservoirs based on a Gaussian criterion are created for all targets, and a multi-target tracking method is adopted to determine the relationships between the reservoirs and foreground blobs. With the matches in the corresponding reservoir, the homography of each target is computed according to its moving state. We tested our framework on both public near-planar and non-planar datasets. The results demonstrate that the proposed framework outperforms the state-of-the-art global registration method and the manual global registration matrix on all tested datasets.
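The per-target homographies in such a framework are typically estimated from each target's matched features with a direct linear transform; a basic unnormalized DLT sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point correspondences via the
    basic (unnormalized) DLT: least-squares null vector of the stacked system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, pts):
    p = np.c_[np.asarray(pts, dtype=float), np.ones(len(pts))] @ h.T
    return p[:, :2] / p[:, 2:3]

# Round trip: map points with a known homography, then re-estimate it.
h_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0, 0), (100, 0), (100, 80), (0, 80), (50, 40)]
dst = apply_homography(h_true, src)
h_est = fit_homography(src, dst)
```

In a real pipeline each target's reservoir of matches would be fed to a (robust, e.g. RANSAC-wrapped) step like this, one homography per target.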

  4. Wide area lithologic mapping with ASTER thermal infrared data: Case studies for the regions in/around the Pamir Mountains and the Tarim basin

    NASA Astrophysics Data System (ADS)

    Ninomiya, Yoshiki; Fu, Bihong

    2017-07-01

Since the authors proposed the mineralogical indices, e.g., the Quartz Index (QI), Carbonate Index (CI) and Mafic Index (MI), for ASTER thermal infrared (TIR) data, many articles have applied the indices in geological case studies and shown them to be robust for extracting geological information at the local scale. The authors have also developed a system for producing regional maps with the indices, which requires mosaicking many scenes, considering the relatively narrow spatial coverage of each ASTER scene. The system very efficiently finds ASTER data covering a wide target area in the vast and expanding ASTER data archive; the retrieved data are then conditioned and prioritized, and the indices are calculated before the imagery is finally mosaicked. In this paper, we present two case studies of regional lithologic and mineralogic mapping with the indices, covering very wide regions in and around the Pamir Mountains and the Tarim basin. The characteristic features of the indices related to geology are analysed, interpreted and discussed.
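Commonly cited forms of the three indices can be computed per pixel from ASTER TIR bands 10-14; the exact definitions (especially for MI, which has several published variants) should be verified against the original papers, and the band values below are synthetic:

```python
def aster_tir_indices(b10, b11, b12, b13, b14):
    """Commonly cited forms of the ASTER TIR mineralogical indices, computed
    from bands 10-14. Verify exact definitions against the original papers;
    the MI form below is only one published variant."""
    qi = (b11 * b11) / (b10 * b12)   # Quartz Index
    ci = b13 / b14                   # Carbonate Index
    mi = b12 / b13                   # Mafic Index (one variant)
    return qi, ci, mi

# Illustrative (synthetic) band values only:
qi, ci, mi = aster_tir_indices(8.2, 8.6, 8.1, 8.4, 8.3)
```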

  5. Centripetal Force on an Overhead Projector.

    ERIC Educational Resources Information Center

    Rheam, Harry

    1995-01-01

    Describes two simple demonstrations of an object moving in a straight line tangent to the circle if centripetal force is removed. Demonstrations use a pie plate and petri dish with ball bearings to illustrate the phenomena on an overhead projector. (LZ)

  6. Infrared hyperspectral imaging for chemical vapour detection

    NASA Astrophysics Data System (ADS)

    Ruxton, K.; Robertson, G.; Miller, W.; Malcolm, G. P. A.; Maker, G. T.; Howle, C. R.

    2012-10-01

    Active hyperspectral imaging is a valuable tool in a wide range of applications. One such area is the detection and identification of chemicals, especially toxic chemical warfare agents, through analysis of the resulting absorption spectrum. This work presents a selection of results from a prototype midwave infrared (MWIR) hyperspectral imaging instrument that has successfully been used for compound detection at a range of standoff distances. Active hyperspectral imaging utilises a broadly tunable laser source to illuminate the scene with light at a range of wavelengths. While there are a number of illumination methods, the chosen configuration illuminates the scene by raster scanning the laser beam using a pair of galvanometric mirrors. The resulting backscattered light from the scene is collected by the same mirrors and focussed onto a suitable single-point detector, where the image is constructed pixel by pixel. The imaging instrument that was developed in this work is based around an IR optical parametric oscillator (OPO) source with broad tunability, operating in the 2.6 to 3.7 μm (MWIR) and 1.5 to 1.8 μm (shortwave IR, SWIR) spectral regions. The MWIR beam was primarily used as it addressed the fundamental absorption features of the target compounds compared to the overtone and combination bands in the SWIR region, which can be less intense by more than an order of magnitude. We show that a prototype NCI instrument was able to locate hydrocarbon materials at distances up to 15 metres.

  7. Image simulation for HardWare In the Loop simulation in EO domain

    NASA Astrophysics Data System (ADS)

    Cathala, Thierry; Latger, Jean

    2015-10-01

An infrared camera serving as a weapon subsystem for automatic guidance is a key component of military carriers such as missiles. The associated image processing, which controls the navigation, needs to be assessed intensively. Experimentation in the real world is very expensive, which is the main reason hybrid simulation, also called hardware-in-the-loop (HWIL), is increasingly required nowadays. In that field, IR projectors cast IR photon fluxes directly onto the IR camera of a given weapon system, typically a missile seeker head. In the laboratory, the missile is thus stimulated exactly as in the real world, provided a realistic simulation tool can generate the synthetic images displayed by the IR projectors. The key technical challenge is to render the synthetic images at the required frequency. This paper focuses on OKTAL-SE's experience in this domain through its SE-FAST-HWIL product. It presents the methodology and lessons learned at OKTAL-SE. Examples are given in the frame of the SE-Workbench. The presentation focuses on trials with real, operationally complex 3D cases. In particular, three important topics that are very sensitive with regard to image-generator performance are detailed: first, 3D sea-surface representation; then particle-system rendering, especially for simulating flares; and finally sensor-effect modelling. Beyond "projection mode", some information is given on new SE-FAST-HWIL capabilities dedicated to "injection mode".

  8. Polarimetric infrared imaging simulation of a synthetic sea surface with Mie scattering.

    PubMed

    He, Si; Wang, Xia; Xia, Runqiu; Jin, Weiqi; Liang, Jian'an

    2018-03-01

    A novel method to simulate the polarimetric infrared imaging of a synthetic sea surface with atmospheric Mie scattering effects is presented. The infrared emission, multiple reflections, and infrared polarization of the sea surface and the Mie scattering of aerosols are all included for the first time. At first, a new approach to retrieving the radiative characteristics of a wind-roughened sea surface is introduced. A two-scale method of sea surface realization and the inverse ray tracing of light transfer calculation are combined and executed simultaneously, decreasing the consumption of time and memory dramatically. Then the scattering process that the infrared light emits from the sea surface and propagates in the aerosol particles is simulated with a polarized light Monte Carlo model. Transformations of the polarization state of the light are calculated with the Mie theory. Finally, the polarimetric infrared images of the sea surface of different environmental conditions and detection parameters are generated based on the scattered light detected by the infrared imaging polarimeter. The results of simulation examples show that our polarimetric infrared imaging simulation can be applied to predict the infrared polarization characteristics of the sea surface, model the oceanic scene, and guide the detection in the oceanic environment.
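One standard building block of a polarized Monte Carlo model is the Mueller matrix that rotates the Stokes reference frame into each scattering plane (the full simulation also needs the Mie scattering matrix of the aerosol, omitted here as it depends on particle size and refractive index):

```python
import numpy as np

def stokes_rotation(theta):
    """Mueller matrix rotating the Stokes reference frame by angle theta;
    a standard step when tracking polarized light between scattering events."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,   s, 0.0],
                     [0.0,  -s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

s_in = np.array([1.0, 1.0, 0.0, 0.0])      # fully linearly polarized light
s_out = stokes_rotation(np.pi / 4) @ s_in  # frame rotated 45 degrees
```

The rotation leaves the intensity and the degree of polarization unchanged; it only redistributes the Q and U components, which is why the linear-polarization magnitude is preserved between events.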

  9. Personal projection with Ujoy technology

    NASA Astrophysics Data System (ADS)

    Moench, Holger; Mackens, Uwe; Pekarski, Pavel; Ritz, Arnd; S'heeren, Griet; Verbeek, Will

    2007-02-01

    Personal projection is a new way to use projectors for gaming, entertainment or photo projection. The requirements for this new category have been defined based on market research with focus groups. A screen brightness of 200-300lm out of compact and affordable devices is a must. In order to reach this performance a very bright light source is at least as important as for professional projectors. The new 50W Ujoy lamp system with 1mm arc enables efficient projection systems. Lower cooling requirements, the potential for battery operation and the low voltage input makes it the ideal source for this new category of projectors.

  10. The fermionic projector in a time-dependent external potential: Mass oscillation property and Hadamard states

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Murro, Simone; Röken, Christian

    2016-07-01

    We give a non-perturbative construction of the fermionic projector in Minkowski space coupled to a time-dependent external potential which is smooth and decays faster than quadratically for large times. The weak and strong mass oscillation properties are proven. We show that the integral kernel of the fermionic projector is of the Hadamard form, provided that the time integral of the spatial sup-norm of the potential satisfies a suitable bound. This gives rise to an algebraic quantum field theory of Dirac fields in an external potential with a distinguished pure quasi-free Hadamard state.

  11. BRDF-dependent accuracy of array-projection-based 3D sensors.

    PubMed

    Heist, Stefan; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther

    2017-03-10

    In order to perform high-speed three-dimensional (3D) shape measurements with structured light systems, high-speed projectors are required. One possibility is an array projector, which allows pattern projection at several tens of kilohertz by switching on and off the LEDs of various slide projectors. The different projection centers require a separate analysis, as the intensity received by the cameras depends on the projection direction and the object's bidirectional reflectance distribution function (BRDF). In this contribution, we investigate the BRDF-dependent errors of array-projection-based 3D sensors and propose an error compensation process.

  12. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an algorithm based on optimal nonlinear filtering is proposed for infrared dim-target track-before-detect applications. It uses nonlinear theory to construct the state and observation models, and uses the spectral separation scheme of the Wiener chaos expansion method to solve the stochastic differential equations of the constructed models. To improve computational efficiency, the most time-consuming operations, those independent of the observation data, are processed in a pre-observation stage; the remaining, observation-dependent computations are implemented rapidly afterwards. Simulation results show that the algorithm possesses excellent detection performance and is well suited to real-time processing.

  13. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP), coupled with Monolithic Microwave Integrated Circuit (MMIC) technology, are now making focal-plane-array coherent infrared (IR) cameras viable. Unlike conventional IR cameras, which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate-frequency amplifier and illuminated by a separate local-oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  14. Synthetic Scene Generation of the Stennis V and V Target Range for the Calibration of Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Cao, Chang-Yong; Blonski, Slawomir; Ryan, Robert; Gasser, Jerry; Zanoni, Vicki

    1999-01-01

The verification and validation (V&V) target range developed at Stennis Space Center is a useful test site for the calibration of remote sensing systems. In this paper, we present a simple algorithm for generating synthetic radiance scenes, or digital models, of this target range. The radiation propagation for the target in the solar-reflective and thermal infrared spectral regions is modeled using the atmospheric radiative transfer code MODTRAN 4. The at-sensor in-band radiance and spectral radiance for a given sensor at a given altitude are predicted. Software was developed to generate scenes with different spatial and spectral resolutions using the simulated at-sensor radiance values. The radiometric accuracy of the simulation is evaluated by comparing simulated with AVIRIS-acquired radiance values. The results show that, in general, there is a good match between AVIRIS-measured and MODTRAN-predicted radiance values for the target, although some anomalies exist. Synthetic scenes provide a cost-effective way for in-flight validation of the spatial and radiometric accuracy of the data. Other applications include mission planning, sensor simulation, and trade-off analysis in sensor design.
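The in-band prediction step, weighting spectral radiance by a sensor's relative spectral response, reduces to a normalized integral. A minimal sketch with a flat test spectrum (the trapezoid helper is kept explicit for portability across numpy versions):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration, written out explicitly."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def in_band_radiance(wl_um, spectral_radiance, response):
    """Band-averaged at-sensor radiance: spectral radiance weighted by the
    sensor's relative spectral response, normalized by the response area."""
    return _trapz(spectral_radiance * response, wl_um) / _trapz(response, wl_um)

# Sanity check: a flat spectrum seen through any response returns its own level.
wl = np.linspace(0.4, 0.7, 31)
level = in_band_radiance(wl, np.full_like(wl, 2.5), np.ones_like(wl))
```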

  15. Considerations in Using Computer for Presentation.

    ERIC Educational Resources Information Center

    Lee, Shih-chung

    1997-01-01

    Addresses issues to consider in conducting computer presentations. Discusses presentation devices--television, multiscan capable monitor, LCD (liquid crystal display) panel with overhead projector, and video/RGB (red, green, blue) projector; lighting; audience size; and types of presentations--fast/short time multimedia presentations, oral and…

  16. Spectroscopy on the Overhead Projector.

    ERIC Educational Resources Information Center

    Solomon, Sally; And Others

    1994-01-01

    Any overhead projector easily can be converted into a simple spectrometer by placing a piece of diffraction grating over the projecting lens. A detailed description of the apparatus and suggested spectroscopy experiments are included. Demonstrations can utilize solutions of cobalt chloride, potassium permanganate, potassium dichromate, or…

  17. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.

  18. Evaluation of Landscape Structure Using AVIRIS Quicklooks and Ancillary Data

    NASA Technical Reports Server (NTRS)

    Sanderson, Eric W.; Ustin, Susan L.

    1998-01-01

    Currently the best tool for examining landscape structure is remote sensing, because remotely sensed data provide complete and repeatable coverage over landscapes in many climatic regimes. Many sensors, with a variety of spatial scales and temporal repeat cycles, are available. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has imaged over 4000 scenes from over 100 different sites throughout North America. For each of these scenes, one-band "quicklook" images have been produced for review by AVIRIS investigators. These quicklooks are free, publicly available over the Internet, and provide the most complete set of landscape structure data yet produced. This paper describes the methodologies used to evaluate the landscape structure of quicklooks and generate corresponding datasets for climate, topography and land use. A brief discussion of preliminary results is included at the end. Since quicklooks correspond exactly to their parent AVIRIS scenes, the methods used to derive climate, topography and land use data should be applicable to any AVIRIS analysis.

  19. Modeling and analysis of LWIR signature variability associated with 3D and BRDF effects

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Less, David; Jin, Xuemin; Rynes, Peter

    2016-05-01

    Algorithms for retrieval of surface reflectance, emissivity or temperature from a spectral image almost always assume uniform illumination across the scene and horizontal surfaces with Lambertian reflectance. When these algorithms are used to process real 3-D scenes, the retrieved "apparent" values contain the strong, spatially dependent variations in illumination as well as surface bidirectional reflectance distribution function (BRDF) effects. This is especially problematic with horizontal or near-horizontal viewing, where many observed surfaces are vertical, and where horizontal surfaces can show strong specularity. The goals of this study are to characterize long-wavelength infrared (LWIR) signature variability in a HSI 3-D scene and develop practical methods for estimating the true surface values. We take advantage of synthetic near-horizontal imagery generated with the high-fidelity MultiService Electro-optic Signature (MuSES) model, and compare retrievals of temperature and directional-hemispherical reflectance using standard sky downwelling illumination and MuSES-based non-uniform environmental illumination.
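The "apparent" temperature such retrievals report is, at its simplest, a single-wavelength inversion of the Planck function; a round-trip sketch (standard radiometry, not the MuSES model):

```python
import math

C1 = 1.191042972e8  # first radiation constant for radiance, W um^4 m^-2 sr^-1
C2 = 1.438777e4     # second radiation constant, um K

def planck(wl_um, t_k):
    """Blackbody spectral radiance, W m^-2 sr^-1 um^-1."""
    return C1 / (wl_um**5 * (math.exp(C2 / (wl_um * t_k)) - 1.0))

def brightness_temperature(wl_um, radiance):
    """Invert the Planck function at one wavelength: the apparent temperature
    a retrieval reports when illumination and BRDF effects are ignored."""
    return C2 / (wl_um * math.log(1.0 + C1 / (wl_um**5 * radiance)))

# Round trip at 10 um, 300 K:
t = brightness_temperature(10.0, planck(10.0, 300.0))
```

Spatially varying illumination and specular BRDF terms add radiance that this inversion folds into temperature, which is exactly the signature variability the study characterizes.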

  20. Efficiency enhancement of liquid crystal projection displays using light recycle technology

    NASA Technical Reports Server (NTRS)

    Wang, Y.

    2002-01-01

A new technology developed at JPL, which uses low-absorption color filters together with a polarization and color recycling system, can raise the efficiency of a single-panel liquid crystal display (LCD) projector to that of a three-panel LCD projector.

  1. Effects of mold design of aspheric projector lens for head up display

    NASA Astrophysics Data System (ADS)

    Chen, Chao-Chang A.; Tang, Jyun-Cing; Teng, Lin-Ming

    2010-08-01

This paper investigates the mold design and related effects for an aspheric projector lens for a Head Up Display (HUD) produced by injection molding. Injection flow analysis with commercial software, Moldex3D, has been used to simulate filling, packing, shrinkage, and flow-induced residual stress in this projector lens. The lens has varying thickness because of the different aspheric designs on its two surfaces. Defects such as jetting of the melt front as it enters the cavity from the gate, short shots, weld lines, and shrinkage may be induced. Thus, this paper develops a gate design and identifies the significant process parameters, including injection velocity, melt temperature, and mold temperature. After simulation with Moldex3D, a gate design for the final HUD assembly was obtained, and experimental tests were then performed to verify short-shot behavior, weight variation, and flow-induced stress. In future work, warpage analysis of the HUD can be integrated with the optical design specification.

  2. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

In structured light measurement systems or 3D printing systems, errors caused by the optical distortion of a digital projector always affect measurement precision and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so calibration performance is largely limited by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that uses photodiodes to directly detect the light emitted by a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
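To illustrate the polynomial distortion idea, the sketch below fits a 2D polynomial mapping from ideal to observed projector pixel coordinates by least squares. This is a minimal illustration under assumed conventions (the function names and the degree-3 monomial model are this sketch's choices, not the paper's implementation):

```python
import numpy as np

def _design_matrix(pts, degree):
    """Monomial design matrix: columns x^i * y^j with i + j <= degree."""
    x, y = pts[:, 0], pts[:, 1]
    terms = [x**i * y**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    return np.column_stack(terms)

def fit_polynomial_distortion(ideal, observed, degree=3):
    """Least-squares fit of a polynomial mapping ideal -> observed pixels.

    Returns one coefficient vector per output coordinate (u, v).
    """
    A = _design_matrix(ideal, degree)
    cu, *_ = np.linalg.lstsq(A, observed[:, 0], rcond=None)
    cv, *_ = np.linalg.lstsq(A, observed[:, 1], rcond=None)
    return cu, cv

def apply_polynomial(cu, cv, pts, degree=3):
    """Evaluate the fitted distortion model at new points."""
    A = _design_matrix(pts, degree)
    return np.column_stack([A @ cu, A @ cv])
```

Once fitted, the model can be inverted numerically or tabulated to correct projected patterns.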

  3. Field-Sequential Electronic Stereoscopic Projector

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny

    1989-07-01

    Culminating a research and development project spanning many years, StereoGraphics Corporation has succeeded in bringing to market the first field-sequential electronic stereoscopic projector. The product is based on a modification of Electrohome and Barco projectors. Our design goal was to produce a projector capable of displaying an image on a six-foot (or larger) diagonal screen for an audience of 50 or 60 people, or for an individual using a simulator. A second goal was to produce an image that required only passive polarizing glasses rather than powered, tethered visors. Two major design challenges posed themselves. First, it was necessary to create an electro-optical modulator which could switch the characteristic of polarized light at field rate, and second, it was necessary to produce a bright green CRT with short persistence to prevent crosstalk between left and right fields. To solve the first problem, development was undertaken to produce the required electro-optical modulator. The second problem was solved with the help of a vendor specializing in high performance CRT's.

  4. Thermal management of thermoacoustic sound projectors using a free-standing carbon nanotube aerogel sheet as a heat source.

    PubMed

    Aliev, Ali E; Mayo, Nathanael K; Baughman, Ray H; Avirovik, Dragan; Priya, Shashank; Zarnetske, Michael R; Blottman, John B

    2014-10-10

Carbon nanotube (CNT) aerogel sheets produce smooth-spectra sound over a wide frequency range (1-10^5 Hz) by means of thermoacoustic (TA) sound generation. Protective encapsulation of CNT sheets in inert gases between rigid vibrating plates provides resonant features for the TA sound projector and attractive performance at needed low frequencies. Energy conversion efficiencies of 2% in air and 10% underwater were obtained, and can be enhanced by further increasing the modulation temperature. Using a method developed for accurate temperature measurement of the thin aerogel CNT sheets, heat dissipation processes, failure mechanisms, and associated power densities are investigated for encapsulated multilayered CNT TA heaters and related to the thermal diffusivity distance when sheet layers are separated. The resulting thermal management methods for high applied power are discussed and deployed to construct an efficient and tunable underwater sound projector operating at relatively low frequencies, 10 Hz-10 kHz. The optimal design of these TA projectors for high-power SONAR arrays is discussed.

  5. Projection display technology and product trends

    NASA Astrophysics Data System (ADS)

    Kahn, Frederic J.

    1999-05-01

Major technology and market trends that could generate a 20 billion dollar electronic projector market by 2010 are reviewed in the perspective of recent product introductions. A log-linear analysis shows that the light outputs of benchmark transportable data/video projectors have increased at a rate of almost 90 percent per year since 1993. The list prices of these same projectors have decreased at a rate of over 40 percent per year. The tradeoffs of light output vs. resolution and weight are illustrated, and recent trends in projector efficacy vs. year are discussed. Lumen output per dollar of list price is shown to be a useful market metric. Continued technical advances and innovations, including higher-throughput light valve technologies with integrated drivers, brighter light sources, field-sequential color, integrated- and micro-optical components, and aerospace materials, are likely to sustain these trends. The new technologies will enable projection displays for entertainment and computer applications with unprecedented levels of performance, compactness, and cost-effectiveness.

  6. Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor.

    PubMed

    Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong

    2016-12-29

When detecting a target over the diurnal cycle, a conventional infrared thermal sensor may lose the target during thermal crossover, which can happen at any time of day when temperature variation makes the infrared contrast between target and background in a scene indistinguishable. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are demonstrated. First, a brief theoretical analysis is presented of how thermal crossover degrades a conventional thermal sensor, the conditions under which it occurs, and why mid-infrared (3~5 μm) multispectral technology is effective. Second, the prototype design is described, along with how multispectral technology helps solve the thermal crossover detection problem. Third, several targets were set up outdoors and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of a conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications.

  7. Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor

    PubMed Central

    Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong

    2016-01-01

When detecting a target over the diurnal cycle, a conventional infrared thermal sensor may lose the target during thermal crossover, which can happen at any time of day when temperature variation makes the infrared contrast between target and background in a scene indistinguishable. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are demonstrated. First, a brief theoretical analysis is presented of how thermal crossover degrades a conventional thermal sensor, the conditions under which it occurs, and why mid-infrared (3~5 μm) multispectral technology is effective. Second, the prototype design is described, along with how multispectral technology helps solve the thermal crossover detection problem. Third, several targets were set up outdoors and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of a conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications. PMID:28036073

  8. Human Perceptual Performance With Nonliteral Imagery: Region Recognition and Texture-Based Segmentation

    ERIC Educational Resources Information Center

    Essock, Edward A.; Sinai, Michael J.; DeFord, Kevin; Hansen, Bruce C.; Srinivasan, Narayanan

    2004-01-01

    In this study the authors address the issue of how the perceptual usefulness of nonliteral imagery should be evaluated. Perceptual performance with nonliteral imagery of natural scenes obtained at night from infrared and image-intensified sensors and from multisensor fusion methods was assessed to relate performance on 2 basic perceptual tasks to…

  9. Studying the Earth from space

    USGS Publications Warehouse

    ,

    1981-01-01

    scene, as man sees it, on film sensitive to that part of the electromagnetic spectrum called visible energy. Electromagnetic energy travels in waves of various lengths; most are invisible to the human eye. Wavelengths progressively longer than those that the eye can see are infrared and microwave. Wavelengths progressively shorter than the eye can see are ultraviolet, X-rays, and gamma rays.

  10. Combined use of visible, reflected infrared, and thermal infrared images for mapping Hawaiian lava flows

    NASA Technical Reports Server (NTRS)

    Abrams, Michael; Abbott, Elsa; Kahle, Anne

    1991-01-01

    The weathering of Hawaiian basalts is accompanied by chemical and physical changes of the surfaces. These changes have been mapped using remote sensing data from the visible and reflected infrared and thermal infrared wavelength regions. They are related to the physical breakdown of surface chill coats, the development and erosion of silica coatings, the oxidation of mafic minerals, and the development of vegetation cover. These effects show systematic behavior with age and can be mapped using the image data and related to relative ages of pahoehoe and aa flows. The thermal data are sensitive to silica rind development and fine structure of the scene; the reflectance data show the degree of oxidation and differentiate vegetation from aa and cinders. Together, data from the two wavelength regions show more than either separately. The combined data potentially provide a powerful tool for mapping basalt flows in arid to semiarid volcanic environments.

  11. Edge enhancement and noise suppression for infrared image based on feature analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Meng

    2018-06-01

Infrared images often suffer from background noise, blurred edges, few details, and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To this end, we propose a novel algorithm based on feature analysis in the shearlet domain. First, we introduce the theory and advantages of the shearlet transform, one of the tools of multiscale geometric analysis (MGA). Second, after analyzing the defects of the traditional thresholding technique for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well and use it to improve traditional thresholding. Third, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are built to improve generalized unsharp masking (GUM). Finally, experiments with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.
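The thresholding step this abstract builds on can be illustrated with generic transform-domain soft thresholding. This is a minimal sketch on a plain coefficient array; an actual shearlet transform (and the paper's improved feature-based rule) is beyond its scope:

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Soft thresholding: shrink transform-domain coefficients toward zero.

    Coefficients with magnitude below tau (assumed noise-dominated) are
    zeroed; larger ones (assumed structure) are shrunk by tau.
    """
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)
```

In a denoising pipeline this would be applied to the subband coefficients before the inverse transform; tau is typically tied to a noise-level estimate.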

  12. Weber-aware weighted mutual information evaluation for infrared-visible image fusion

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wang, Shining; Yuan, Ding

    2016-10-01

    A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
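The two Weber components named above can be sketched as follows. The abstract does not give exact formulas, so this sketch assumes the common Weber Local Descriptor definitions: differential excitation as the arctangent of summed neighbor-minus-center differences over the center intensity, and orientation from the local gradient.

```python
import numpy as np

def weber_differential_excitation(img, eps=1e-6):
    """Differential excitation: arctan(sum of 8-neighbour differences / centre)."""
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    diff_sum = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            diff_sum += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - img
    return np.arctan(diff_sum / (img + eps))

def weber_orientation(img, eps=1e-6):
    """Orientation component from the local intensity gradient."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.arctan2(gy, gx + eps)
```

Comparing these per-pixel components between the infrared and visible inputs is what lets the metric label each pixel as intensity-dominant or structure-dominant.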

  13. Imaging live humans through smoke and flames using far-infrared digital holography.

    PubMed

    Locatelli, M; Pugliese, E; Paturzo, M; Bianco, V; Finizio, A; Pelagotti, A; Poggi, P; Miccio, L; Meucci, R; Ferraro, P

    2013-03-11

    The ability to see behind flames is a key challenge for the industrial field and particularly for the safety field. Development of new technologies to detect live people through smoke and flames in fire scenes is an extremely desirable goal since it can save human lives. The latest technologies, including equipment adopted by fire departments, use infrared bolometers for infrared digital cameras that allow users to see through smoke. However, such detectors are blinded by flame-emitted radiation. Here we show a completely different approach that makes use of lensless digital holography technology in the infrared range for successful imaging through smoke and flames. Notably, we demonstrate that digital holography with a cw laser allows the recording of dynamic human-size targets. In this work, easy detection of live, moving people is achieved through both smoke and flames, thus demonstrating the capability of digital holography at 10.6 μm.

  14. Interpretation of multispectral and infrared thermal surveys of the Suez Canal Zone, Egypt

    NASA Technical Reports Server (NTRS)

    Elshazly, E. M.; Hady, M. A. A. H.; Hafez, M. A. A.; Salman, A. B.; Morsy, M. A.; Elrakaiby, M. M.; Alaassy, I. E. E.; Kamel, A. F.

    1977-01-01

Remote sensing airborne surveys of the Suez Canal Zone were conducted, as part of the rehabilitation plan, using an I2S multispectral camera and a Bendix LN-3 infrared passive scanner. The multispectral camera produces four separate photographs of the same scene in the blue, green, red, and near-infrared bands. The scanner operated in the thermal infrared band of 8 to 14 microns, and thermal surveying was carried out both at night and in the daytime. The surveys, coupled with intensive ground investigations, were used to construct new geological, structural lineation, and drainage maps of the Suez Canal Zone at a scale of approximately 1:20,000, which are superior to maps made from normal aerial photography. A considerable number of anomalies of various types were revealed through interpretation of the multispectral and infrared thermal surveys.

  15. Infrared small target detection based on multiscale center-surround contrast measure

    NASA Astrophysics Data System (ADS)

    Fu, Hao; Long, Yunli; Zhu, Ran; An, Wei

    2018-04-01

Infrared (IR) small target detection plays a critical role in Infrared Search and Track (IRST) systems. Although it has been studied for years, difficulties remain in cluttered environments. Following the principle by which humans discriminate small targets in a natural scene, namely that there is a signature of discontinuity between an object and its neighboring regions, we develop an efficient method for infrared small target detection called the multiscale center-surround contrast measure (MCSCM). First, an entropy-based window selection technique is used to determine the maximum neighboring window size. Then, we construct a novel multiscale center-surround contrast measure to calculate the saliency map. Compared with the original image, the MCSCM map contains less background clutter and residual noise. Subsequently, a simple threshold is used to segment the target. Experimental results show that our method achieves better performance.
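The center-surround idea can be sketched as a toy saliency map: at each pixel and scale, compare the mean of a small center window with the mean of the surrounding ring, keeping the maximum response across scales. The window definitions are this sketch's assumptions; the paper's exact MCSCM formulation and its entropy-based window selection are not reproduced here.

```python
import numpy as np

def center_surround_contrast(img, scales=(1, 2, 3)):
    """Toy multiscale center-surround contrast map (brute-force loops)."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for s in scales:
        c, r = s, 3 * s  # center half-width, surround half-width (assumed ratio)
        for y in range(r, h - r):
            for x in range(r, w - r):
                center_win = img[y - c:y + c + 1, x - c:x + c + 1]
                outer_win = img[y - r:y + r + 1, x - r:x + r + 1]
                # Surround mean = ring mean, excluding the center window
                surround = (outer_win.sum() - center_win.sum()) / \
                           (outer_win.size - center_win.size)
                out[y, x] = max(out[y, x], center_win.mean() - surround)
    return out
```

A bright point on a flat background produces its strongest responses around the point, after which a global threshold segments the candidate target.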

  16. Tree Canopy Characterization for EO-1 Reflective and Thermal Infrared Validation Studies: Rochester, New York

    NASA Technical Reports Server (NTRS)

    Ballard, Jerrell R., Jr.; Smith, James A.

    2002-01-01

    The tree canopy characterization presented herein provided ground and tree canopy data for different types of tree canopies in support of EO-1 reflective and thermal infrared validation studies. These characterization efforts during August and September of 2001 included stem and trunk location surveys, tree structure geometry measurements, meteorology, and leaf area index (LAI) measurements. Measurements were also collected on thermal and reflective spectral properties of leaves, tree bark, leaf litter, soil, and grass. The data presented in this report were used to generate synthetic reflective and thermal infrared scenes and images that were used for the EO-1 Validation Program. The data also were used to evaluate whether the EO-1 ALI reflective channels can be combined with the Landsat-7 ETM+ thermal infrared channel to estimate canopy temperature, and also test the effects of separating the thermal and reflective measurements in time resulting from satellite formation flying.

  17. Hyperspectral and Hypertemporal Longwave Infrared Data Characterization

    NASA Astrophysics Data System (ADS)

    Jeganathan, Nirmalan

The Army Research Lab conducted a persistent imaging experiment called the Spectral and Polarimetric Imagery Collection Experiment (SPICE) in 2012 and 2013, which focused on collecting and exploiting long-wave infrared hyperspectral and polarimetric imagery. Part of this dataset was publicly released for research and development purposes. This thesis investigated the hyperspectral portion of the released dataset through data characterization and scene characterization of man-made and natural objects. First, the data were contrasted with MODerate resolution atmospheric TRANsmission (MODTRAN) results and found to be comparable. Instrument noise was characterized using an in-scene black panel and found to match the sensor manufacturer's specification. The temporal and spatial variation of certain objects in the scene were characterized. Temporal target detection was conducted on man-made objects in the scene using three target detection algorithms: spectral angle mapper (SAM), spectral matched filter (SMF), and adaptive coherence/cosine estimator (ACE). SMF produced the best results for detecting targets when the training and testing data originated from different time periods, with a time-index percentage result of 52.9%. Unsupervised and supervised classification were conducted using spectral and temporal target signatures. Temporal target signatures produced better visual classification than spectral target signatures for unsupervised classification; supervised classification yielded better results using the spectral target signatures, with a highest weighted accuracy of 99% for a 7-class reference image. Four emissivity retrieval algorithms were applied to this dataset; however, the retrieved emissivities from all four methods did not represent true material emissivity and could not be used for analysis. This spectrally and temporally rich dataset enabled analysis that was not possible with other data collections. Regarding future work, applying noise-reduction techniques before applying temperature-emissivity retrieval algorithms may produce more realistic emissivity values, which could be used for target detection and material identification.

  18. Mid-infrared hyperspectral imaging for the detection of explosive compounds

    NASA Astrophysics Data System (ADS)

    Ruxton, K.; Robertson, G.; Miller, W.; Malcolm, G. P. A.; Maker, G. T.

    2012-10-01

    Active hyperspectral imaging is a valuable tool in a wide range of applications. A developing market is the detection and identification of energetic compounds through analysis of the resulting absorption spectrum. This work presents a selection of results from a prototype mid-infrared (MWIR) hyperspectral imaging instrument that has successfully been used for compound detection at a range of standoff distances. Active hyperspectral imaging utilises a broadly tunable laser source to illuminate the scene with light over a range of wavelengths. While there are a number of illumination methods, this work illuminates the scene by raster scanning the laser beam using a pair of galvanometric mirrors. The resulting backscattered light from the scene is collected by the same mirrors and directed and focussed onto a suitable single-point detector, where the image is constructed pixel by pixel. The imaging instrument that was developed in this work is based around a MWIR optical parametric oscillator (OPO) source with broad tunability, operating at 2.6 μm to 3.7 μm. Due to material handling procedures associated with explosive compounds, experimental work was undertaken initially using simulant compounds. A second set of compounds that was tested alongside the simulant compounds is a range of confusion compounds. By having the broad wavelength tunability of the OPO, extended absorption spectra of the compounds could be obtained to aid in compound identification. The prototype imager instrument has successfully been used to record the absorption spectra for a range of compounds from the simulant and confusion sets and current work is now investigating actual explosive compounds. The authors see a very promising outlook for the MWIR hyperspectral imager. From an applications point of view this format of imaging instrument could be used for a range of standoff, improvised explosive device (IED) detection applications and potential incident scene forensic investigation.

  19. The Radiative Consistency of Atmospheric Infrared Sounder and Moderate Resolution Imaging Spectroradiometer Cloud Retrievals

    NASA Technical Reports Server (NTRS)

    Kahn, Brian H.; Fishbein, Evan; Nasiri, Shaima L.; Eldering, Annmarie; Fetzer, Eric J.; Garay, Michael J.; Lee, Sung-Yung

    2007-01-01

The consistency of cloud top temperature (Tc) and effective cloud fraction (f) retrieved by the Atmospheric Infrared Sounder (AIRS)/Advanced Microwave Sounding Unit (AMSU) observation suite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on the EOS-Aqua platform is investigated. Collocated AIRS and MODIS Tc and f are compared via an 'effective scene brightness temperature' (Tb,e). Tb,e is calculated with partial field-of-view (FOV) contributions from Tc and surface temperature (Ts), weighted by f and 1-f, respectively. AIRS reports up to two cloud layers while MODIS reports up to one; however, MODIS reports Tc, Ts, and f at a higher spatial resolution than AIRS. As a result, pixel-scale comparisons of Tc and f are difficult to interpret, demonstrating the need for alternatives such as Tb,e. AIRS-MODIS Tb,e differences (ΔTb,e) for identical observing scenes are useful as a diagnostic for cloud quantity comparisons. The smallest values of ΔTb,e are for high and opaque clouds, with increasing scatter in ΔTb,e for clouds of smaller opacity and lower altitude. A persistent positive bias in ΔTb,e is observed in warmer and low-latitude scenes, characterized by a mixture of MODIS CO2-slicing and 11-μm window retrievals. These scenes contain heterogeneous cloud cover, including mixtures of multilayered cloudiness and misplaced MODIS cloud top pressure. The spatial patterns of ΔTb,e are systematic and do not correlate well with collocated AIRS-MODIS radiance differences, which are more random in nature and smaller in magnitude than ΔTb,e. This suggests that the observed inconsistencies in AIRS and MODIS cloud fields are dominated by retrieval algorithm differences rather than differences in the observed radiances. The results presented here have implications for the validation of cloudy satellite retrieval algorithms and for the use of cloud products in quantitative analyses.
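The effective scene brightness temperature can be sketched as below. The abstract only states that the Tc and Ts contributions are weighted by f and 1-f; combining them in radiance space via Planck's law at an assumed 11-μm window wavelength, and the function names, are this sketch's assumptions.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(T, wav):
    """Spectral radiance B(T) at wavelength wav (m), in W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wav**5) / np.expm1(H * C / (wav * KB * T))

def brightness_temperature(L, wav):
    """Invert Planck's law: temperature whose blackbody radiance equals L."""
    return H * C / (wav * KB * np.log1p(2 * H * C**2 / (wav**5 * L)))

def effective_scene_tb(Tc, Ts, f, wav=11e-6):
    """Effective scene brightness temperature Tb,e: cloud-top and surface
    contributions weighted by cloud fraction f and 1-f in radiance space."""
    L = f * planck_radiance(Tc, wav) + (1 - f) * planck_radiance(Ts, wav)
    return brightness_temperature(L, wav)
```

Because radiance is monotone in temperature, Tb,e always falls between Tc and Ts for 0 < f < 1, recovering Tc for a fully cloudy FOV and Ts for a clear one.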

  20. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information for robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. To provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.

  1. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

Single-pixel imaging is a novel, rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operational modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance on real imaging. We also discuss the potential use of the single-pixel imaging system for quantum applications.
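One of the simplest reconstruction algorithms such a system could use is a least-squares inversion of the measurement model: each single-pixel reading is the inner product of a known illumination pattern with the scene. This is an illustrative sketch under assumed conventions, not one of the authors' algorithms.

```python
import numpy as np

def single_pixel_reconstruct(patterns, measurements):
    """Least-squares single-pixel reconstruction.

    patterns:     (M, H, W) array of illumination patterns shown in sequence
    measurements: (M,) array of corresponding single-pixel detector readings
    Solves A x = y, where each row of A is a flattened pattern.
    """
    A = patterns.reshape(patterns.shape[0], -1)
    x, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return x.reshape(patterns.shape[1:])
```

With at least as many (linearly independent) patterns as pixels the scene is recovered exactly in the noiseless case; with fewer measurements, compressive-sensing reconstructions are typically used instead.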

  2. A Formulation of Quantum Field Theory Realizing a Sea of Interacting Dirac Particles

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2011-08-01

    In this survey article, we explain a few ideas behind the fermionic projector approach and summarize recent results which clarify the connection to quantum field theory. The fermionic projector is introduced, which describes the physical system by a collection of Dirac states, including the states of the Dirac sea. Formulating the interaction by an action principle for the fermionic projector, we obtain a consistent description of interacting quantum fields which reproduces the results of perturbative quantum field theory. We find a new mechanism for the generation of boson masses and obtain small corrections to the field equations which violate causality.

  3. Overhead Projector Demonstrations: Some Ideas from the Past.

    ERIC Educational Resources Information Center

    Kolb, Doris

    1987-01-01

    Describes nine chemistry demonstrations that can be done using an overhead projector. Includes demonstrations on common ion effect, crystal formation from supersaturated solutions, making iron positive with nitric acid, optical activity, carbon dioxide in human breath, amphoteric hydroxides, the surface tension of mercury, and natural acid-base…

  4. Automated Slide Projector

    ERIC Educational Resources Information Center

    Gould, Mauri

    1975-01-01

    Describes assembly of a moderately priced synchronized projector and cassette tape recorder using a single channel recorder with a tuned amplifier to separate voice and control tones. Construction requires familiarity with transistors and use of an oscilloscope with an audio signal generator. A picture as well as schematics is provided. (GH)

  5. Uniqueness of thermodynamic projector and kinetic basis of molecular individualism

    NASA Astrophysics Data System (ADS)

    Gorban, Alexander N.; Karlin, Iliya V.

    2004-05-01

Three results are presented. First, we solve the problem of persistence of dissipation for the reduction of kinetic models. Kinetic equations with thermodynamic Lyapunov functions are studied, and uniqueness of the thermodynamic projector is proven: there exists only one projector which transforms any vector field equipped with a given Lyapunov function into a vector field with the same Lyapunov function, for a given ansatz manifold that is not tangent to the Lyapunov function levels. Second, we use the thermodynamic projector to develop the short-memory approximation and coarse-graining for general nonlinear dynamic systems. We prove that in this approximation the entropy production increases (the theorem about entropy overproduction). As an example, we apply the thermodynamic projector to derive the equations of reduced kinetics for the Fokker-Planck equation. A new class of closures is developed, the kinetic multipeak polyhedra. Distributions of this type are expected in kinetic models with multidimensional instability as universally as the Gaussian distribution appears for stable systems. The number of possible relatively stable states of a nonequilibrium system grows as 2^m, and the number of macroscopic parameters is of order mn, where n is the dimension of configuration space and m is the number of independent unstable directions in this space. The elaborated class of closures and equations aims to describe the effects of "molecular individualism"; this is the third result.

  6. Neonatal infrared thermography imaging: Analysis of heat flux during different clinical scenarios

    NASA Astrophysics Data System (ADS)

    Abbas, Abbas K.; Heimann, Konrad; Blazek, Vladimir; Orlikowsky, Thorsten; Leonhardt, Steffen

    2012-11-01

    Introduction: Accurate skin temperature measurement with Neonatal Infrared Thermography (NIRT) imaging requires an appropriate calibration process to compensate for external effects (e.g. variation of environmental temperature, variable air velocity, or humidity). Although modern infrared cameras can perform such calibration, an additional compensation is required for highly accurate thermography. This compensation, which corrects any temperature drift, should occur during the NIRT imaging process. We introduce a compensation technique based on modeling the physical interactions within the measurement scene and deriving the detected temperature signal of the object. Materials and methods: In this work such compensation was performed for different NIRT imaging applications in neonatology (e.g. convective incubators, kangaroo mother care (KMC), and an open radiant warmer). The spatially distributed temperatures of 12 preterm infants (average gestational age 31 weeks) were measured under these different infant care arrangements (i.e. a closed care system such as a convective incubator, and open care systems such as kangaroo mother care and an open radiant warmer). Results: As errors in temperature measurement were anticipated, a novel compensation method derived from infrared thermography of the neonate's skin was developed. Moreover, the temperature recordings for the 12 preterm infants varied from subject to subject; this variation could arise from the individual experimental setting applied to the same region of interest over the neonate's body. The experimental results for the model-based corrections are verified over the selected patient group. Conclusion: The proposed technique relies on applying model-based correction to the measured temperature and reducing extraneous errors during NIRT. This application-specific method is based on the different heat flux compartments present in the neonatal thermography scene. Furthermore, these results are considered groundwork for further investigation, especially when using a NIRT imaging arrangement with additional compensation settings together with reference temperature measurements.

  7. Looking At Display Technologies

    ERIC Educational Resources Information Center

    Bull, Glen; Bull, Gina

    2005-01-01

    A projection system in a classroom with an Internet connection provides a window on the world. Until recently, projectors were expensive and difficult to maintain. Technological advances have resulted in solid-state projectors that require little maintenance and cost no more than a computer. Adding a second or third computer to a classroom…

  8. A Simple Polarimeter and Experiments Utilizing an Overhead Projector.

    ERIC Educational Resources Information Center

    Dorn, H. C.; And Others

    1984-01-01

    Although polarimeters that illustrate rotation of plane-polarized light by chiral solutions have been previously described, the polarimeter described in this paper has certain advantages when used in conjunction with an overhead projector. Instructions for constructing this polarimeter and its use in demonstrating the optical activity of sugars…

  9. Viewing Vertical Objects with an Overhead Projector.

    ERIC Educational Resources Information Center

    Wild, R. L.

    1988-01-01

    Discusses the use of an overhead projector for the deflection of a vertical image to a screen. Describes three demonstrations: magnetizing of a steel ball bearing and paper clip; convection currents of a hot liquid within a cold liquid; and oscillation of concentrated salt solution into fresh water. (YP)

  10. Optical projectors simulate human eyes to establish operator's field of view

    NASA Technical Reports Server (NTRS)

    Beam, R. A.

    1966-01-01

    Device projects visual pattern limits of the field of view of an operator as his eyes are directed at a given point on a control panel. The device, which consists of two projectors, provides instant evaluation of visual ability at a point on a panel.

  11. The Overhead Projector in the Mathematics Classroom.

    ERIC Educational Resources Information Center

    Lenchner, George

    The first section of this pamphlet illustrates and describes the overhead projector, and discusses several of its advantages over other projection devices, including its simplicity of operation, conservation of class time, dynamic effects, image size, etc. The second section describes in some detail materials and methods used to make visuals, then…

  12. Multispectral linear array visible and shortwave infrared sensors

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; Warren, F. B.; Pellon, L. E.; Strong, R.; Elabd, H.; Cope, A. D.; Hoffmann, D. M.; Kramer, W. M.; Longsderff, R. W.

    1984-08-01

    All-solid state pushbroom sensors for multispectral linear array (MLA) instruments to replace mechanical scanners used on LANDSAT satellites are introduced. A buttable, four-spectral-band, linear-format charge coupled device (CCD) and a buttable, two-spectral-band, linear-format, shortwave infrared CCD are described. These silicon integrated circuits may be butted end to end to provide multispectral focal planes with thousands of contiguous, in-line photosites. The visible CCD integrated circuit is organized as four linear arrays of 1024 pixels each. Each array views the scene in a different spectral window, resulting in a four-band sensor. The shortwave infrared (SWIR) sensor is organized as 2 linear arrays of 512 detectors each. Each linear array is optimized for performance at a different wavelength in the SWIR band.

  13. Room-temperature quantum noise limited spectrometry and methods of the same

    DOEpatents

    Stevens, Charles G.; Tringe, Joseph W.; Cunningham, Christopher Thomas

    2014-08-26

    In one embodiment, a heterodyne detection system for detecting light includes a first input aperture adapted for receiving first light from a scene input, a second input aperture adapted for receiving second light from a local oscillator input, a broadband local oscillator adapted for providing the second light to the second input aperture, a dispersive element adapted for dispersing the first light and the second light, and a final condensing lens coupled to an infrared detector. The final condensing lens is adapted for concentrating incident light from a primary condensing lens onto the infrared detector, and the infrared detector is a square-law detector capable of sensing the frequency difference between the first light and the second light. More systems and methods for detecting light are described according to other embodiments.

  14. Room-temperature quantum noise limited spectrometry and methods of the same

    DOEpatents

    Stevens, Charles G.; Tringe, Joseph W.; Cunningham, Christopher T.

    2016-08-02

    In one embodiment, a heterodyne detection system for detecting light includes a first input aperture configured to receive first light from a scene input, a second input aperture configured to receive second light from a local oscillator input, a broadband local oscillator configured to provide the second light to the second input aperture, a dispersive element configured to disperse the first light and the second light, and a final condensing lens coupled to an infrared detector. The final condensing lens is configured to concentrate incident light from a primary condensing lens onto the infrared detector, and the infrared detector is a square-law detector capable of sensing the frequency difference between the first light and the second light. More systems and methods for detecting light are described according to other embodiments.
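    The square-law detection step described in these patent abstracts can be illustrated numerically. This is a hedged sketch with invented frequencies and sample rates (none of the values come from the patents): a square-law detector responds to the intensity of the summed fields, so its output contains a beat at the difference frequency between scene light and local oscillator.

```python
import numpy as np

fs = 1e6                        # sample rate (Hz), illustrative
t = np.arange(0, 1e-3, 1 / fs)  # 1 ms record -> 1 kHz bin spacing
f_scene, f_lo = 210e3, 200e3    # stand-ins for the two optical frequencies
e_scene = np.cos(2 * np.pi * f_scene * t)
e_lo = np.cos(2 * np.pi * f_lo * t)

# Square-law detection: the photocurrent follows |E_scene + E_LO|^2,
# whose cross term oscillates at |f_scene - f_lo|.
i_det = (e_scene + e_lo) ** 2

spec = np.abs(np.fft.rfft(i_det - i_det.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
beat = freqs[np.argmax(spec[freqs < 50e3])]  # strongest low-frequency line
print(round(beat))  # 10000 (Hz), i.e. the 10 kHz difference frequency
```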

  15. The remote sensing of algae

    NASA Technical Reports Server (NTRS)

    Thorne, J. F.

    1977-01-01

    State agencies need rapid, synoptic and inexpensive methods for lake assessment to comply with the 1972 Amendments to the Federal Water Pollution Control Act. Low altitude aerial photography may be useful in providing information on algal type and quantity. Photography must be calibrated properly to remove sources of error including airlight, surface reflectance and scene-to-scene illumination differences. A 550-nm narrow wavelength band black and white photographic exposure provided a better correlation to algal biomass than either red or infrared photographic exposure. Of all the biomass parameters tested, depth-integrated chlorophyll a concentration correlated best to remote sensing data. Laboratory-measured reflectance of selected algae indicates that different taxonomic classes of algae may be discriminated on the basis of their reflectance spectra.

  16. Enhancement of TIMS images for photointerpretation

    NASA Technical Reports Server (NTRS)

    Gillespie, A. R.

    1986-01-01

    The Thermal Infrared Multispectral Scanner (TIMS) images consist of six channels of data acquired in bands between 8 and 12 microns, thus they contain information about both temperature and emittance. Scene temperatures are controlled by reflectivity of the surface, but also by its geometry with respect to the Sun, time of day, and other factors unrelated to composition. Emittance is dependent upon composition alone. Thus the photointerpreter may wish to enhance emittance information selectively. Because thermal emittances in real scenes vary but little, image data tend to be highly correlated along channels. Special image processing is required to make this information available for the photointerpreter. Processing includes noise removal, construction of model emittance images, and construction of false-color pictures enhanced by decorrelation techniques.
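    The decorrelation enhancement mentioned above can be sketched in code. This is a generic decorrelation stretch, a hedged illustration rather than the exact TIMS processing chain; the function name and test data are invented for the example. The idea: rotate the channels to principal components, equalize their variances, and rotate back so that subtle emittance differences gain contrast while band colors keep their meaning.

```python
import numpy as np

def decorrelation_stretch(img):
    """img: (H, W, C) float array of highly correlated channels."""
    h, w, c = img.shape
    x = img.reshape(-1, c)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # Whiten along principal axes, then rotate back into band space.
    stretch = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    y = (x - mean) @ stretch.T
    # Restore per-band scale and offset so the result stays displayable.
    return (y * x.std(axis=0) + mean).reshape(h, w, c)

# Three nearly identical channels (highly correlated, like thermal bands);
# after the stretch their correlation matrix is approximately the identity.
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 32, 1))
img = np.concatenate(
    [base + 0.05 * rng.normal(size=(32, 32, 1)) for _ in range(3)], axis=2)
out = decorrelation_stretch(img)
corr = np.corrcoef(out.reshape(-1, 3), rowvar=False)
print(np.allclose(corr, np.eye(3), atol=0.1))
```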

  17. Daylight coloring for monochrome infrared imagery

    NASA Astrophysics Data System (ADS)

    Gabura, James

    2015-05-01

    The effectiveness of infrared imagery in poor visibility situations is well established and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However there is a problem in that the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness-consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscure detail. We anticipate that this colorizing method will lead to a better user experience, faster reaction times and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture discerning feature by adding specialized filters for discriminating specific targets.

  18. Augmented reality and dynamic infrared thermography for perforator mapping in the anterolateral thigh.

    PubMed

    Cifuentes, Ignacio Javier; Dagnino, Bruno Leonardo; Salisbury, María Carolina; Perez, María Eliana; Ortega, Claudia; Maldonado, Daniela

    2018-05-01

    Dynamic infrared thermography (DIRT) has been used for the preoperative mapping of cutaneous perforators. This technique has shown a positive correlation with intraoperative findings. Our aim was to evaluate the accuracy of perforator mapping with DIRT and augmented reality using a portable projector. For this purpose, three volunteers had both of their anterolateral thighs assessed for the presence and location of cutaneous perforators using DIRT. The obtained image of these "hotspots" was projected back onto the thigh and the presence of Doppler signals within a 10-cm diameter from the midpoint between the lateral patella and the anterior superior iliac spine was assessed using a handheld Doppler device. Hotspots were identified in all six anterolateral thighs and were successfully projected onto the skin. The median number of perforators identified within the area of interest was 5 (range, 3-8) and the median time needed to identify them was 3.5 minutes (range, 3.3-4.0 minutes). Every hotspot was correlated to a Doppler sound signal. In conclusion, augmented reality can be a reliable method for transferring the location of perforators identified by DIRT onto the thigh, facilitating its assessment and yielding a reliable map of potential perforators for flap raising.

  19. System for training and evaluation of security personnel in use of firearms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, H.F.

    This patent describes an interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has drawn an infrared laser handgun from his holster, fired his laser handgun, taken cover, advanced or retreated from the adversary on the screen, and when the adversary has fired his gun at the trainee.

  20. System for training and evaluation of security personnel in use of firearms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, H.F.

    An interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has drawn an infrared laser handgun from his holster, fired his laser handgun, taken cover, advanced or retreated from the adversary on the screen, and when the adversary has fired his gun at the trainee. 8 figs.

  1. System for training and evaluation of security personnel in use of firearms

    DOEpatents

    Hall, Howard F.

    1990-01-01

    An interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has (1) drawn an infrared laser handgun from his holster, (2) fired his laser handgun, (3) taken cover, (4) advanced or retreated from the adversary on the screen, and (5) when the adversary has fired his gun at the trainee.

  2. Looking at Images with Human Figures: Comparison between Autistic and Normal Children.

    ERIC Educational Resources Information Center

    van der Geest, J. N.; Kemner, C.; Camfferman, G.; Verbaten, M. N.; van Engeland, H.

    2002-01-01

    In this study, the looking behavior of 16 autistic and 14 non-autistic children toward cartoon-like scenes that included a human figure was measured quantitatively using an infrared eye-tracking device. Fixation behavior of autistic children was similar to that of their age- and IQ-matched normal peers. Results do not support the idea that autistic…

  3. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require; SBNUC algorithms have therefore become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Meanwhile, due to their complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based solely on an FPGA, has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe and ripple non-uniformity.
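    The temporal high-pass idea behind this family of SBNUC algorithms can be sketched compactly. This is a hedged, generic illustration, not the paper's THP&GM method (the grayscale-mapping step is omitted and all names and constants are invented): each pixel's slowly varying offset is tracked with a per-pixel IIR low-pass filter and subtracted, so fixed-pattern offsets decay over frames while moving scene content passes through.

```python
import numpy as np

def thp_nuc(frames, m=32):
    """frames: iterable of (H, W) arrays; m: low-pass time constant in frames."""
    lowpass = None
    out = []
    for f in frames:
        f = f.astype(np.float64)
        if lowpass is None:
            lowpass = np.zeros_like(f)
        # Per-pixel running mean (IIR): s_n = s_{n-1} + (f_n - s_{n-1}) / m
        lowpass += (f - lowpass) / m
        # High-pass output; add the mean level back to preserve absolute scale.
        out.append(f - lowpass + lowpass.mean())
    return out

# Static scene plus a fixed-pattern offset: the residual FPN (spatial
# standard deviation of the corrected frame) shrinks as frames accumulate.
rng = np.random.default_rng(1)
offset = rng.normal(0, 5, size=(16, 16))       # fixed-pattern offset
frames = [100.0 + offset for _ in range(200)]  # constant scene + FPN
corrected = thp_nuc(frames)
print(corrected[0].std() > corrected[-1].std())
```

One design caveat this sketch shares with real temporal high-pass NUC: anything stationary in the scene is also filtered out over time, which is why practical methods add safeguards (such as the adaptive thresholds and grayscale mapping discussed in these abstracts) against ghosting.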

  4. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based non-uniformity algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass non-uniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared data in comparison to several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also shows a clear advantage over current methods in detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.

  5. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  6. Beyond the cockpit: The visual world as a flight instrument

    NASA Technical Reports Server (NTRS)

    Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.

    1992-01-01

    The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).

  7. The 20-Foot View

    ERIC Educational Resources Information Center

    Bull, Glen; Garofalo, Joe

    2006-01-01

    In higher education, the number of computer projectors in classrooms has doubled every year for the past five years. A similar trend in K-12 education is occurring now that capable classroom projectors have become available for less than $1,000. At the same time, large-screen displays are becoming common in society; a trend being accelerated by a…

  8. Launcher and Transparent Air Table for Use with Overhead Projector

    ERIC Educational Resources Information Center

    Carr, H. Y.; and others

    1969-01-01

    Describes an apparatus designed for quantitative demonstrations of collision experiments. The apparatus consists of a transparent air table and a launching device for projecting two objects simultaneously. It may be used with an overhead projector. The apparatus won third prize in Demonstration Lecture Apparatus in the A.A.P.T. Apparatus…

  9. Perturbative description of the fermionic projector: Normalization, causality, and Furry's theorem

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Tolksdorf, Jürgen

    2014-05-01

    The causal perturbation expansion of the fermionic projector is performed with a contour integral method. Different normalization conditions are analyzed. It is shown that the corresponding light-cone expansions are causal in the sense that they only involve bounded line integrals. For the resulting loop diagrams we prove a generalized Furry theorem.

  10. Eeny, Meeny, Miny, Mo...

    ERIC Educational Resources Information Center

    Montgomery, Malcolm

    2008-01-01

    As technology and teaching requirements continue to evolve, one would think the choice of which data projector to buy would be easier because there now are more products with more capabilities. Yet just the opposite is true: The sheer number of projectors and myriad combinations of available features can be overwhelming, making it really tough to…

  11. Dissolving Carboxylic Acids and Primary Amines on the Overhead Projector

    ERIC Educational Resources Information Center

    Solomon, Sally D.; Rutkowsky, Susan A.

    2010-01-01

    Liquid carboxylic acids (or primary amines) with limited solubility in water are dissolved by addition of aqueous sodium hydroxide (or hydrochloric acid) on the stage of an overhead projector using simple glassware and very small quantities of chemicals. This effective and colorful demonstration can be used to accompany discussions of the…

  12. Geography via the Overhead Projector: Do It This Way, 7.

    ERIC Educational Resources Information Center

    Best, Thomas D.

    This booklet is designed to assist teachers in their use of overhead projectors when teaching geography. With the overhead technique, relationships among patterns can be suggested bit by bit on inexpensive, easily prepared overlays that are projected to sizes appropriate for a particular instructional situation. A general discussion of the…

  13. Creating a Smart Classroom

    ERIC Educational Resources Information Center

    Domermuth, David

    2005-01-01

    This article provides a description of an affordable, smart classroom built for the Technology Department at Appalachian State university. The system consists of three basic components: a home theater combo, a tablet PC, and a digital projector, costing a total of $7,300, or $8,800 if a podium, screen, and projector mount are purchased. The…

  14. A Volumetric Flask as a Projector

    ERIC Educational Resources Information Center

    Limsuwan, P.; Asanithi, P.; Thongpool, V.; Piriyawong, V.; Limsuwan, S.

    2012-01-01

    A lens based on liquid in the confined volume of a volumetric flask was presented as a potential projector to observe microscopic floating organisms or materials. In this experiment, a mosquito larva from a natural pond was selected as a demonstration sample. By shining a light beam from a laser pointer of any visible wavelength through the…

  15. MULTISCALE THERMAL-INFRARED MEASUREMENTS OF THE MAUNA LOA CALDERA, HAWAII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. BALICK; A. GILLESPIE; ET AL

    2001-03-01

    Until recently, most thermal infrared measurements of natural scenes have been made at disparate scales, typically 10^-3 to 10^-2 m (spectra) and 10^2 to 10^3 m (satellite images), with occasional airborne images (10^1 m) filling the gap. Temperature and emissivity fields are spatially heterogeneous over a similar range of scales, depending on scene composition. A common problem for the land surface, therefore, has been relating field spectral and temperature measurements to satellite data, yet in many cases this is necessary if satellite data are to be interpreted to yield meaningful information about the land surface. Recently, three new satellites with thermal imaging capability at the 10^1 to 10^2 m scale have been launched: MTI, TERRA, and Landsat 7. MTI acquires multispectral images in the mid-infrared (3-5 µm) and longwave infrared (8-10 µm) with 20 m resolution. ASTER and MODIS aboard TERRA acquire multispectral longwave images at 90 m and 500-1000 m, respectively, and MODIS also acquires multispectral mid-infrared images. Landsat 7 acquires broadband longwave images at 60 m. As part of an experiment to validate the temperature and thermal emissivity values calculated from MTI and ASTER images, we have targeted the summit region of Mauna Loa for field characterization and near-simultaneous satellite imaging on both daytime and nighttime overpasses, and compare the results to previously acquired 10^-1 m airborne images, ground-level multispectral FLIR images, and the field spectra. Mauna Loa was chosen in large part because the 4 x 6 km summit caldera, flooded with fresh basalt in 1984, appears to be spectrally homogeneous at scales between 10^-1 and 10^2 m, facilitating the comparison of sensed temperature. The validation results suggest that, with careful atmospheric compensation, it is possible to match ground measurements with measurements from space, and to use the Mauna Loa validation site for cross-comparison of thermal infrared sensors and temperature/emissivity extraction algorithms.

  16. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    PubMed

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
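    The diagonal-averaging and subspace-projector steps described above can be sketched numerically. This is a hedged illustration of the general idea only (the maximum-entropy extrapolation is omitted, and all array sizes, names, and the test scenario are invented): averaging along subdiagonals imposes a Toeplitz structure on the sample covariance, and the dominant eigenvectors of the result form a signal-subspace projector.

```python
import numpy as np

def toeplitz_average(R):
    """Average the subdiagonals of a Hermitian sample covariance R."""
    n = R.shape[0]
    r = np.array([np.diag(R, k).mean() for k in range(n)])
    T = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            T[i, j] = r[j - i] if j >= i else np.conj(r[i - j])
    return T

def signal_projector(R, d):
    """Projector onto the span of the d dominant eigenvectors of R."""
    evals, evecs = np.linalg.eigh(R)
    U = evecs[:, np.argsort(evals)[::-1][:d]]
    return U @ U.conj().T

# One far-field plane wave on an 8-element half-wavelength array,
# with only a few snapshots (the regime the estimator targets).
n, snaps = 8, 10
rng = np.random.default_rng(2)
steer = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))
X = np.outer(steer, rng.normal(size=snaps)) + 0.1 * (
    rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
R = X @ X.conj().T / snaps
P = signal_projector(toeplitz_average(R), d=1)

# The projector should pass the true steering vector nearly unchanged.
print(np.linalg.norm(P @ steer) / np.linalg.norm(steer) > 0.9)
```

The Toeplitz constraint is natural here because the ideal covariance of far-field plane waves plus isotropic noise on a uniform line array is itself Toeplitz, so the averaging moves the noisy sample covariance toward the clairvoyant structure.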

  17. Matched-filtering generalized phase contrast using LCoS pico-projectors for beam-forming.

    PubMed

    Bañas, Andrew; Palima, Darwin; Glückstad, Jesper

    2012-04-23

    We report on a new beam-forming system for generating high intensity programmable optical spikes using so-called matched-filtering Generalized Phase Contrast (mGPC) applying two consumer handheld pico-projectors. Such a system presents a low-cost alternative for optical trapping and manipulation, optical lattices and other beam-shaping applications usually implemented with high-end spatial light modulators. Portable pico-projectors based on liquid crystal on silicon (LCoS) devices are used as binary phase-only spatial light modulators by carefully setting the appropriate polarization of the laser illumination. The devices are subsequently placed into the object and Fourier plane of a standard 4f-setup according to the mGPC spatial filtering configuration. Having a reconfigurable spatial phase filter, instead of a fixed and fabricated one, allows the beam shaper to adapt to different input phase patterns suited for different requirements. Despite imperfections in these consumer pico-projectors, the mGPC approach tolerates phase aberrations that would have otherwise been hard to overcome by standard phase projection. © 2012 Optical Society of America

  18. Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
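As a loose numerical aside (not the patent's pre-calibrated epipolar matching procedure), the basic principle that a projected line's apparent displacement encodes depth can be illustrated with the parallel-geometry triangulation formula Z = fB/d; the function name and values below are hypothetical.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth by similar triangles for an idealized parallel
    camera/projector geometry: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A line shifted by 40 px with an 800 px focal length and 0.2 m baseline
z = depth_from_disparity(focal_px=800.0, baseline_m=0.2, disparity_px=40.0)
```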

  19. Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.

  20. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide range of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty of obtaining a set of 2D–3D point correspondences to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists of projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods in both usability and accuracy. PMID:24084121

  1. Sensor performance analysis

    NASA Technical Reports Server (NTRS)

    Montgomery, H. E.; Ostrow, H.; Ressler, G. M.

    1990-01-01

    The theory and equations required to design and analyze the performance of electro-optical sensor systems operating from the visible through the thermal infrared spectral regions are developed. Methods are given to compute essential optical and detector parameters, signal-to-noise ratio, MTF, and figures of merit such as NEΔρ and NEΔT. A set of atmospheric tables is provided to determine scene radiance in the visible spectral region; the Planck function is used to determine radiance in the infrared. The equations developed were incorporated in a spreadsheet so that a wide variety of sensor studies can be conducted rapidly and efficiently.
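The infrared radiance calculation referred to above is an evaluation of the Planck function; here is a self-contained sketch in SI units (the function name is ours, and this is not the report's spreadsheet implementation).

```python
import math

H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light [m/s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def planck_spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance [W m^-2 sr^-1 per m of wavelength]."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

# A 300 K scene at 10 um, a typical thermal-infrared operating point
L = planck_spectral_radiance(10e-6, 300.0)
```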

  2. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still common for the projected colors to differ significantly from the intended ones. The study presented in this paper analyzes the color image quality of a large set of projection display devices, particularly the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four conditions: dark and light room, with and without an ICC profile. To assess the importance of the illumination conditions in a room, and the degree of improvement gained by using an ICC profile, the measurement results were processed and analyzed; Eye-One Beamer from GretagMacbeth was used to make the profiles. Color image quality was evaluated both visually and by color difference calculations. The analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally had smaller color gamuts than the LCD projectors, and the gamuts of older projectors were significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and low in contrast. When a profile is used, the color differences between the projectors become smaller and the colors appear more correct; for one device, the average ΔE*ab color difference relative to a white reference was reduced from 22 to 11, and for another from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.
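The ΔE*ab values quoted above are CIE 1976 color differences, computed as the Euclidean distance in L*a*b* space; a small illustration follows (the sample triples are invented, not measurements from the study).

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical projected patch versus its intended color
measured = (52.0, 18.0, -30.0)
intended = (50.0, 10.0, -25.0)
dE = delta_e_ab(measured, intended)   # sqrt(2^2 + 8^2 + 5^2)
```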

  3. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

    Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still common for the projected colors to differ significantly from the intended ones. The study presented in this paper analyzes the color image quality of a large set of projection display devices, particularly the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four conditions: dark and light room, with and without an ICC profile. To assess the importance of the illumination conditions in a room, and the degree of improvement gained by using an ICC profile, the measurement results were processed and analyzed; Eye-One Beamer from GretagMacbeth was used to make the profiles. Color image quality was evaluated both visually and by color difference calculations. The analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally had smaller color gamuts than the LCD projectors, and the gamuts of older projectors were significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and low in contrast. When a profile is used, the color differences between the projectors become smaller and the colors appear more correct; for one device, the average ΔE*ab color difference relative to a white reference was reduced from 22 to 11, and for another from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.

  4. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in infrared images, prevalent edge detection operators are often suited only to certain scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when a convolution-based receptive field architecture is assumed. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the neighborhood of each rough edge pixel are adopted to locate the rough edges at the sub-pixel level. Numerical simulations show that this method can locate target edges accurately and robustly.
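A minimal Difference-of-Gaussians filter of the kind this abstract builds on can be sketched as follows; the tremor modeling and the sub-pixel interpolation stage are omitted, and all parameter values are illustrative, not the authors'.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def dog_filter(image, sigma_c=1.0, sigma_s=1.6):
    """Difference-of-Gaussians: narrow (center) minus wide (surround),
    a common model of an ON-center receptive field."""
    r = int(3 * sigma_s)
    kc = gaussian_kernel1d(sigma_c, r)
    ks = gaussian_kernel1d(sigma_s, r)
    def blur(img, k):
        # separable 2-D blur via two 1-D convolutions
        out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
        return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)
    return blur(image, kc) - blur(image, ks)

# A vertical step edge produces a response concentrated at the boundary
img = np.zeros((16, 16))
img[:, 8:] = 1.0
resp = dog_filter(img)
```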

  5. Development of IR imaging system simulator

    NASA Astrophysics Data System (ADS)

    Xiang, Xinglang; He, Guojing; Dong, Weike; Dong, Lu

    2017-02-01

    To overcome the disadvantages of traditional semi-physical simulation and injection simulation equipment in evaluating the performance of an infrared imaging system (IRIS), a low-cost and reconfigurable IRIS simulator, which can simulate the realistic physical process of infrared imaging, is proposed to test and evaluate IRIS performance. According to the theoretical simulation framework and theoretical models of the IRIS, the architecture of the IRIS simulator is constructed. The 3D scenes are generated and the infrared atmospheric transmission effects are simulated in real time on the computer using OGRE technology. The physical effects of the IRIS are classified as the signal response characteristic, modulation transfer characteristic, and noise characteristic, and they are simulated in real time on a single-board signal processing platform built around an FPGA core processor using a high-speed parallel computation method.

  6. On the Duality of Forward and Inverse Light Transport.

    PubMed

    Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi

    2011-10-01

    Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
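The claim that inversion can rely on matrix-vector multiplications alone can be illustrated with a truncated Neumann series solve for (I - A)^{-1} b. This is a generic numerical sketch under the assumption that the bounce operator A has spectral radius below 1; it is not the paper's specific algorithms.

```python
import numpy as np

def neumann_inverse_apply(A, b, n_terms=50):
    """Approximate (I - A)^{-1} b with the truncated Neumann series
    b + A b + A^2 b + ..., using only matrix-vector products.
    Converges when the spectral radius of A is below 1."""
    x = b.copy()
    term = b.copy()
    for _ in range(n_terms - 1):
        term = A @ term
        x += term
    return x

rng = np.random.default_rng(1)
A = 0.1 * rng.random((6, 6))       # a weak "interreflection" operator
b = rng.random(6)
x = neumann_inverse_apply(A, b)
exact = np.linalg.solve(np.eye(6) - A, b)
```

Each added series term plays the role of one more bounce; the attraction is that A is never formed, factored, or inverted explicitly.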

  7. Properties of the Visible Light Phototaxis and UV Avoidance Behaviors in the Larval Zebrafish.

    PubMed

    Guggiana-Nilo, Drago A; Engert, Florian

    2016-01-01

    For many organisms, color is an essential source of information from visual scenes. The larval zebrafish has the potential to be a model for the study of this topic, given its tetrachromatic retina and high dependence on vision. In this study we took a step toward understanding how the larval zebrafish might use color sensing. To this end, we used a projector-based paradigm to force a choice of a color stimulus at every turn of the larva. The stimuli used spanned most of the larval spectral range, including activation of its Ultraviolet (UV) cone, which has not been described behaviorally before. We found that zebrafish larvae swim toward visible wavelengths (>400 nm) when choosing between them and darkness, as has been reported with white light. However, when presented with UV light and darkness zebrafish show an intensity dependent avoidance behavior. This UV avoidance does not interact cooperatively with phototaxis toward longer wavelengths, but can compete against it in an intensity dependent manner. Finally, we show that the avoidance behavior depends on the presence of eyes with functional UV cones. These findings open future avenues for studying the neural circuits that underlie color sensing in the larval zebrafish.

  8. Properties of the Visible Light Phototaxis and UV Avoidance Behaviors in the Larval Zebrafish

    PubMed Central

    Guggiana-Nilo, Drago A.; Engert, Florian

    2016-01-01

    For many organisms, color is an essential source of information from visual scenes. The larval zebrafish has the potential to be a model for the study of this topic, given its tetrachromatic retina and high dependence on vision. In this study we took a step toward understanding how the larval zebrafish might use color sensing. To this end, we used a projector-based paradigm to force a choice of a color stimulus at every turn of the larva. The stimuli used spanned most of the larval spectral range, including activation of its Ultraviolet (UV) cone, which has not been described behaviorally before. We found that zebrafish larvae swim toward visible wavelengths (>400 nm) when choosing between them and darkness, as has been reported with white light. However, when presented with UV light and darkness zebrafish show an intensity dependent avoidance behavior. This UV avoidance does not interact cooperatively with phototaxis toward longer wavelengths, but can compete against it in an intensity dependent manner. Finally, we show that the avoidance behavior depends on the presence of eyes with functional UV cones. These findings open future avenues for studying the neural circuits that underlie color sensing in the larval zebrafish. PMID:27594828

  9. Using the Overhead Projector as a Light Source for Physics Demonstrations

    ERIC Educational Resources Information Center

    Mak, Se-Yuen

    2006-01-01

    This article illustrates how the overhead projector can be used as a light source in some peculiar ways for physics demonstrations. Five examples are included: (1) Study of chromatic aberration; (2) Making giant Newton's rings; (3) Comparison of the rate of heat absorption by different surfaces; (4) Demonstration of greenhouse effect; and (5)…

  10. The Use of the Overhead Projector in Teaching Composition.

    ERIC Educational Resources Information Center

    Bissex, Henry

    The overhead projector, used as a controllable blackboard or bulletin board in the teaching of writing, extends the range of teaching techniques so that an instructor may (1) prepare, in advance, handwritten sheets of film--test questions, pupils' sentences, quotations, short poems--to be shown in any order or form; (2) use pictures, graphics, or…

  11. A closed-loop control-loading system

    NASA Technical Reports Server (NTRS)

    Ashworth, B. R.; Parrish, R. V.

    1979-01-01

    The Langley Differential Maneuvering Simulator (DMS) realistically simulates two aircraft operating in a differential mode. It consists of two identical fixed-base cockpits and dome projection systems. Each projection system consists of a sky/Earth projector and a target-image generator and projector. Although the programmable control forces are a small part of the overall system, they play a large role in providing the pilot with kinesthetic cues.

  12. Design of a Computer-Controlled, Random-Access Slide Projector Interface. Final Report (April 1974 - November 1974).

    ERIC Educational Resources Information Center

    Kirby, Paul J.; And Others

    The design, development, test, and evaluation of an electronic hardware device interfacing a commercially available slide projector with a plasma panel computer terminal is reported. The interface device allows an instructional computer program to select slides for viewing based upon the lesson student situation parameters of the instructional…

  13. Overhead Projector Spectrum of Polymethine Dye: A Physical Chemistry Demonstration.

    ERIC Educational Resources Information Center

    Solomon, Sally; Hur, Chinhyu

    1995-01-01

    Encourages the incorporation into lecture of live experiments that can be predicted or interpreted with abstract models. A demonstration is described where the position of the predominant peak of 1,1'-diethyl-4,4'-cyanine iodide is measured in class using an overhead projector spectrometer, then predicted using the model of a particle in a…

  14. E-Learning Technologies and Adult Education in Nigeria

    ERIC Educational Resources Information Center

    Oguzor, Nkasiobi Silas

    2011-01-01

    The internet has proved to be one of the greatest learning resources available in the 21st Century. Modern education is becoming progressively more dynamic. Internet has helped man to see the other part of the world at the click of a mouse. Various forms of instructional technologies such as the overhead projector, opaque projector, filmstrip and…

  15. Tiny Devices Project Sharp, Colorful Images

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  16. Parallel and patterned optogenetic manipulation of neurons in the brain slice using a DMD-based projector.

    PubMed

    Sakai, Seiichiro; Ueno, Kenichi; Ishizuka, Toru; Yawo, Hiromu

    2013-01-01

    Optical manipulation technologies greatly advanced the understanding of the neuronal network and its dysfunctions. To achieve patterned and parallel optical switching, we developed a microscopic illumination system using a commercial DMD-based projector and a software program. The spatiotemporal patterning of the system was evaluated using acute slices of the hippocampus. The neural activity was optically manipulated, positively by the combination of channelrhodopsin-2 (ChR2) and blue light, and negatively by the combination of archaerhodopsin-T (ArchT) and green light. It is suggested that our projector-managing optical system (PMOS) would effectively facilitate the optogenetic analyses of neurons and their circuits. Copyright © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  17. Design and test of a simulation system for autonomous optic-navigated planetary landing

    NASA Astrophysics Data System (ADS)

    Cai, Sheng; Yin, Yanhe; Liu, Yanjun; He, Fengyun

    2018-02-01

    In this paper, a simulation system based on a commercial projector is proposed to test optical navigation algorithms for autonomous planetary landing in laboratory scenarios. The design work on optics, mechanics, and synchronization control is carried out, and the whole simulation system is set up and tested. Through calibration of the system, two main problems are resolved: synchronization between the projector and the CCD, and pixel-level shifting caused by the low repeatability of the DMD used in the projector. The experimental results show that the RMS errors of the pitch, yaw, and roll angles are 0.78', 0.48', and 2.95' compared with the theoretical calculation, which fulfills the requirement of experimental simulation for planetary landing in the laboratory.

  18. Salient contour extraction from complex natural scene in night vision image

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

    The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive-field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The kernel idea is that multi-feature analysis can recognize the inhomogeneity in modulatory coverage more accurately, and that a center and surround whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast-weighted inhibition model is presented to suppress background and lower mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to enhance contour response, connect discontinuous contours, and further eliminate randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract contours of targets at multiple sizes. This work provides a series of biologically motivated, high-performance computational visual models for contour detection in cluttered night vision scenes.

  19. An efficient framework for modeling clouds from Landsat8 images

    NASA Astrophysics Data System (ADS)

    Yuan, Chunqiang; Guo, Jing

    2015-03-01

    Clouds play an important role in creating realistic outdoor scenes for video game and flight simulation applications. Classic methods have been proposed for cumulus cloud modeling, but they are not flexible for modeling large cloud scenes containing hundreds of clouds, because the user must repeatedly model each cloud and adjust its various properties. This paper presents a meteorologically based method to reconstruct cumulus clouds from high-resolution Landsat8 satellite images. From these input satellite images, the clouds are first segmented from the background. Then, the cloud top surface is estimated from the temperature in the infrared image. After that, under a mild assumption of a flat base for each cumulus cloud, the base height of each cloud is computed by averaging the top heights of pixels on the cloud edge. The extinction is then generated from the visible image. Finally, we enrich the initial cloud shapes using a fractal method and represent the recovered clouds as a particle system. The experimental results demonstrate that our method can yield realistic cloud scenes resembling those in the satellite images.

  20. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
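The core algebraic idea, namely that scene terms cancel between a pair of translated frames and leave only bias differences, can be shown in a much-simplified one-dimensional form. This toy assumes an exactly known one-row shift and is not the published algorithm, which handles arbitrary 1-D and 2-D translational motion.

```python
import numpy as np

def estimate_bias(y1, y2):
    """Chain pairwise differences to recover fixed-pattern bias up to a
    per-column constant.  Model: y1[i] = s[i] + b[i] and
    y2[i] = s[i + 1] + b[i] (the scene moved down one row between frames),
    so y1[i + 1] - y2[i] = b[i + 1] - b[i] and the scene cancels."""
    b = np.zeros_like(y1)
    for i in range(y1.shape[0] - 1):
        b[i + 1] = b[i] + (y1[i + 1] - y2[i])
    return b

# Synthetic check with a known bias pattern
rng = np.random.default_rng(2)
scene = rng.random((9, 5))
bias = rng.random((8, 5))
y1 = scene[:-1] + bias      # frame 1: pixel row i views scene row i
y2 = scene[1:] + bias       # frame 2: pixel row i views scene row i + 1
b_hat = estimate_bias(y1, y2)
```

Because only differences are observable, `b_hat` matches the true bias up to the (unknown) bias of the first row, which is the kind of offset ambiguity a compensator-based scheme tolerates.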

  1. Seasat views North America, the Caribbean, and Western Europe with imaging radar

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Blom, R. G.; Bryan, M. L.; Daily, M.; Dixon, T. H.; Elachi, C.; Xenos, E. C.

    1980-01-01

    Forty-one digitally correlated Seasat synthetic-aperture radar images of land areas in North America, the Caribbean, and Western Europe are presented to demonstrate this microwave orbital imagery. The characteristics of the radar images, the types of information that can be extracted from them, and certain of their inherent distortions are briefly described. Each atlas scene covers an area of 90 x 90 kilometers, with the exception of the one covering the Nation's Capital. The scenes are grouped according to salient features of geology, hydrology and water resources, urban land cover, or agriculture. Each radar image is accompanied by a corresponding image in the optical or near-infrared range, or by a simple sketch map to illustrate features of interest. Characteristics of the Seasat radar imaging system are outlined.

  2. Proceedings of the Eleventh International Symposium on Remote Sensing of Environment, volume 2. [application and processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Application and processing of remotely sensed data are discussed. Areas of application include: pollution monitoring, water quality, land use, marine resources, ocean surface properties, and agriculture. Image processing and scene analysis are described along with automated photointerpretation and classification techniques. Data from infrared and multispectral band scanners onboard LANDSAT satellites are emphasized.

  3. The 27-28 October 1986 FIRE IFO Cirrus Case Study: Cirrus Parameter Relationships Derived from Satellite and Lidar Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Young, David F.; Sassen, Kenneth; Alvarez, Joseph M.; Grund, Christian J.

    1990-01-01

    Cirrus cloud radiative and physical characteristics are determined using a combination of ground-based, aircraft, and satellite measurements taken as part of the FIRE Cirrus Intensive Field Observations (IFO) during October and November 1986. Lidar backscatter data are used with rawinsonde data to define cloud base, center, and top heights and the corresponding temperatures. Coincident GOES 4-km visible (0.65 µm) and 8-km infrared window (11.5 µm) radiances are analyzed to determine cloud emittances and reflectances. Infrared optical depth is computed from the emittance results. Visible optical depth is derived from reflectance using a theoretical ice crystal scattering model and an empirical bidirectional reflectance model. No clouds with visible optical depths greater than 5 or infrared optical depths less than 0.1 were used in the analysis. Average cloud thickness ranged from 0.5 km to 8.0 km for the 71 scenes. Mean vertical beam emittances derived from cloud-center temperatures were 0.62 for all scenes compared to 0.33 for the case study (27-28 October), reflecting the thinner clouds observed for the latter scenes. Relationships between cloud emittance, extinction coefficients, and temperature for the case study are very similar to those derived from earlier surface-based studies. The thicker clouds seen during the other IFO days yield different results. Emittances derived using cloud-top temperature were ratioed to those determined from cloud-center temperature. A nearly linear relationship between these ratios and cloud-center temperature holds promise for determining actual cloud-top temperatures and cloud thicknesses from visible and infrared radiance pairs. The mean ratio of the visible scattering optical depth to the infrared absorption optical depth was 2.13 for these data. This scattering efficiency ratio shows a significant dependence on cloud temperature. Values of mean scattering efficiency as high as 2.6 suggest the presence of small ice particles at temperatures below 230 K. The parameterization of visible reflectance in terms of cloud optical depth and clear-sky reflectance shows promise as a simplified method for interpreting visible satellite data reflected from cirrus clouds. Large uncertainties in the optical parameters due to cloud reflectance anisotropy and shading were found by analyzing data for various solar zenith angles and for simultaneous AVHRR data. Inhomogeneities in the cloud fields result in uneven cloud shading that apparently causes the occurrence of anomalously dark, cloudy pixels in the GOES data. These shading effects complicate the interpretation of the satellite data. The results highlight the need for additional study of cirrus cloud scattering processes and remote sensing techniques.
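The emittance-to-optical-depth step mentioned above follows Beer's law for a non-scattering emitting layer, eps = 1 - exp(-tau); here is a quick numerical sketch using the study's mean values (the second line simply applies the reported 2.13 scattering-to-absorption ratio, and this is our illustration, not the authors' retrieval code).

```python
import math

def ir_optical_depth(emittance):
    """Vertical-beam infrared absorption optical depth from cloud
    emittance: eps = 1 - exp(-tau)  =>  tau = -ln(1 - eps)."""
    return -math.log(1.0 - emittance)

tau_ir = ir_optical_depth(0.62)   # mean emittance over all scenes
tau_vis = 2.13 * tau_ir           # mean visible-to-IR optical depth ratio
```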

  4. The 27-28 October 1986 FIRE IFO Cirrus Case Study: Cirrus Parameter Relationships Derived from Satellite and Lidar Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Young, David F.; Sassen, Kenneth; Alvarez, Joseph M.; Grund, Christian J.

    1996-01-01

    Cirrus cloud radiative and physical characteristics are determined using a combination of ground based, aircraft, and satellite measurements taken as part of the First ISCCP Region Experiment (FIRE) cirrus intensive field observations (IFO) during October and November 1986. Lidar backscatter data are used with rawinsonde data to define cloud base, center and top heights and the corresponding temperatures. Coincident GOES-4 4-km visible (0.65 micrometer) and 8-km infrared window (11.5 micrometer) radiances are analyzed to determine cloud emittances and reflectances. Infrared optical depth is computed from the emittance results. Visible optical depth is derived from reflectance using a theoretical ice crystal scattering model and an empirical bidirectional reflectance model. No clouds with visible optical depths greater than 5 or infrared optical depths less than 0.1 were used in the analysis. Average cloud thickness ranged from 0.5 km to 8.0 km for the 71 scenes. Mean vertical beam emittances derived from cloud-center temperatures were 062 for all scenes compared to 0.33 for the case study (27-28 October) reflecting the thinner clouds observed for the latter scenes. Relationships between cloud emittance , extinction coefficients, and temperature for the case study are very similar to those derived from earlier surface-based studies. The thicker clouds seen during the other IFO days yield different results. Emittances derived using cloud-top temperature wer ratioed to those determined from cloud-center temperature. A nearly linear relationship between these ratios and cloud-center temperature holds promise for determining actual cloud-top temperature and cloud thickness from visible and infrared radiance pairs. The mean ratio of the visible scattering optical depth to the infrared absorption optical depth was 2.13 for these data. This scattering efficiency ratio shows a significant dependence on cloud temperature. 
Values of mean scattering efficiency as high as 2.6 suggest the presence of small ice particles at temperatures below 230 K. The parameterization of visible reflectance in terms of cloud optical depth and clear-sky reflectance shows promise as a simplified method for interpreting visible satellite data reflected from cirrus clouds. Large uncertainties in the optical parameters due to cloud reflectance anisotropy and shading were found by analyzing data for various solar zenith angles and for simultaneous Advanced Very High Resolution Radiometer (AVHRR) data. Inhomogeneities in the cloud fields result in uneven cloud shading that apparently causes the occurrence of anomalously dark cloud pixels in the GOES data. These shading effects complicate the interpretation of the satellite data. The results highlight the need for additional study of cirrus cloud scattering processes and remote sensing techniques.
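    The abstract computes infrared optical depth from the retrieved emittances. As a minimal sketch, assuming the standard Beer-law relation eps = 1 - exp(-tau/mu) for an absorbing cloud layer (the function name and the use of mu = 1 for vertical-beam emittance are illustrative assumptions, not taken from the paper):

```python
import math

def ir_optical_depth(emittance, mu=1.0):
    # Beam emittance of an absorbing cloud layer: eps = 1 - exp(-tau/mu),
    # so tau = -mu * ln(1 - eps); mu is the cosine of the view zenith
    # angle (mu = 1 for the vertical-beam emittances quoted above).
    if not 0.0 <= emittance < 1.0:
        raise ValueError("emittance must lie in [0, 1)")
    return -mu * math.log(1.0 - emittance)

# Mean vertical-beam emittance of 0.62 (all scenes), scaled by the
# reported mean visible-to-infrared optical depth ratio of 2.13:
tau_ir = ir_optical_depth(0.62)
tau_vis = 2.13 * tau_ir
```

    With the quoted mean emittance of 0.62 this gives an infrared absorption optical depth of about 0.97; multiplying by the reported ratio of 2.13 yields the corresponding visible scattering optical depth.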

  5. Tower testing of a 64W shortwave infrared supercontinuum laser for use as a hyperspectral imaging illuminator

    NASA Astrophysics Data System (ADS)

    Meola, Joseph; Absi, Anthony; Islam, Mohammed N.; Peterson, Lauren M.; Ke, Kevin; Freeman, Michael J.; Ifaraguerri, Agustin I.

    2014-06-01

    Hyperspectral imaging systems are currently used for numerous activities related to spectral identification of materials. These passive imaging systems rely on naturally reflected or emitted radiation as the source of the signal. Thermal infrared systems measure radiation emitted from objects in the scene and, as such, can operate both day and night. Visible through shortwave infrared systems, however, measure solar illumination reflected from objects, so their use is limited to daytime applications. Omni Sciences has produced high-power broadband shortwave infrared supercontinuum laser illuminators. A 64-watt breadboard system was recently packaged and tested at Wright-Patterson Air Force Base to gauge beam quality and to serve as a proof of concept for potential use as an illuminator for a hyperspectral receiver. The laser illuminator was placed in a tower and directed along a 1.4 km slant path to various target materials, with the reflected radiation measured by both a broadband camera and a hyperspectral imaging system to gauge performance.

  6. Maiden flight of the infrared sounder GLORIA

    NASA Astrophysics Data System (ADS)

    Friedl-Vallon, Felix; Gloria-Team

    2013-05-01

    The Gimballed Limb Radiance Imager of the Atmosphere (GLORIA) instrument is an imaging Fourier transform spectrometer that can operate on various high-altitude research aircraft and on stratospheric balloons. The instrument is a joint development of the Helmholtz Centers Jülich and Karlsruhe Institute of Technology. GLORIA flew for the first time in December 2011 on board the Russian Geophysica M55 research aircraft. Atmospheric measurements with GLORIA are possible in limb and nadir geometry. The scientific focus in limb sounding mode is on dynamics, the tropopause region, the TTL, and the polar UTLS. The nadir mode is tailored to processes in the troposphere such as biomass burning events and high-precision methane measurements. The combination of limb and nadir measurements provides good spatial resolution in both the troposphere and the lower stratosphere. In addition, GLORIA serves as a proof-of-concept instrument for the candidate ESA Earth Explorer mission PREMIER. The GLORIA spectrometer consists of a classical Michelson interferometer combined with an infrared camera. The spectral range of the first instrument version extends from 780 cm-1 to 1400 cm-1 with a spectral resolution of up to 0.075 cm-1. The high-speed HgCdTe focal plane array with 256×256 elements allows extremely fine spatial sampling in limb mode, down to 100 m in the vertical. The spectrometer is mounted in a gimballed frame that permits agility in elevation and azimuth, as well as image rotation. Scene acquisition and scene stabilisation are accomplished by a control system based on an inertial measurement unit. Limb scenes can be chosen between 45° and 132° relative to the flight direction of the aircraft, allowing tomographic analysis of sampled air volumes.

  7. [Authentication of Trace Material Evidence in Forensic Science Field with Infrared Microscopic Technique].

    PubMed

    Jiang, Zhi-quan; Hu, Ke-liang

    2016-03-01

    In the field of forensic science, conventional infrared spectral analysis often cannot meet detection requirements, because only a small amount of trace material evidence, with diverse shapes and complex compositions, can be extracted from a crime scene. Infrared microscopy combines Fourier-transform infrared spectroscopy with optical microscopy. It offers several advantages over conventional infrared spectroscopy, including high detection sensitivity, micro-area analysis, and nondestructive examination, and it has effectively solved the problem of authenticating trace material evidence in forensic science. Additionally, almost no external interference is introduced during infrared microscopic measurements, which satisfies the special requirement that trace material evidence be preserved for presentation in court. Real-case analyses performed at this experimental center illustrate in detail the advantages of infrared microscopy for the authentication of trace material evidence. In this paper, the vibrational features in the infrared spectra of material evidence, including paints, plastics, rubbers, fibers, drugs, and toxicants, are comparatively analyzed by means of infrared microscopy, in an attempt to provide powerful spectroscopic evidence for the qualitative diagnosis of various criminal and traffic accident cases. The experimental results clearly suggest that infrared microscopy offers unmatched advantages and has become an effective method for the authentication of trace material evidence in forensic science.

  8. We Have Met Our Past and Our Future: Thanks for the Walk down Memory Lane

    ERIC Educational Resources Information Center

    Wiseman, Robert C.

    2006-01-01

    In this article, the author takes the readers for a walk down memory lane on the use of teaching aids. He shares his experience of the good old days of Audio Visual--opaque projector, motion pictures/films, recorders, and overhead projector. Computers have arrived, and now people can make graphics, pictures, motion pictures, and many different…

  9. A Mobile Mixed-Reality Environment for Children's Storytelling Using a Handheld Projector and a Robot

    ERIC Educational Resources Information Center

    Sugimoto, Masanori

    2011-01-01

    This paper describes a system called GENTORO that uses a robot and a handheld projector for supporting children's storytelling activities. GENTORO differs from many existing systems in that children can make a robot play their own story in a physical space augmented by mixed-reality technologies. Pilot studies have been conducted to clarify the…

  10. Interactive Projector as an Interactive Teaching Tool in the Classroom: Evaluating Teaching Efficiency and Interactivity

    ERIC Educational Resources Information Center

    Liu, Li-Ying; Cheng, Meng-Tzu

    2015-01-01

    This study reports on a measurement that is used to investigate interactivity in the classrooms and examines the impact of integrating the interactive projector into middle school science classes on classroom interactivity and students' biology learning. A total of 126 7th grade Taiwanese students were involved in the study and quasi-experimental…

  11. Projector-Camera Systems for Immersive Training

    DTIC Science & Technology

    2006-01-01

    average to a sequence of 100 captured distortion-corrected images. The OpenCV library [OpenCV] was used for camera calibration. To correct for... rendering application [Treskunov, Pair, and Swartout, 2004]. It was transposed to take into account different matrix conventions between OpenCV and... Screen Imperfections. Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE. OpenCV: Open Source Computer Vision. [Available

  12. The Fermionic Projector, entanglement and the collapse of the wave function

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2011-07-01

    After a brief introduction to the fermionic projector approach, we review how entanglement and second quantized bosonic and fermionic fields can be described in this framework. The constructions are discussed with regard to decoherence phenomena and the measurement problem. We propose a mechanism leading to the collapse of the wave function in the quantum mechanical measurement process.

  13. Projection display technologies for the new millennium

    NASA Astrophysics Data System (ADS)

    Kahn, Frederic J.

    2000-04-01

    Although analog CRTs continue to enable most of the world's electronic projection displays, such as US consumer rear-projection televisions, discrete-pixel (digital) active matrix LCD and DLP reflective mirror array projectors have rapidly created large nonconsumer markets, primarily for business. Recent advances in the image quality, compactness, and cost effectiveness of digital projectors have the potential to revolutionize major consumer and entertainment markets as well. Digital penetration of the mainstream consumer projection TV market will begin in the year 2000. By 2005, digital projection HDTVs could take the major share of the consumer HDTV projection market. Digital projection is expected to dominate both the consumer HDTV and the cinema market by 2010, resulting in potential shipments for all projection markets exceeding 10 M units per year. Digital projection is improving at a rate 10X faster than analog CRT projectors and 5X faster than PDP flat panels. Continued rapid improvement of digital projection is expected due to its relative immaturity and the wide diversity of technological improvements being pursued. Key technology enablers are the imaging panels, light sources, and micro-optics. Market shares of single-panel projectors, MEMS panels, LCOS panels, and low-temperature p-Si TFT LCD panel variants are expected to increase.

  14. Spectral decomposition of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Gaddis, Lisa; Soderblom, Laurence; Kieffer, Hugh; Becker, Kris; Torson, Jim; Mullins, Kevin

    1993-01-01

    A set of techniques is presented that uses only information contained within a raw Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene to estimate and remove additive components such as multiple scattering and instrument dark current. Multiplicative components (instrument gain, topographic modulation of brightness, atmospheric transmission) can then be normalized, permitting enhancement, extraction, and identification of relative-reflectance information related to surface composition and mineralogy. The technique for deriving additive-component spectra from a raw AVIRIS scene is an adaptation of the 'regression intersection method' of Crippen. This method uses two surface units that are spatially extensive and located in rugged terrain. For a given wavelength pair, subtraction of the derived additive component from individual band values will remove topography in both regions in a band/band ratio image. Normalization of all spectra in the scene to the average scene spectrum then results in cancellation of multiplicative components and production of a relative-reflectance scene. The resulting AVIRIS product contains relative-reflectance features due to mineral absorption that depart from the average spectrum. These features commonly are extremely weak and difficult to recognize, but they can be enhanced by using two simple 3-D image-processing tools. The validity of these techniques is demonstrated by comparisons between relative-reflectance AVIRIS spectra and those derived by using JPL standard calibrations. The AVIRIS data used in this analysis were acquired over the Kelso Dunes area (34 deg 55' N, 115 deg 43' W) of the eastern Mojave Desert, CA (in 1987) and the Upheaval Dome area (38 deg 27' N, 109 deg 55' W) of Canyonlands National Park, UT (in 1991).
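    The topography-cancellation step of the ratio technique can be illustrated numerically. In the sketch below (hypothetical two-band data, not actual AVIRIS values), each band is modeled as reflectance times a topographic shading factor plus an additive component; subtracting the additive term before ratioing leaves a spatially flat band ratio:

```python
import numpy as np

# Hypothetical two-band scene: true reflectance modulated by topographic
# shading (multiplicative) plus an additive path-radiance/dark-current term.
rng = np.random.default_rng(0)
shading = rng.uniform(0.5, 1.0, size=(4, 4))   # topographic modulation
refl_a, refl_b = 0.30, 0.20                    # band reflectances
add_a, add_b = 5.0, 3.0                        # additive components
band_a = refl_a * shading + add_a
band_b = refl_b * shading + add_b

# Subtracting the derived additive component before ratioing cancels
# the multiplicative topography, as described in the text: the ratio
# image is a constant refl_a / refl_b everywhere.
ratio = (band_a - add_a) / (band_b - add_b)
```

    Without the subtraction, the additive terms would leave residual shading patterns in the ratio image; with it, only the reflectance ratio survives.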

  15. DLP technology: applications in optical networking

    NASA Astrophysics Data System (ADS)

    Yoder, Lars A.; Duncan, Walter M.; Koontz, Elisabeth M.; So, John; Bartlett, Terry A.; Lee, Benjamin L.; Sawyers, Bryce D.; Powell, Donald; Rancuret, Paul

    2001-11-01

    For the past five years, Digital Light Processing (DLP) technology from Texas Instruments has made significant inroads in the projection display market. With products encompassing the world's smallest data & video projectors, HDTVs, and digital cinema, DLP is an extremely flexible technology. At the heart of these display solutions is Texas Instruments Digital Micromirror Device (DMD), a semiconductor-based light switch array of thousands of individually addressable, tiltable, mirror-pixels. With success of the DMD as a spatial light modulator in the visible regime, the use of DLP technology under the constraints of coherent, infrared light for optical networking applications is being explored. As a coherent light modulator, the DMD device can be used in Dense Wavelength Division Multiplexed (DWDM) optical networks to dynamically manipulate and shape optical signals. This paper will present the fundamentals of using DLP with coherent wavefronts, discuss inherent advantages of the technology, and present several applications for DLP in dynamic optical networks.

  16. Pharmaceutical properties of two ethenzamide-gentisic acid cocrystal polymorphs: Drug release profiles, spectroscopic studies and theoretical calculations.

    PubMed

    Sokal, Agnieszka; Pindelska, Edyta; Szeleszczuk, Lukasz; Kolodziejski, Waclaw

    2017-04-30

    The aim of this study was to evaluate the stability and solubility of the polymorphic forms of the ethenzamide (ET) - gentisic acid (GA) cocrystals during standard technological processes leading to tablet formation, such as compression and excipient addition. In this work, two polymorphic forms of the pharmaceutical cocrystals (ETGA) were characterized by ¹³C and ¹⁵N solid-state nuclear magnetic resonance and Fourier-transform infrared spectroscopy. The spectroscopic studies were supported by gauge-including projector augmented wave (GIPAW) calculations of chemical shielding constants. The polymorphs of the cocrystals were easily identified and characterized on the basis of the solid-state spectroscopic studies. The behaviour of the ETGA cocrystals during direct compression and tabletting with excipient addition was tested. In order to choose the best tablet composition with suitable properties for the pharmaceutical industry, dissolution-profile studies of tablets containing the polymorphic forms of the cocrystals with selected excipients were carried out. Copyright © 2017. Published by Elsevier B.V.

  17. Fringe-shifting single-projector moiré topography application for cotyle implantate abrasion measurement

    NASA Astrophysics Data System (ADS)

    Rössler, Tomáš; Hrabovský, Miroslav; Pluháček, František

    2005-08-01

    The cotyle implant is abraded in the patient's body, and its shape changes. Information about the magnitude of the abrasion is contained in the resulting contour map of the implant. The locations and dimensions of the abraded areas can be computed from the deformation of the contours. The single-projector moiré topography method was used to determine the contour lines. A theoretical description of the method is given first, followed by the design of the experimental set-up. A light-grating projector was developed to produce the periodic structure on the measured surface. Fringe shifting was applied to increase the quantity of data. Finally, the digital processing applied to the moiré grating images is described, together with examples of processed images.
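    The abstract does not give the fringe-shifting arithmetic, but a common choice is the four-step algorithm with π/2 phase shifts, sketched here as an illustrative assumption rather than the authors' exact procedure:

```python
import math

def four_step_phase(i0, i1, i2, i3):
    # With intensities I_k = A + B*cos(phi + k*pi/2), k = 0..3:
    #   I3 - I1 = 2B*sin(phi)  and  I0 - I2 = 2B*cos(phi),
    # so the wrapped phase follows from a single arctangent,
    # independent of the background A and fringe contrast B.
    return math.atan2(i3 - i1, i0 - i2)

# Synthetic check at a single surface point.
phi_true, a, b = 0.7, 2.0, 1.0
frames = [a + b * math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)   # recovers phi_true (modulo 2*pi)
```

    Applied per pixel to four shifted grating images, this yields a wrapped phase map from which the contour (height) information is obtained after unwrapping.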

  18. Mutually unbiased projectors and duality between lines and bases in finite quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalaby, M.; Vourdas, A., E-mail: a.vourdas@bradford.ac.uk

    2013-10-15

    Quantum systems with variables in the ring Z(d) are considered, and the concepts of weak mutually unbiased bases and mutually unbiased projectors are discussed. The lines through the origin in the Z(d)×Z(d) phase space are classified into maximal lines (sets of d points) and sublines (sets of d_i points, where d_i|d). The sublines are intersections of maximal lines. It is shown that there exists a duality between the properties of lines (resp., sublines) and the properties of weak mutually unbiased bases (resp., mutually unbiased projectors). -- Highlights: •Lines in discrete phase space. •Bases in finite quantum systems. •Duality between bases and lines. •Weak mutually unbiased bases.
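    For orientation, the ordinary (strong) mutual-unbiasedness condition that the paper weakens can be checked numerically for the standard and Fourier bases: every squared overlap |⟨e_j|f_k⟩|² equals 1/d. This sketch illustrates that condition only; it does not construct the paper's weak mutually unbiased bases or sublines over Z(d):

```python
import numpy as np

d = 5
standard = np.eye(d, dtype=complex)                    # computational basis
fourier = np.array([[np.exp(2j * np.pi * j * k / d) / np.sqrt(d)
                     for k in range(d)] for j in range(d)])

# Rank-one projectors |f_j><f_j| onto the Fourier basis vectors;
# they resolve the identity, and every overlap with the standard
# basis has the flat value 1/d characteristic of unbiased bases.
projectors = [np.outer(fourier[j], fourier[j].conj()) for j in range(d)]
overlaps = np.abs(standard.conj() @ fourier.T) ** 2
```

    The flat-overlap property is what "unbiased" means here; the paper's weak version relaxes it so that it need hold only on sublines when d is composite.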

  19. Recursive algorithms for bias and gain nonuniformity correction in infrared videos.

    PubMed

    Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R

    2012-12-01

    Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN), also known as spatial nonuniformity, that degrades image quality. FPN remains a serious problem despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-squares and affine projection techniques that jointly compensate for both the bias and the gain of each image pixel, offering rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, producing recovered images with higher fidelity.
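    As a hedged, single-pixel illustration of the joint gain/bias idea (the paper's algorithms are scene-based and operate on whole video frames; the model y_n = g·x_n + b and all parameter values below are assumptions for the sketch), a standard recursive least-squares update can track the two parameters:

```python
import numpy as np

def rls_gain_bias(x_ref, y_obs, lam=0.99, delta=100.0):
    # Recursive least-squares estimate of one pixel's gain and bias.
    # Model: y_n = g * x_n + b + noise; theta = [g, b] is updated per
    # frame with forgetting factor lam and regressor phi = [x_n, 1].
    theta = np.zeros(2)                   # [gain, bias] estimate
    P = delta * np.eye(2)                 # inverse correlation matrix
    for x, y in zip(x_ref, y_obs):
        phi = np.array([x, 1.0])
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 500)            # "true" irradiance sequence
y = 1.3 * x + 0.2 + 0.005 * rng.standard_normal(500)
gain, bias = rls_gain_bias(x, y)
# Correction then applies x_hat = (y - bias) / gain per pixel.
```

    In the scene-based setting the reference x is not known and must be approximated from the video itself, which is where the paper's contribution lies.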

  20. The application of color display techniques for the analysis of Nimbus infrared radiation data

    NASA Technical Reports Server (NTRS)

    Allison, L. J.; Cherrix, G. T.; Ausfresser, H.

    1972-01-01

    A color enhancement system designed for the Applications Technology Satellite (ATS) spin scan experiment has been adapted for the analysis of Nimbus infrared radiation measurements. For a given scene recorded on magnetic tape by the Nimbus scanning radiometers, a virtually unlimited number of color images can be produced at the ATS Operations Control Center from a color selector paper tape input. Linear image interpolation has produced radiation analyses in which each brightness-color interval has a smooth boundary without any mosaic effects. An annotated latitude-longitude gridding program makes it possible to precisely locate geophysical parameters, which permits accurate interpretation of pertinent meteorological, geological, hydrological, and oceanographic features.

  1. Investigation of Cloud Properties and Atmospheric Profiles with MODIS

    NASA Technical Reports Server (NTRS)

    Menzel, Paul; Ackerman, Steve; Moeller, Chris; Gumley, Liam; Strabala, Kathy; Frey, Richard; Prins, Elaine; LaPorte, Dan; Wolf, Walter

    1997-01-01

    The WINter Cloud Experiment (WINCE) was directed and supported by personnel from the University of Wisconsin in January and February. Data sets of good quality were collected by the MODIS Airborne Simulator (MAS) and other instruments on the NASA ER2; they will be used to develop and validate cloud detection and cloud property retrievals over winter scenes (especially over snow). Software development focused on utilities needed for all of the UW product executables; preparations for Version 2 software deliveries were almost completed. A significant effort was made, in cooperation with SBRS and MCST, in characterizing and understanding MODIS PFM thermal infrared performance; crosstalk in the longwave infrared channels continues to get considerable attention.

  2. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
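    The acceptance step underlying the jump dynamics can be sketched with a plain random-walk Metropolis-Hastings sampler on a single pose parameter. This omits the dimension-changing jumps and the diffusion moves; the toy posterior and step size are illustrative assumptions:

```python
import math, random

def metropolis_hastings(log_post, x0, step, n_iter, rng):
    # Random-walk proposals x' ~ N(x, step^2), accepted with probability
    # min(1, post(x')/post(x)). The paper generalizes this acceptance
    # rule to arbitrary Metropolis-Hastings proposal densities and adds
    # dimension-changing jumps, which this fixed-dimension sketch omits.
    x, samples = x0, []
    for _ in range(n_iter):
        x_prop = x + rng.gauss(0.0, step)
        if rng.random() < math.exp(min(0.0, log_post(x_prop) - log_post(x))):
            x = x_prop
        samples.append(x)
    return samples

# Toy posterior over a single orientation parameter: N(0.5, 0.1^2).
log_post = lambda x: -0.5 * ((x - 0.5) / 0.1) ** 2
chain = metropolis_hastings(log_post, 0.0, 0.05, 20000, random.Random(42))
posterior_mean = sum(chain[5000:]) / len(chain[5000:])
```

    In the target-recognition setting the state would be a variable-length list of (identity, position, orientation) tuples rather than a scalar, but the accept/reject core is the same.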

  3. Automated processing of thermal infrared images of Osservatorio Vesuviano permanent surveillance network by using Matlab code

    NASA Astrophysics Data System (ADS)

    Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa

    2017-04-01

    The permanent thermal infrared surveillance network of Osservatorio Vesuviano (INGV) is composed of 6 stations that acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of Osservatorio Vesuviano, where the infrared data are processed to extract the information they contain. In a first phase, the infrared data are processed by an automated system (A.S.I.R.A. Acq, Automated System of IR Analysis and Acquisition) developed in the Matlab environment with a user-friendly graphical user interface (GUI). ASIRA daily generates time series of residual temperature values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time series are displayed in the Surveillance Room of Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes, b) co-registration of IR images with respect to a reference frame, c) seasonal correction using a background-removal methodology, d) filing of the IR matrices and of the processed data in shared archives accessible for interrogation. The daily archived records can also be processed by ASIRA Plot (Matlab code with GUI) to visualize IR data time series and to help in evaluating input parameters for further data processing and analysis. Additional processing features are accomplished in a second phase by ASIRA Tools, Matlab code with a GUI developed to extract further information from the dataset in an automated way. 
The main functions of ASIRA Tools are: a) analysis of the temperature variations of each pixel of the IR frame in a given time interval, b) removal of seasonal effects from the temperature of every pixel in the IR frames by using an analytic approach (removal of the sinusoidal long-term seasonal component by using a polynomial-fit Matlab function - LTFC_SCOREF), c) export of data in different raster formats (i.e. Surfer grd). An interesting example of the elaborations produced by ASIRA Tools is the map of the temperature change rate, which provides remarkable information about the potential migration of fumarole activity. The high efficiency of Matlab in processing matrix data from IR scenes and the flexibility of this code-development tool proved very useful for producing applications for volcanic surveillance aimed at monitoring the evolution of the surface temperature field in diffuse degassing volcanic areas.
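    Function b), removal of a sinusoidal long-term seasonal component via a fit, can be sketched in Python (the original LTFC_SCOREF is Matlab code; the least-squares design below, mixing an annual sinusoid with a low-order polynomial trend, is an assumed stand-in, not the ASIRA implementation):

```python
import numpy as np

def remove_seasonal(t_days, temps, period=365.25, poly_deg=1):
    # Linear least-squares fit of a sinusoidal annual component plus a
    # low-order polynomial trend; the residual keeps any anomalies that
    # do not project onto this seasonal/trend basis.
    w = 2.0 * np.pi / period
    cols = [np.sin(w * t_days), np.cos(w * t_days)]
    cols += [t_days ** k for k in range(poly_deg + 1)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
    return temps - A @ coef

t = np.arange(0.0, 3 * 365.25)                  # three years, daily sampling
temps = 20.0 + 6.0 * np.sin(2 * np.pi * t / 365.25 + 0.4)
resid = remove_seasonal(t, temps)               # seasonal cycle removed
```

    Applied per pixel, the residual series is what remains for surveillance: a genuine change in fumarole temperature would survive the fit while the annual cycle is subtracted out.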

  4. Operation Desert Storm: Evaluation of the Air Campaign.

    DTIC Science & Technology

    1997-06-12

    Weight of Effort and TOE Platform Comparisons... Type of Effort Analysis... Target Sensor Radar... Target Sensor Electro-optical Technologies... DSMAC Digital Scene Matching Area Correlator; ELE electrical facilities; EO electro-optical; EW electronic warfare; FLIR forward-looking infrared; FOV... the exposure of aircraft to clouds, haze, smoke, and high humidity, thereby impeding IR and electro-optical (EO) sensors and laser designators for

  5. Evaluation and display of polarimetric image data using long-wave cooled microgrid focal plane arrays

    NASA Astrophysics Data System (ADS)

    Bowers, David L.; Boger, James K.; Wellems, L. David; Black, Wiley T.; Ortega, Steve E.; Ratliff, Bradley M.; Fetrow, Matthew P.; Hubbs, John E.; Tyo, J. Scott

    2006-05-01

    Recent developments for Long Wave InfraRed (LWIR) imaging polarimeters include incorporating a microgrid polarizer array onto the focal plane array (FPA). Inherent advantages over typical polarimeters include packaging and instantaneous acquisition of thermal and polarimetric information, which allows real-time video of thermal and polarimetric products. The microgrid approach has inherent polarization measurement error due to the spatial sampling of a non-uniform scene, residual pixel-to-pixel variations in the gain-corrected responsivity and in the noise equivalent input (NEI), and variations in pixel-to-pixel micro-polarizer performance. The Degree of Linear Polarization (DoLP) is highly sensitive to these parameters and is consequently used as a metric to explore instrument sensitivities. Image processing and fusion techniques are used to take advantage of the inherent thermal and polarimetric sensing capability of this FPA, providing additional scene information in real time. Optimal operating conditions are employed to improve FPA uniformity and sensitivity. Data from two DRS Infrared Technologies, L.P. (DRS) microgrid polarizer HgCdTe FPAs are presented. One FPA resides in a liquid-nitrogen (LN2) pour-filled dewar with a nominal operating temperature of 80 K. The other FPA resides in a cryogenic dewar with a nominal operating temperature of 60 K.
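    For reference, the DoLP figure of merit can be computed from the four analyzer orientations of an ideal 2×2 microgrid super-pixel via the usual Stokes estimates (this sketch assumes ideal, co-located pixels and a uniform scene, ignoring exactly the spatial-sampling and responsivity errors the paper characterizes):

```python
import math

def dolp(i0, i45, i90, i135):
    # Stokes estimates from the four analyzer orientations of a
    # 2x2 microgrid super-pixel (ideal pixels, uniform scene):
    #   S0 = (I0 + I45 + I90 + I135) / 2, S1 = I0 - I90, S2 = I45 - I135
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return math.hypot(s1, s2) / s0

# 30% linearly polarized radiance at 20 degrees, seen through ideal
# linear analyzers (Malus-law model).
s0_true, p, theta = 10.0, 0.3, math.radians(20.0)
intensities = [0.5 * s0_true * (1 + p * math.cos(2 * (theta - math.radians(a))))
               for a in (0.0, 45.0, 90.0, 135.0)]
measured_dolp = dolp(*intensities)   # recovers p for an ideal sensor
```

    Because DoLP is a ratio of small Stokes differences to the total intensity, the pixel-to-pixel gain and NEI variations discussed in the text propagate into it strongly, which is why it serves as the sensitivity metric.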

  6. Utilizing ERTS-1 imagery for tectonic analysis through study of the Bighorn Mountains region

    NASA Technical Reports Server (NTRS)

    Hoppin, R. A. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Comparisons of imagery from three seasons (late summer-fall, winter, and spring) indicate that for this region fall imagery is the best for overall geologic analysis. Winter scenes with light to moderate snow cover provide excellent topographic detail owing to snow enhancement, lower sun angle, and clarity of the atmosphere. Spring imagery shows considerably reduced tonal contrast owing to the heavy, low-reflectance green grass cover, which subdues lithologic effects; heavy snow cover in the uplands masks topography. Mapping of geologic formations is impractical in most cases. Separation into tonal units can provide some general clues on structure; however, a given tonal unit can include parts of several geologic formations, and different stratigraphic units can have the same tonal signature. Drainage patterns and anomalies provide the most consistent clues for detecting folds, monoclines, and homoclines. Vegetation only locally reflects lithology and structure. False-color infrared 9 x 9 transparencies are the most valuable single imagery. Where these can be supplemented by U-2 color infrared for more detailed work, a tremendous amount of information is available. Adequately field-checking such a large area, covered by just one scene, is the major logistical problem, even in a fairly well-known region.

  7. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation

    PubMed Central

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-01-01

    This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and the scene spectral features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the regions of an IR image by significance to identify the target region and the background region; then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. It also gives fusion results superior to those of existing popular fusion methods, under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. PMID:28505137
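    A heavily simplified stand-in for the low-frequency fusion rule can illustrate the region-driven idea (this is not the paper's DTCWT implementation; the threshold segmentation and averaging rule below are assumptions for the sketch):

```python
import numpy as np

def fuse_low_freq(ir_low, vis_low, target_mask):
    # Region-driven fusion of low-frequency subbands: keep the IR
    # coefficients where the significance segmentation marks a target,
    # and average IR/visible coefficients over the background.
    fused = 0.5 * (ir_low + vis_low)           # background: average
    fused[target_mask] = ir_low[target_mask]   # target: IR dominates
    return fused

ir = np.array([[10.0, 10.0], [1.0, 2.0]])      # hot target in top row
vis = np.array([[0.0, 0.0], [3.0, 4.0]])
mask = ir > 5.0                                # crude "significance" mask
out = fuse_low_freq(ir, vis, mask)
```

    In the actual method the mask comes from a significance-based segmentation of the IR image and the coefficients come from the DTCWT decomposition, with a separate weighted, phase-adaptive rule for the high-frequency subbands.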

  8. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.

    PubMed

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-05-15

    This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and the scene spectral features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the regions of an IR image by significance to identify the target region and the background region; then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. It also gives fusion results superior to those of existing popular fusion methods, under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems.

  9. Reconciling biases and uncertainties of AIRS and MODIS ice cloud properties

    NASA Astrophysics Data System (ADS)

    Kahn, B. H.; Gettelman, A.

    2015-12-01

    We will discuss comparisons of collocated Atmospheric Infrared Sounder (AIRS) and Moderate Resolution Imaging Spectroradiometer (MODIS) ice cloud optical thickness (COT), effective radius (CER), and cloud thermodynamic phase retrievals. The ice cloud comparisons are stratified by retrieval uncertainty estimates, horizontal inhomogeneity at the pixel scale, vertical cloud structure, and other key parameters. Although an estimated 27% of all AIRS pixels globally contain ice cloud, only 7% of them are spatially uniform ice according to MODIS. We find that the correlations of COT and CER between the two instruments are strong functions of horizontal cloud heterogeneity and vertical cloud structure. The best correlations are found in single-layer, horizontally homogeneous clouds over the low-latitude tropical oceans, with biases and scatter that increase with scene complexity. While the COT comparisons are unbiased in homogeneous ice clouds, a bias of 5-10 microns remains in CER within the most homogeneous scenes identified. This behavior is entirely consistent with known sensitivity differences between the visible and infrared bands. We will use AIRS and MODIS ice cloud properties to evaluate ice hydrometeor output from climate models, such as CAM5, with comparisons sorted into different dynamical regimes. The results of the regime-dependent comparisons will be described, and implications for model evaluation and future satellite observational needs will be discussed.

  10. Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors.

    PubMed

    Pezoa, Jorge E; Hayat, Majeed M; Torres, Sergio N; Rahman, Md Saifur

    2006-06-01

    We present an adaptive technique for the estimation of nonuniformity parameters of infrared focal-plane arrays that is robust with respect to changes and uncertainties in scene and sensor characteristics. The proposed algorithm is based on using a bank of Kalman filters in parallel. Each filter independently estimates state variables comprising the gain and the bias matrices of the sensor, according to its own dynamic-model parameters. The supervising component of the algorithm then generates the final estimates of the state variables by forming a weighted superposition of all the estimates rendered by each Kalman filter. The weights are computed and updated iteratively, according to the a posteriori likelihood principle. The performance of the estimator and its ability to compensate for fixed-pattern noise is tested using both simulated and real data obtained from two cameras operating in the mid- and long-wave infrared regimes.
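
    The supervising weighted superposition described above can be sketched for a single pixel's bias using a two-filter bank (a deliberately simplified scalar illustration of the multimodel idea; the state vector, noise parameters, and all numeric values below are assumptions, not the paper's settings):

```python
import math
import random

def kalman_step(x, P, y, Q, R):
    """One predict/update step of a scalar Kalman filter estimating a
    slowly varying bias from noisy measurements y = bias + v."""
    P = P + Q                      # predict (random-walk bias model)
    innov = y - x                  # innovation
    S = P + R                      # innovation variance
    K = P / S                      # Kalman gain
    x = x + K * innov
    P = (1.0 - K) * P
    # Gaussian likelihood of the innovation under this model
    lik = math.exp(-0.5 * innov * innov / S) / math.sqrt(2 * math.pi * S)
    return x, P, lik

random.seed(0)
true_bias = 2.0
# Two dynamic models with different process/measurement noise assumptions.
models = [{"x": 0.0, "P": 1.0, "Q": 1e-4, "R": 0.1},
          {"x": 0.0, "P": 1.0, "Q": 1e-2, "R": 1.0}]
weights = [0.5, 0.5]
for _ in range(200):
    y = true_bias + random.gauss(0.0, 0.3)
    liks = []
    for m in models:
        m["x"], m["P"], lik = kalman_step(m["x"], m["P"], y, m["Q"], m["R"])
        liks.append(lik)
    # Supervisor: update weights by a posteriori likelihood, then normalize.
    weights = [w * l for w, l in zip(weights, liks)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
fused = sum(w * m["x"] for w, m in zip(weights, models))
```

The fused estimate converges near the true bias while the weights drift toward the model whose noise assumptions best explain the innovations.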

  11. A fusion algorithm for infrared and visible based on guided filtering and phase congruency in NSST domain

    NASA Astrophysics Data System (ADS)

    Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng

    2017-10-01

    A novel and effective image fusion method is proposed for creating a highly informative and smooth fused image by merging visible and infrared images. First, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is applied to compute the filtering output of the base layer and saliency maps. Next, a novel weighted-average technique is used to make full use of scene consistency for fusion and to obtain the coefficient maps. Finally, the fused image is acquired by taking the inverse NSST of the fused coefficient maps. Experiments show that the proposed approach achieves better performance than other methods in terms of subjective visual effect and objective assessment.
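
    The two-scale decompose-and-fuse idea can be sketched in one dimension, with a moving average standing in for the NSST base-layer decomposition and a max-absolute rule standing in for the saliency/guided-filter weighting (both substitutions are simplifications for illustration, not the paper's method):

```python
def box_blur(signal, radius):
    """Simple moving-average low-pass; stands in for the base-layer
    decomposition (NSST in the paper) in this 1-D sketch."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def fuse(visible, infrared, radius=2):
    """Two-scale fusion: average the base layers, keep the stronger
    detail response at each position, then recombine."""
    base_v, base_i = box_blur(visible, radius), box_blur(infrared, radius)
    det_v = [v - b for v, b in zip(visible, base_v)]
    det_i = [v - b for v, b in zip(infrared, base_i)]
    base = [(a + b) / 2 for a, b in zip(base_v, base_i)]
    detail = [a if abs(a) >= abs(b) else b for a, b in zip(det_v, det_i)]
    return [b + d for b, d in zip(base, detail)]
```

A sanity check is that fusing a signal with itself reproduces the signal, since base plus detail reconstructs each input exactly.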

  12. ASTER's First Views of Red Sea, Ethiopia - Thermal-Infrared (TIR) Image (monochrome)

    NASA Technical Reports Server (NTRS)

    2000-01-01

    ASTER succeeded in acquiring this image at night, which is something Visible/Near Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher-temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne TIR multi-band sensor in history.

    Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m.

    The ASTER instrument was built in Japan for the Ministry of International Trade and Industry. A joint United States/Japan Science Team is responsible for instrument design, calibration, and data validation. ASTER is flying on the Terra satellite, which is managed by NASA's Goddard Space Flight Center, Greenbelt, MD.

  13. On orthogonal projectors induced by compact groups and Haar measures

    NASA Astrophysics Data System (ADS)

    Niezgoda, Marek

    2008-02-01

    We study the difference of two orthogonal projectors induced by compact groups of linear operators acting on a vector space. An upper bound for the difference is derived using the Haar measures of the groups. Particular attention is paid to finite groups. Some applications are given for complex matrices and unitarily invariant norms. Majorization inequalities of Fan and Hoffmann and of Causey are rediscovered.
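
    The central object, a projector obtained by averaging a compact group of operators against its Haar measure, reduces for a finite group to a uniform average of matrices. A minimal sketch for the cyclic group C3 acting on R^3 by coordinate permutation (the averaged matrix is the orthogonal projector onto the invariant subspace of constant vectors):

```python
def matmul(A, B):
    """3x3 matrix product for the idempotency check below."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The three permutation matrices of the cyclic group C3 acting on R^3.
group = [[[1.0 if j == (i + k) % 3 else 0.0 for j in range(3)]
          for i in range(3)] for k in range(3)]

# Uniform (Haar) average over the finite group: an orthogonal projector
# onto the subspace of invariant vectors (here, multiples of (1,1,1)).
P = [[sum(g[i][j] for g in group) / 3.0 for j in range(3)]
     for i in range(3)]
```

P has every entry 1/3, is symmetric, and satisfies P^2 = P, as an orthogonal projector must.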

  14. Films for Learning: Some Observations on the Present, Past, and Future Role of the Educational Motion Picture.

    ERIC Educational Resources Information Center

    Flory, John

    Although there have been great developments in motion picture technology, such as super 8mm film, magnetic sound, low cost color film, simpler projectors and movie cameras, and cartridge-loading projectors, there is still only limited use of audiovisual materials in the classroom today. This paper suggests some of the possible reasons for the lack…

  15. The Principle of the Fermionic Projector: An Approach for Quantum Gravity?

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    In this short article we introduce the mathematical framework of the principle of the fermionic projector and set up a variational principle in discrete space-time. The underlying physical principles are discussed. We outline the connection to the continuum theory and state recent results. In the last two sections, we speculate on how it might be possible to describe quantum gravity within this framework.

  16. Perturbative quantum field theory in the framework of the fermionic projector

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2014-04-01

    We give a microscopic derivation of perturbative quantum field theory, taking causal fermion systems and the framework of the fermionic projector as the starting point. The resulting quantum field theory agrees with standard quantum field theory on the tree level and reproduces all bosonic loop diagrams. The fermion loops are described in a different formalism in which no ultraviolet divergences occur.

  17. Development of a universal water signature for the LANDSAT-3 Multispectral Scanner, part 1

    NASA Technical Reports Server (NTRS)

    Schlosser, E. H.

    1980-01-01

    A generalized four-channel hyperplane to discriminate water from nonwater was developed using LANDSAT-3 multispectral scanner (MSS) scenes and matching same/next-day color infrared aerial photography. The MSS scenes varied in sun elevation angle from 40 to 58 deg. The 28 matching air photo frames contained over 1400 water bodies larger than one surface acre. A preliminary water discriminant was used to screen the data and eliminate from further consideration all pixels distant from water in MSS spectral space. A linear discriminant was iteratively fitted to the labelled pixels. This discriminant correctly classified 98.7% of the water pixels and 98.6% of the nonwater pixels. The discriminant detected 91.3% of the 414 water bodies over 10 acres in surface area, and misclassified as water 36 groups of contiguous nonwater pixels.
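
    Iteratively fitting a linear discriminant to labeled pixels can be sketched with a perceptron-style update on toy two-band data (the bands, radiance values, and update rule below are illustrative assumptions, not the report's actual four-channel procedure):

```python
def train_discriminant(samples, labels, passes=50, lr=0.1):
    """Iteratively fit a linear discriminant: classify as water when
    w.x + b >= 0. A perceptron-style stand-in for the report's
    iterative hyperplane fit on labeled MSS pixels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(passes):
        for x, t in zip(samples, labels):   # t is +1 (water) or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if pred != t:                   # update only on mistakes
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
    return w, b

# Toy two-band radiances: water is dark in the second (near-IR) band.
water = [[0.20, 0.05], [0.25, 0.08], [0.18, 0.04]]
nonwater = [[0.30, 0.40], [0.50, 0.55], [0.35, 0.45]]
X = water + nonwater
y = [1] * 3 + [-1] * 3
w, b = train_discriminant(X, y)
```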

  18. GPU-accelerated iterative reconstruction from Compton scattered data using a matched pair of conic projector and backprojector.

    PubMed

    Nguyen, Van-Giang; Lee, Soo-Jin

    2016-07-01

    Iterative reconstruction from Compton scattered data is known to be computationally more challenging than that from conventional line-projection based emission data in that the gamma rays that undergo Compton scattering are modeled as conic projections rather than line projections. In conventional tomographic reconstruction, to parallelize the projection and backprojection operations using the graphics processing unit (GPU), approximated methods that use an unmatched pair of ray-tracing forward projector and voxel-driven backprojector have been widely used. In this work, we propose a new GPU-accelerated method for Compton camera reconstruction which is more accurate by using an exactly matched pair of projector and backprojector. To calculate conic forward projection, we first sample the cone surface into conic rays and accumulate the intersecting chord lengths of the conic rays passing through voxels using a fast ray-tracing method (RTM). For conic backprojection, to obtain the true adjoint of the conic forward projection, while retaining the computational efficiency of the GPU, we use a voxel-driven RTM which is essentially the same as the standard RTM used for the conic forward projector. Our simulation results show that, while the new method is about 3 times slower than the approximated method, it is still about 16 times faster than the CPU-based method without any loss of accuracy. The net conclusion is that our proposed method is guaranteed to retain the reconstruction accuracy regardless of the number of iterations by providing a perfectly matched projector-backprojector pair, which makes iterative reconstruction methods for Compton imaging faster and more accurate.
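
    The defining property of a matched projector-backprojector pair is that the backprojector is the exact adjoint of the forward projector, i.e. <Ax, y> = <x, A^T y>. A minimal numerical check, with a small dense matrix standing in for the conic ray-tracing operator (the matrix and vectors are random placeholders, not a Compton system model):

```python
import random

def forward(A, x):
    """Forward projection y = A x (rows play the role of cone-surface
    measurements in this sketch)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def backproject(A, y):
    """Matched backprojection x = A^T y: it reuses the same system
    matrix, so the adjoint identity <A x, y> == <x, A^T y> holds."""
    n = len(A[0])
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]

random.seed(1)
A = [[random.random() for _ in range(4)] for _ in range(3)]
x = [random.random() for _ in range(4)]
y = [random.random() for _ in range(3)]
lhs = sum(a * b for a, b in zip(forward(A, x), y))
rhs = sum(a * b for a, b in zip(x, backproject(A, y)))
```

For a matched pair the two inner products agree to machine precision; an unmatched (e.g. voxel-driven approximate) backprojector would break this identity.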

  19. Projector primary-based optimization for superimposed projection mappings

    NASA Astrophysics Data System (ADS)

    Ahmed, Bilal; Lee, Jong Hun; Lee, Yong Yi; Lee, Kwan H.

    2018-01-01

    Recently, many researchers have focused on fully overlapping projections for three-dimensional (3-D) projection mapping systems, but reproducing a high-quality appearance with this technology still remains a challenge. On top of existing color compensation-based methods, much effort is still required to faithfully reproduce an appearance that is free from artifacts, colorimetric inconsistencies, and inappropriate illuminance over the 3-D projection surface. In our observation, this is because overlapping projections are commonly treated as an additive-linear mixture of color, which our measurements show is not the case. We propose a method that enables us to use high-quality appearance data measured from original objects and regenerate the same appearance by projecting optimized images using multiple projectors, ensuring that the projection-rendered results look visually close to the real object. We prepare our target appearances by photographing original objects. Then, using calibrated projector-camera pairs, we compensate for missing geometric correspondences to make our method robust against noise. The heart of our method is a target appearance-driven adaptive sampling of the projection surface followed by a representation of overlapping projections in terms of the projector-primary response. This yields projector-primary weights that facilitate blending, and constraints are applied to the system. These samples are used to populate a light transport-based system. The system is then solved, minimizing the error, to obtain the projection images in a noise-free manner by utilizing intersample overlaps. We ensure that we make the best use of available hardware resources to recreate projection-mapped appearances that look as close to the original object as possible. Our experiments show compelling results in terms of visual similarity and colorimetric error.
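
    The final step, solving a light-transport system for projection images while minimizing error, can be sketched as a constrained least-squares problem: find projector inputs p in [0, 1] such that T p approximates the target appearance. The 2x2 transport matrix and projected-gradient solver below are illustrative assumptions, not the paper's formulation:

```python
def solve_projection(T, target, steps=500, lr=0.5):
    """Least-squares solve of T p ~= target for projector inputs p,
    clipped to the valid drive range [0, 1] after each gradient step."""
    m, n = len(T), len(T[0])
    p = [0.5] * n
    for _ in range(steps):
        r = [sum(T[i][j] * p[j] for j in range(n)) - target[i]
             for i in range(m)]                          # residual T p - target
        g = [sum(T[i][j] * r[i] for i in range(m))
             for j in range(n)]                          # gradient T^T r
        p = [min(1.0, max(0.0, pj - lr * gj)) for pj, gj in zip(p, g)]
    return p

# Toy transport: two surface samples lit by two overlapping projectors.
T = [[0.8, 0.3],
     [0.2, 0.7]]
target = [0.6, 0.5]
p = solve_projection(T, target)
```

For this toy system the exact solution (p = [0.54, 0.56]) lies inside the feasible range, so the projected gradient converges to it.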

  20. In Vivo Near Infrared Virtual Intraoperative Surgical Photoacoustic Optical Coherence Tomography

    PubMed Central

    Lee, Donghyun; Lee, Changho; Kim, Sehui; Zhou, Qifa; Kim, Jeehyun; Kim, Chulhong

    2016-01-01

    Since its first implementation in otolaryngological surgery nearly a century ago, the surgical microscope has improved the accuracy and the safety of microsurgeries. However, the microscope shows only a magnified surface view of the surgical region. To overcome this limitation, either optical coherence tomography (OCT) or photoacoustic microscopy (PAM) has been independently combined with conventional surgical microscope. Herein, we present a near-infrared virtual intraoperative photoacoustic optical coherence tomography (NIR-VISPAOCT) system that combines both PAM and OCT with a conventional surgical microscope. Using optical scattering and absorption, the NIR-VISPAOCT system simultaneously provides surgeons with real-time comprehensive biological information such as tumor margins, tissue structure, and a magnified view of the region of interest. Moreover, by utilizing a miniaturized beam projector, it can back-project 2D cross-sectional PAM and OCT images onto the microscopic view plane. In this way, both microscopic and cross-sectional PAM and OCT images are concurrently displayed on the ocular lens of the microscope. To verify the usability of the NIR-VISPAOCT system, we demonstrate simulated surgeries, including in vivo image-guided melanoma resection surgery and in vivo needle injection of carbon particles into a mouse thigh. The proposed NIR-VISPAOCT system has potential applications in neurosurgery, ophthalmological surgery, and other microsurgeries. PMID:27731390

  1. Augmented reality and dynamic infrared thermography for perforator mapping in the anterolateral thigh

    PubMed Central

    Cifuentes, Ignacio Javier; Dagnino, Bruno Leonardo; Salisbury, María Carolina; Perez, María Eliana; Ortega, Claudia; Maldonado, Daniela

    2018-01-01

    Dynamic infrared thermography (DIRT) has been used for the preoperative mapping of cutaneous perforators. This technique has shown a positive correlation with intraoperative findings. Our aim was to evaluate the accuracy of perforator mapping with DIRT and augmented reality using a portable projector. For this purpose, three volunteers had both of their anterolateral thighs assessed for the presence and location of cutaneous perforators using DIRT. The obtained image of these “hotspots” was projected back onto the thigh and the presence of Doppler signals within a 10-cm diameter from the midpoint between the lateral patella and the anterior superior iliac spine was assessed using a handheld Doppler device. Hotspots were identified in all six anterolateral thighs and were successfully projected onto the skin. The median number of perforators identified within the area of interest was 5 (range, 3–8) and the median time needed to identify them was 3.5 minutes (range, 3.3–4.0 minutes). Every hotspot was correlated to a Doppler sound signal. In conclusion, augmented reality can be a reliable method for transferring the location of perforators identified by DIRT onto the thigh, facilitating its assessment and yielding a reliable map of potential perforators for flap raising. PMID:29788686

  2. Non-destructive 3D shape measurement of transparent and black objects with thermal fringes

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Rößler, Conrad; Dietrich, Patrick; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2016-05-01

    Fringe projection is a well-established optical method for the non-destructive, contactless three-dimensional (3D) measurement of object surfaces. Typically, fringe sequences in the visible wavelength range (VIS) are projected onto the surfaces of the objects to be measured and are observed by two cameras in a stereo vision setup. The reconstruction is done by finding corresponding pixels in both cameras followed by triangulation. Problems can occur if the properties of some materials disturb the measurements. If the objects are transparent, translucent, reflective, or strongly absorbing in the VIS range, the projected patterns cannot be recorded properly. To overcome these challenges, we present a new alternative approach in the infrared (IR) region of the electromagnetic spectrum. For this purpose, two long-wavelength infrared (LWIR) cameras (7.5 - 13 μm) are used to detect the emitted heat radiation from surfaces, which is induced by a pattern projection unit driven by a CO2 laser (10.6 μm). Thus, materials like glass or black objects, e.g. carbon fiber materials, can be measured non-destructively without the need for any additional paintings. We will demonstrate the basic principles of this heat pattern approach and show two types of 3D systems based on a freeform mirror and a GOBO wheel (GOes Before Optics) projector unit.
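
    The abstract does not name the phase-retrieval algorithm, but a standard choice in fringe projection is N-step phase shifting; a four-step sketch for one pixel, assuming images shifted by pi/2:

```python
import math

def phase_from_four_steps(i0, i1, i2, i3):
    """Recover the wrapped fringe phase from four intensities shifted
    by pi/2: I_k = A + B*cos(phi + k*pi/2)."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthetic intensities for a pixel with phase 0.7 rad, offset A, amplitude B.
A, B, phi = 2.0, 1.0, 0.7
I = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = phase_from_four_steps(*I)
```

Since I3 - I1 = 2B sin(phi) and I0 - I2 = 2B cos(phi), the arctangent cancels both the offset A and the amplitude B.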

  3. Three-dimensional shape measurement and calibration for fringe projection by considering unequal height of the projector and the camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu Feipeng; Shi Hongjian; Bai Pengxiang

    In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal-height arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. By formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method based on linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the effectiveness and reliability of the calibration method.
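
    The calibration described above amounts to fitting a line, coefficient = a*y + b, by linear least squares; a closed-form sketch with hypothetical sample values (the row coordinates and coefficients below are invented for illustration):

```python
def linear_fit(ys, coeffs):
    """Closed-form least-squares line c = a*y + b relating the
    out-of-plane calibration coefficient to the y coordinate."""
    n = len(ys)
    my = sum(ys) / n
    mc = sum(coeffs) / n
    a = (sum((y - my) * (c - mc) for y, c in zip(ys, coeffs))
         / sum((y - my) ** 2 for y in ys))
    return a, mc - a * my

# Hypothetical calibration samples: coefficient measured at five rows.
ys = [100, 200, 300, 400, 500]
coeffs = [0.510, 0.520, 0.530, 0.540, 0.550]
a, b = linear_fit(ys, coeffs)
```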

  4. Projection Mapping User Interface for Disabled People

    PubMed Central

    Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis

    2018-01-01

    Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities. PMID:29686827

  5. Design of LED projector based on gradient-index lens

    NASA Astrophysics Data System (ADS)

    Qian, Liyong; Zhu, Xiangbing; Cui, Haitian; Wang, Yuanhang

    2018-01-01

    In this study, a new type of projector light path is designed to eliminate the deficits of existing projection systems, such as complex structure and low collection efficiency. Using a three-color LED array as the lighting source, by means of the special optical properties of a gradient-index lens, the complex structure of the traditional projector is simplified. Traditional components, such as the color wheel, relay lens, and mirror, become unnecessary. In this way, traditional problems, such as low utilization of light energy and loss of light energy, are solved. With the help of Zemax software, the projection lens is optimized. The optimized projection lens, LED, gradient-index lens, and digital micromirror device are imported into Tracepro. The ray tracing results show that both the utilization of light energy and the uniformity are improved significantly.

  6. Projection Mapping User Interface for Disabled People.

    PubMed

    Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis

    2018-01-01

    Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities.

  7. Seamless presentation capture, indexing, and management

    NASA Astrophysics Data System (ADS)

    Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel

    2005-10-01

    Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.

  8. A novel wide-field-of-view display method with higher central resolution for hyper-realistic head dome projector

    NASA Astrophysics Data System (ADS)

    Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko

    2007-02-01

    In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.

  9. [A new age of mass casualty education? : The InSitu project: realistic training in virtual reality environments].

    PubMed

    Lorenz, D; Armbruster, W; Vogelgesang, C; Hoffmann, H; Pattar, A; Schmidt, D; Volk, T; Kubulus, D

    2016-09-01

    Chief emergency physicians are regarded as an important element in the care of the injured and sick following mass casualty accidents. Their education is very theoretical; practical content, in contrast, often falls short. The usual limitations are the very high cost of realistic (large-scale) exercises, poor reproducibility of the scenarios, and correspondingly poor results. Given the complexity of mass casualty accidents, substantially improving the educational level requires modified training concepts that teach not only the theoretical but above all the practical skills considerably more intensively than at present. Modern training concepts should make it possible for the learner to realistically simulate decision processes. This article examines how interactive virtual environments are applicable for the education of emergency personnel and how they could be designed. Virtual simulation and training environments offer the possibility of simulating complex situations in an adequately realistic manner. The so-called virtual reality (VR) used in this context is an interface technology that enables free interaction in addition to a stereoscopic and spatial representation of virtual large-scale emergencies in a virtual environment. Scenario variables, such as the weather, the number of wounded, and the availability of resources, can be changed at any time. The trainees are able to practice the procedures in many virtual accident scenes and act them out repeatedly, thereby testing the different variants. With the aid of the "InSitu" project, it is possible to train in a virtual reality with realistically reproduced accident situations. These integrated, interactive training environments can depict very complex situations on a scale of 1:1. Because of the highly developed interactivity, the trainees can feel as if they are a direct part of the accident scene and therefore identify much more with the virtual world than is possible with desktop systems.
    Interactive, identifiable, and realistic training environments based on projector systems could in future enable repeated exercises with variations within a decision tree, with reproducibility, and across different occupational groups. With a single hardware and software environment, numerous accident situations can be depicted and practiced. The main expense is the creation of the virtual accident scenes. As the appropriate city models and other three-dimensional geographical data are already available, this expenditure is very low compared with the planning costs of a large-scale exercise.

  10. Development of electronic cinema projectors

    NASA Astrophysics Data System (ADS)

    Glenn, William E.

    2001-03-01

    All of the components for the electronic cinema are now commercially available. Sony has a high-definition, progressively scanned, 24-frame-per-second electronic cinema camera. This can be recorded digitally on tape or on hard drives in RAID recorders. Much of the post-production processing is now done digitally by scanning film, processing it digitally, and recording it on film for release. Fiber links and satellites can transmit cinema program material to theaters in real time. RAID or tape recorders can play programs for viewing at a much lower cost than storage on film. Two companies now have electronic cinema projectors on the market. Of all of the components, the electronic cinema projector is the most challenging. Achieving the resolution, light output, contrast ratio, and color rendition all at the same time without visible artifacts is a difficult task. Film itself is, of course, a form of light-valve. However, electronically modulated light uses other techniques rather than changes in density to control the light. The optical techniques that have been the basis for many electronic light-valves have been under development for over 100 years. Many of these techniques are based on optical diffraction to modulate the light. This paper will trace the history of these techniques and show how they may be extended to produce electronic cinema projectors in the future.

  11. Projector-Based Augmented Reality for Quality Inspection of Scanned Objects

    NASA Astrophysics Data System (ADS)

    Kern, J.; Weinmann, M.; Wursthorn, S.

    2017-09-01

    After scanning or reconstructing the geometry of objects, we need to inspect the result of our work. Are there any parts missing? Is every detail covered in the desired quality? We typically do this by looking at the resulting point clouds or meshes of our objects on-screen. What if we could see the information directly visualized on the object itself? Augmented reality is the generic term for bringing virtual information into our real environment. In our paper, we show how we can project any 3D information, like thematic visualizations or specific monitoring information, with reference to our object onto the object's surface itself, thus augmenting it with additional information. For small objects that could, for instance, be scanned in a laboratory, we propose a low-cost method involving a projector-camera system to solve this task. The user only needs a calibration board with coded fiducial markers to calibrate the system and to estimate the projector's pose later on for projecting textures with information onto the object's surface. Changes within the projected 3D information or of the projector's pose will be applied in real-time. Our results clearly reveal that such a simple setup will deliver a good quality of the augmented information.

  12. Projection display industry market and technology trends

    NASA Astrophysics Data System (ADS)

    Castellano, Joseph A.; Mentley, David E.

    1995-04-01

    The projection display industry is diverse, embracing a variety of technologies and applications. In recent years, there has been a high level of interest in projection displays, particularly those using LCD panels or light valves, because of the difficulty in making large-screen, direct-view displays. Many developers feel that projection displays will be the wave of the future for large-screen HDTV (high-definition television), penetrating the huge existing market for direct-view CRT-based televisions. Projection displays can have the images projected onto a screen either from the rear or the front; the main characteristic is their ability to be viewed by more than one person. In addition to large-screen home television receivers, there are numerous other uses for projection displays including conference room presentations, video conferences, closed circuit programming, computer-aided design, and military command/control. For any given application, the user can usually choose from several alternative technologies. These include CRT front or rear projectors, LCD front or rear projectors, LCD overhead projector plate monitors, various liquid or solid-state light valve projectors, or laser-addressed systems. The overall worldwide market for projection information displays of all types and for all applications, including home television, will top $4.6 billion in 1995 and $6.45 billion in 2001.

  13. A method for the real-time construction of a full parallax light field

    NASA Astrophysics Data System (ADS)

    Tanaka, Kenji; Aoki, Soko

    2006-02-01

    We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.

  14. Improved detection probability of low level light and infrared image fusion system

    NASA Astrophysics Data System (ADS)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    Low-level-light (LLL) images contain rich detail about the environment but are easily degraded by weather: in smoke, rain, cloud, or fog, much target information is lost. Infrared images, formed from the radiation emitted by objects themselves, can "actively" capture target information in the scene; however, their contrast and resolution are poor, fine target details are difficult to acquire, and the imaging mode does not match human visual habits. Fusing LLL and infrared images compensates for the deficiencies of each sensor while retaining the advantages of both. We first present the hardware design of the fusion circuit. Then, by computing recognition probabilities for a target (one person) and background (trees), we find that the detection probability for trees is higher in the LLL image than in the infrared image, while the detection probability for the person is markedly higher in the infrared image than in the LLL image. The fused image yields higher detection probabilities for both the person and the trees than either single detector. Image fusion therefore significantly increases recognition probability and improves detection efficiency.
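
    The complementary-sensor idea above can be illustrated with a toy weighted-average fusion. This is a hypothetical stand-in for the paper's fusion circuit: the weight `w` and the min-max normalization are assumptions, not the authors' design.

```python
import numpy as np

def fuse(lll, ir, w=0.5):
    """Weighted-average fusion of min-max normalized LLL and IR frames.
    Illustrative sketch only; the real system fuses in hardware."""
    def norm(a):
        a = a.astype(float)
        return (a - a.min()) / (a.max() - a.min() + 1e-9)
    return w * norm(lll) + (1.0 - w) * norm(ir)

# "Trees" appear only in the LLL frame, the "person" only in the IR
# frame; both should survive in the fused result.
lll = np.zeros((8, 8)); lll[:, :2] = 50.0     # trees (LLL only)
ir = np.zeros((8, 8)); ir[4, 4] = 200.0       # person (IR only)
fused = fuse(lll, ir)
```

    With equal weights, features visible in only one sensor retain half their normalized contrast in the fused image, which is the qualitative behavior the abstract reports.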

  15. Infrared fix pattern noise reduction method based on Shearlet Transform

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin

    2018-06-01

    Non-uniformity correction (NUC) is an effective way to reduce fixed-pattern noise (FPN) and improve infrared image quality. Temporal high-pass NUC is a practical class of NUC method because of its simple implementation; however, traditional temporal high-pass methods depend heavily on scene motion and suffer from ghosting and blurring. This paper therefore proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale, multi-orientation subbands by the ST; the FPN component resides mainly in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN, exploiting its temporally low-frequency character. In addition, each subband carries a confidence parameter indicating the degree of FPN, estimated adaptively from the subband variance. Finally, NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is evaluated thoroughly with real and synthetic infrared image sequences. Experimental results indicate that the proposed method substantially reduces FPN, yielding lower roughness and RMSE.
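
    The temporal high-pass baseline that the paper improves on can be sketched as follows. The exponential-moving-average formulation and the learning rate `alpha` are illustrative assumptions; the Shearlet-domain processing itself is not reproduced here.

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05):
    """Temporal high-pass non-uniformity correction: a per-pixel
    exponential moving average tracks the static (temporally
    low-frequency) FPN, and subtracting it keeps the moving scene."""
    fpn_est = np.array(frames[0], dtype=float)   # crude initial FPN guess
    corrected = []
    for frame in frames:
        frame = np.asarray(frame, dtype=float)
        fpn_est = (1.0 - alpha) * fpn_est + alpha * frame  # slow component
        corrected.append(frame - fpn_est)                  # fast residual
    return np.stack(corrected)

# Static column stripes (FPN) on top of a changing scene: after enough
# frames the stripes are strongly attenuated in the corrected output.
rng = np.random.default_rng(0)
fpn = np.tile(np.arange(8.0), (8, 1))            # column-wise fixed pattern
raw = [rng.normal(0.0, 1.0, (8, 8)) + fpn for _ in range(200)]
out = temporal_highpass_nuc(raw)
```

    The ghosting the abstract mentions shows up in exactly this scheme: when the scene stops moving, static scene content leaks into `fpn_est` and is wrongly subtracted, which motivates moving the filtering into transform subbands.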

  16. Mars Rover Opportunity Panorama of Wharton Ridge Enhanced Color

    NASA Image and Video Library

    2016-10-07

    This scene from NASA's Mars Exploration Rover Opportunity shows "Wharton Ridge," which forms part of the southern wall of "Marathon Valley" on the western rim of Endeavour Crater. In this version of the scene the landscape is presented in enhanced color to make differences in surface materials more easily visible. The full extent of Wharton Ridge is visible, with the floor of Endeavour Crater beyond it and the far wall of the crater in the distant background. Near the right edge of the scene is "Lewis and Clark Gap," through which Opportunity crossed from Marathon Valley to "Bitterroot Valley" in September 2016. Before the rover departed Marathon Valley, its panoramic camera (Pancam) acquired the component images for this scene on Aug. 30, 2016, during the 4,480th Martian day, or sol, of Opportunity's work on Mars. Opportunity's science team chose the ridge's name to honor the memory of Robert A. Wharton (1951-2012), an astrobiologist who was a pioneer in the use of terrestrial analog environments, particularly in Antarctica, to study scientific problems connected to the habitability of Mars. Over the course of his career, he was a visiting senior scientist at NASA Headquarters, vice president for research at the Desert Research Institute, provost at Idaho State University, and president of the South Dakota School of Mines and Technology. The view spans from east-northeast at left to southeast at right. Color in the scene comes from component images taken through three of the Pancam's color filters, centered on wavelengths of 753 nanometers (near-infrared), 535 nanometers (green) and 432 nanometers (violet). http://photojournal.jpl.nasa.gov/catalog/PIA20850

  17. Synthetic Infrared Scene: Improving the KARMA IRSG Module and Signature Modelling Tool SMAT

    DTIC Science & Technology

    2011-03-01

    ...of engagements involving infrared seekers in the KARMA simulation environment. The work was carried out from November 2008 until March 2011. This contract report focuses on...

  18. Simulated Tank Anti-Armor Gunnery System (STAGS-TOW).

    DTIC Science & Technology

    1983-05-01

    to train TOW gunners. It is derived from a model previously developed for DRAGON. The system employs a terrain board with model enemy armored vehicles ...gunnery training. TOW is a crew-portable, heavy anti-tank weapon designed to attack and defeat armored vehicles and field fortifications. The missile is...a target area, converts the infrared energy to electrical signals and then to visible light and displays the visible light as a real-time scene for

  19. Thermal Imaging for Robotic Applications in Outdoor Scenes

    DTIC Science & Technology

    1990-04-01

    radiation at a particular wavelength λ. Thus, the total and monochromatic emissive powers are related by E = ∫₀^∞ E_λ dλ (2.1). Radiosity: The emissive power...energy is called radiosity. Since there is almost no reflected energy in the infrared wavelength bands used by thermal cameras, the radiosity is the...respectively the monochromatic and total irradiation. In the following chapters, we will use the notions of emissive power (or radiosity) E, irradiation G

  20. Background characterization techniques for target detection using scene metrics and pattern recognition

    NASA Astrophysics Data System (ADS)

    Noah, Paul V.; Noah, Meg A.; Schroeder, John W.; Chernick, Julian A.

    1990-09-01

    The U.S. Army has a requirement to develop systems for the detection and identification of ground targets in a clutter environment. Autonomous Homing Munitions (AHM) using infrared, visible, millimeter wave and other sensors are being investigated for this application. Advanced signal processing and computational approaches using pattern recognition and artificial intelligence techniques, combined with multisensor data fusion, have the potential to meet the Army's requirements for next-generation AHM.

  1. Mapping the spectral variability in photosynthetic and non-photosynthetic vegetation, soils, and shade using AVIRIS

    NASA Technical Reports Server (NTRS)

    Roberts, Dar A.; Smith, Milton O.; Sabol, Donald E.; Adams, John B.; Ustin, Susan L.

    1992-01-01

    The primary objective of this research was to map as many spectrally distinct types of green vegetation (GV), non-photosynthetic vegetation (NPV), shade, and soil (endmembers) in an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene as is warranted by the spectral variability of the data. Once determined, a secondary objective was to interpret these endmembers and their abundances spatially and spectrally in an ecological context.

  2. Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus

    NASA Astrophysics Data System (ADS)

    Deng, Zishan; Gao, Yuan; Li, Ting

    2018-02-01

    As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving call for new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver, recording his cerebral hemodynamic responses during long hours of driving with our fNIRS-based device. Each experiment lasted 7 hours, and one of three specific tests, measuring the driver's response to sounds, traffic lights, and direction signs respectively, was performed every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes induced fatigue more readily than those from direction signs. We also found that fatigue-related hemodynamics increased fastest for auditory stimuli, next for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual-color, and visual-character stimuli in their sensitivity for inducing driving fatigue, which is meaningful for driving safety management.

  3. ASTER Mexicali

    NASA Image and Video Library

    2000-10-06

    Dramatic differences in land use patterns are highlighted in this image of the U.S.-Mexico border. Lush, regularly gridded agricultural fields on the U.S. side contrast with the more barren fields of Mexico. This June 12, 2000, sub-scene combines visible and near infrared bands, displaying vegetation in red. The town of Mexicali-Calexico spans the border in the middle of the image; El Centro, California, is in the upper left. Watered by canals fed from the Colorado River, California's Imperial Valley is one of the country's major fruit and vegetable producers. This image covers an area 24 kilometers (15 miles) wide and 30 kilometers (19 miles) long in three bands of the reflected visible and infrared wavelength region. http://photojournal.jpl.nasa.gov/catalog/PIA02659

  4. Teledyne H1RG, H2RG, and H4RG Noise Generator

    NASA Technical Reports Server (NTRS)

    Rauscher, Bernard J.

    2015-01-01

    This paper describes the near-infrared detector system noise generator (NG) that we wrote for the James Webb Space Telescope (JWST) Near Infrared Spectrograph (NIRSpec). NG simulates many important noise components, including: (1) white "read noise", (2) residual bias drifts, (3) pink 1/f noise, (4) alternating column noise, and (5) picture frame noise. By adjusting the input parameters, NG can simulate noise for Teledyne's H1RG, H2RG, and H4RG detectors with and without Teledyne's SIDECAR ASIC IR array controller. NG can be used as a starting point for simulating astronomical scenes by adding dark current, scattered light, and astronomical sources into the results from NG. NG is written in Python 3.4.
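
    As a rough illustration of the kinds of components NG models, a toy frame-cube simulator with white read noise, frame-to-frame bias drift, and 1/f ("pink") row noise might look like the sketch below. All parameter values and function names are assumptions for illustration; this is not the NG package or its API.

```python
import numpy as np

def make_pink(n, rng):
    """1/f ('pink') noise by shaping white noise in the Fourier domain:
    amplitude ~ f^-1/2 gives power spectral density ~ 1/f."""
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid division by zero at DC
    spectrum = (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
    spectrum /= np.sqrt(f)
    return np.fft.irfft(spectrum, n)

def simulate_cube(n_frames=10, ny=32, nx=32, read_noise=8.0,
                  bias_drift=2.0, seed=1):
    """Toy detector frame cube: per-pixel white read noise, a random
    frame-to-frame bias offset, and row-correlated pink noise."""
    rng = np.random.default_rng(seed)
    cube = np.empty((n_frames, ny, nx))
    for k in range(n_frames):
        frame = rng.normal(0.0, read_noise, (ny, nx))   # white read noise
        frame += rng.normal(0.0, bias_drift)            # residual bias drift
        frame += make_pink(ny, rng)[:, None]            # correlated row noise
        cube[k] = frame
    return cube

cube = simulate_cube()
```

    As the abstract suggests for NG itself, such a cube becomes a scene simulation starting point once dark current and astronomical sources are added on top.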

  5. Preliminary Geologic/spectral Analysis of LANDSAT-4 Thematic Mapper Data, Wind River/bighorn Basin Area, Wyoming

    NASA Technical Reports Server (NTRS)

    Lang, H. R.; Conel, J. E.; Paylor, E. D.

    1984-01-01

    A LIDQA evaluation for geologic applications of a LANDSAT TM scene covering the Wind River/Bighorn Basin area, Wyoming, is examined. This involves a quantitative assessment of data quality including spatial and spectral characteristics. Analysis is concentrated on the 6 visible, near infrared, and short wavelength infrared bands. Preliminary analysis demonstrates that: (1) principal component images derived from the correlation matrix provide the most useful geologic information. To extract surface spectral reflectance, the TM radiance data must be calibrated. Scatterplots demonstrate that TM data can be calibrated and sensor response is essentially linear. Low instrumental offset and gain settings result in spectral data that do not utilize the full dynamic range of the TM system.

  6. Image reconstruction of dynamic infrared single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin

    2018-03-01

    Single-pixel imaging has recently received much attention. Most current single-pixel imaging addresses relatively static targets or a fixed imaging system, limited by the number of measurements acquired through the single detector. In this paper, we propose a novel dynamic compressive imaging method for the infrared (IR) rosette scanning system that solves the imaging problem when the imaging system itself is in motion. The relationship between adjacent target images and the scene is analyzed under different system movement scenarios, and these relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of IR images and enhance the contrast between the target and the background in the presence of system movement.

  7. Geometric and radiometric preprocessing of airborne visible/infrared imaging spectrometer (AVIRIS) data in rugged terrain for quantitative data analysis

    NASA Technical Reports Server (NTRS)

    Meyer, Peter; Green, Robert O.; Staenz, Karl; Itten, Klaus I.

    1994-01-01

    A geocoding procedure for remotely sensed data of airborne systems in rugged terrain is affected by several factors: buffeting of the aircraft by turbulence, variations in ground speed, changes in altitude, attitude variations, and surface topography. The current investigation was carried out with an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene of central Switzerland (Rigi) from NASA's Multi Aircraft Campaign (MAC) in Europe (1991). The parametric approach reconstructs for every pixel the observation geometry based on the flight line, aircraft attitude, and surface topography. To utilize the data for analysis of materials on the surface, the AVIRIS data are corrected to apparent reflectance using algorithms based on MODTRAN (MODerate resolution atmospheric TRANsmission).

  8. Bispectral infrared forest fire detection and analysis using classification techniques

    NASA Astrophysics Data System (ADS)

    Aranda, Jose M.; Melendez, Juan; de Castro, Antonio J.; Lopez, Fernando

    2004-01-01

    Infrared cameras are well established as a useful tool for fire detection, but their use for quantitative forest fire measurements faces difficulties due to the complex spatial and spectral structure of fires. In this work it is shown that some of these difficulties can be overcome by applying classification techniques, a standard tool for the analysis of satellite multispectral images, to bispectral images of fires. Images were acquired by two cameras that operate in the medium infrared (MIR) and thermal infrared (TIR) bands. They provide simultaneous and co-registered images, calibrated in brightness temperatures. The MIR-TIR scatterplot of these images can be used to classify the scene into different fire regions (background, ashes, and several ember and flame regions). It is shown that classification makes it possible to obtain quantitative measurements of physical fire parameters such as rate of spread, ember temperature, and radiated power in the MIR and TIR bands. An estimate of total radiated power and heat release per unit area is also made and compared with values derived from heat of combustion and fuel consumption.
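
    The scatterplot classification described above can be sketched as a minimum-distance classifier in the two-band (MIR, TIR) brightness-temperature space. The class centroids below are made-up illustrative values, not the paper's trained classes.

```python
import numpy as np

def classify(mir, tir, centroids):
    """Assign each pixel's (MIR, TIR) brightness-temperature pair to
    the nearest class centroid in the 2-D scatter space."""
    pix = np.stack([mir.ravel(), tir.ravel()], axis=1)     # (N, 2)
    names = list(centroids)
    pts = np.array([centroids[n] for n in names])          # (K, 2)
    d = np.linalg.norm(pix[:, None, :] - pts[None, :, :], axis=2)
    return np.array(names)[d.argmin(axis=1)].reshape(mir.shape)

# Hypothetical centroids in kelvin: flames are bright in MIR but cooler
# in TIR; embers are hot in both bands.
centroids = {"background": (300.0, 295.0),
             "embers": (700.0, 600.0),
             "flame": (900.0, 400.0)}
mir = np.array([[305.0, 710.0], [890.0, 300.0]])
tir = np.array([[297.0, 590.0], [410.0, 296.0]])
labels = classify(mir, tir, centroids)
```

    Once pixels are labeled, per-class statistics (ember temperature, MIR/TIR radiated power, rate of spread of the flame front) follow from the pixels in each region, as the abstract outlines.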

  9. Sea-based Infrared Radiance Measurements of Ocean and Atmosphere from the ACAPEX/CalWater2 Campaign

    NASA Astrophysics Data System (ADS)

    Gero, P. J.; Knuteson, R.; Hackel, D.; Phillips, C.; Westphall, M.

    2015-12-01

    The ARM Cloud Aerosol Precipitation Experiment (ACAPEX) / CalWater2 was a joint DOE/NOAA field campaign in early 2015 to study atmospheric rivers in the Pacific Ocean and their impacts on the western United States. The campaign goals were to improve understanding and modeling of large-scale dynamics and cloud and precipitation processes associated with atmospheric rivers and aerosol-cloud interactions that influence precipitation variability and extremes in the western United States. Coordinated measurements were made from ground-, aircraft- and sea-based platforms. The second ARM mobile facility (AMF-2) was deployed on board the NOAA Ship Ronald H. Brown for this campaign, which included a new Marine Atmospheric Emitted Radiance Interferometer (M-AERI) to measure the atmospheric downwelling and reflected infrared radiance spectrum at the Earth's surface with high absolute accuracy. The M-AERI measures spectral infrared radiance between 520-3020 cm-1 (3.3-19 μm) at a resolution of 0.5 cm-1. The M-AERI can selectively view the atmospheric scene at zenith, and ocean/atmospheric scenes over a range of ±45° from the horizon. The AERI uses two high-emissivity blackbodies for radiometric calibration, which in conjunction with the instrument design and a suite of rigorous laboratory diagnostics, ensures the radiometric accuracy to be better than 1% (3σ) of the ambient radiance. The M-AERI radiance spectra can be used to retrieve profiles of temperature and water vapor in the troposphere, as well as measurements of trace gases, cloud properties, surface emissivity and ocean skin temperature. We present preliminary results on measurements of ocean skin temperature, ocean emissivity properties as a function of view angle and wind speed, as well as comparisons with radiosondes and satellite observations.

  10. Topological susceptibility from twisted mass fermions using spectral projectors and the gradient flow

    NASA Astrophysics Data System (ADS)

    Alexandrou, Constantia; Athenodorou, Andreas; Cichy, Krzysztof; Constantinou, Martha; Horkel, Derek P.; Jansen, Karl; Koutsou, Giannis; Larkin, Conor

    2018-04-01

    We compare lattice QCD determinations of topological susceptibility using a gluonic definition from the gradient flow and a fermionic definition from the spectral-projector method. We use ensembles with dynamical light, strange and charm flavors of maximally twisted mass fermions. For both definitions of the susceptibility we employ ensembles at three values of the lattice spacing and several quark masses at each spacing. The data are fitted to chiral perturbation theory predictions with a discretization term to determine the continuum chiral condensate in the massless limit and estimate the overall discretization errors. We find that both approaches lead to compatible results in the continuum limit, but the gluonic ones are much more affected by cutoff effects. This finally yields a much smaller total error in the spectral-projector results. We show that there exists, in principle, a value of the spectral cutoff which would completely eliminate discretization effects in the topological susceptibility.

  11. Stockholder projector analysis: A Hilbert-space partitioning of the molecular one-electron density matrix with orthogonal projectors

    NASA Astrophysics Data System (ADS)

    Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel

    2012-01-01

    A previously introduced partitioning of the molecular one-electron density matrix over atoms and bonds [D. Vanfleteren et al., J. Chem. Phys. 133, 231103 (2010)] is investigated in detail. Orthogonal projection operators are used to define atomic subspaces, as in Natural Population Analysis. The orthogonal projection operators are constructed with a recursive scheme. These operators are chemically relevant and obey a stockholder principle, familiar from the Hirshfeld-I partitioning of the electron density. The stockholder principle is extended to density matrices, where the orthogonal projectors are considered to be atomic fractions of the summed contributions. All calculations are performed as matrix manipulations in one-electron Hilbert space. Mathematical proofs and numerical evidence concerning this recursive scheme are provided in the present paper. The advantages associated with the use of these stockholder projection operators are examined with respect to covalent bond orders, bond polarization, and transferability.

  12. Quality status display for a vibration welding process

    DOEpatents

    Spicer, John Patrick; Abell, Jeffrey A.; Wincek, Michael Anthony; Chakraborty, Debejyo; Bracey, Jennifer; Wang, Hui; Tavora, Peter W.; Davis, Jeffrey S.; Hutchinson, Daniel C.; Reardon, Ronald L.; Utz, Shawn

    2017-03-28

    A system includes a host machine and a status projector. The host machine is in electrical communication with a collection of sensors and with a welding controller that generates control signals for controlling the welding horn. The host machine is configured to execute a method to thereby process the sensory and control signals, as well as predict a quality status of a weld that is formed using the welding horn, including identifying any suspect welds. The host machine then activates the status projector to illuminate the suspect welds. This may occur directly on the welds using a laser projector, or on a surface of the work piece in proximity to the welds. The system and method may be used in the ultrasonic welding of battery tabs of a multi-cell battery pack in a particular embodiment. The welding horn and welding controller may also be part of the system.

  13. Wide-field depth-sectioning fluorescence microscopy using projector-generated patterned illumination

    NASA Astrophysics Data System (ADS)

    Delica, Serafin; Mar Blanca, Carlo

    2007-10-01

    We present a simple and cost-effective wide-field, depth-sectioning, fluorescence microscope utilizing a commercial multimedia projector to generate excitation patterns on the sample. Highly resolved optical sections of fluorescent pollen grains at 1.9 μm axial resolution are constructed using the structured illumination technique. This requires grid excitation patterns to be scanned across the sample, which is straightforwardly implemented by creating slideshows of gratings at different phases, projecting them onto the sample, and synchronizing camera acquisition with slide transition. In addition to rapid dynamic pattern generation, the projector provides high illumination power and spectral excitation selectivity. We exploit these properties by imaging mouse neural cells in cultures multistained with Alexa 488 and Cy3. The spectral and structural neural information is effectively resolved in three dimensions. The flexibility and commercial availability of this light source is envisioned to open multidimensional imaging to a broader user base.
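
    The sectioning step described above can be sketched with the classic three-phase root-sum-square combination used in structured illumination: three images are taken with the grid shifted by a third of a period each, and only grid-modulated (in-focus) structure survives. The synthetic one-dimensional example is illustrative, not the authors' data.

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Three-phase structured-illumination sectioning: with the grid
    shifted by 0, 2*pi/3 and 4*pi/3 between exposures, the root-sum-
    square of pairwise differences keeps in-focus structure while any
    unmodulated (out-of-focus) background cancels."""
    i1, i2, i3 = (np.asarray(a, dtype=float) for a in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# Synthetic check: an in-focus object under three shifted grids plus a
# constant defocused background that should vanish in the section.
x = np.linspace(0.0, 4.0 * np.pi, 256)
obj = 1.0 + 0.5 * np.sin(0.5 * x)            # in-focus structure (positive)
bg = 3.0                                     # defocused, unmodulated light
imgs = [obj * (1.0 + np.cos(x + p)) + bg
        for p in (0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0)]
section = optical_section(*imgs)             # proportional to obj
```

    The projector slideshow in the paper supplies exactly these phase-shifted grid exposures, with camera acquisition synchronized to slide transitions.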

  14. A subjective evaluation of high-chroma color with wide color-gamut display

    NASA Astrophysics Data System (ADS)

    Kishimoto, Junko; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2009-01-01

    Displays are tending toward wider color gamuts, through multi-primary color displays, Adobe RGB, and so on, making it possible to display high-chroma colors. However, an image whose chroma has simply been expanded can look unnatural, and few appropriate gamut-mapping methods for expanded gamuts have been proposed. We are attempting a preferred expanded color reproduction on a wide-color-gamut display that utilizes high-chroma colors effectively. As a first step, we conducted an experiment to investigate the psychological effect of color schemes including highly saturated colors. We used the six-primary-color projector that we have developed for the presentation of test colors; its gamut volume in CIELAB space is about 1.8 times larger than that of a normal RGB projector. We conducted a subjective evaluation experiment using the SD (Semantic Differential) technique to quantify the psychological effect of high-chroma colors.

  15. Invisible Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Moderate-resolution Imaging Spectroradiometer's (MODIS') cloud detection capability is so sensitive that it can detect clouds that would be indistinguishable to the human eye. This pair of images highlights MODIS' ability to detect what scientists call 'sub-visible cirrus.' The image on top shows the scene using data collected in the visible part of the electromagnetic spectrum-the part our eyes can see. Clouds are apparent in the center and lower right of the image, while the rest of the image appears to be relatively clear. However, data collected at 1.38um (lower image) show that a thick layer of previously undetected cirrus clouds obscures the entire scene. These kinds of cirrus are called 'sub-visible' because they can't be detected using only visible light. MODIS' 1.38um channel detects electromagnetic radiation in the infrared region of the spectrum. These images were made from data collected on April 4, 2000. Image courtesy Mark Gray, MODIS Atmosphere Team

  16. An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface

    NASA Astrophysics Data System (ADS)

    Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc

    2010-12-01

    Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operation, perimeter, or harbour defense. Detection in infrared (IR) is challenging because a rough sea is seen as a dynamic background of moving objects with size order, shape, and temperature similar to those of the floating mine. In this paper we have applied a selection of background subtraction algorithms to the problem, and we show that the recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only little prior assumptions about the physical properties of the scene.
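
    For contrast with ViBe and behaviour subtraction, a conventional parametric baseline of the kind the paper outperforms is a single Gaussian per pixel. The sketch below is a minimal version of that baseline (threshold `k` and learning rate `alpha` are assumed values), not any of the algorithms evaluated in the paper.

```python
import numpy as np

def bg_subtract(mean, var, frame, alpha=0.02, k=2.5):
    """Per-pixel single-Gaussian background model: flag pixels more
    than k sigma from the running mean as foreground, and adapt the
    mean/variance only where the scene still looks like background."""
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    m = ~fg
    mean[m] += alpha * (frame[m] - mean[m])
    var[m] += alpha * ((frame[m] - mean[m]) ** 2 - var[m])
    return fg, mean, var

rng = np.random.default_rng(3)
mean, var = np.zeros((16, 16)), np.ones((16, 16))
for _ in range(100):                         # learn the sea-clutter statistics
    fg, mean, var = bg_subtract(mean, var, rng.normal(0.0, 1.0, (16, 16)))
frame = rng.normal(0.0, 1.0, (16, 16))
frame[8, 8] += 10.0                          # a small bright floating object
fg, mean, var = bg_subtract(mean, var, frame)
```

    Because this model treats each pixel independently in time, dynamic sea clutter with target-like statistics produces false alarms, which is exactly the weakness that spatio-temporal methods such as ViBe address.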

  17. Making methane visible

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Olofsson, Göran; Crill, Patrick; Bastviken, David

    2016-04-01

    Methane (CH4) is one of the most important greenhouse gases, and an important energy carrier in biogas and natural gas. Its large-scale emission patterns have been unpredictable and the source and sink distributions are poorly constrained. Remote assessment of CH4 with high sensitivity at a m2 spatial resolution would allow detailed mapping of the near-ground distribution and anthropogenic sources in landscapes but has hitherto not been possible. Here we show that CH4 gradients can be imaged on the

  18. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, Infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
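
    The computational core described above is winner-take-all block matching. The following is a plain NumPy sketch of SAD matching with an integral-image box filter; window size, disparity range, and the in-software formulation are assumptions, with none of the FPGA-specific streaming reproduced.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, half=2):
    """Winner-take-all stereo block matching with sum-of-absolute-
    differences (SAD) windows: for each candidate disparity d, shift
    the right image, box-filter the absolute differences with an
    integral image, and keep the lowest-cost d per pixel."""
    h, w = left.shape
    win = 2 * half + 1
    best = np.zeros((h, w), dtype=int)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left - np.roll(right, d, axis=1))
        # integral image turns each (win x win) window sum into 4 lookups
        p = np.pad(diff, ((half + 1, half), (half + 1, half)))
        c = p.cumsum(axis=0).cumsum(axis=1)
        sad = (c[win:, win:] - c[:-win, win:]
               - c[win:, :-win] + c[:-win, :-win])
        better = sad < best_cost
        best[better] = d
        best_cost[better] = sad[better]
    return best

# Right image is the left image shifted by 3 pixels, so the recovered
# disparity map should be 3 everywhere (np.roll wraps, keeping edges exact).
rng = np.random.default_rng(7)
left = rng.random((32, 32))
right = np.roll(left, -3, axis=1)
disp = sad_disparity(left, right)
```

    The in-stream FPGA pipeline computes the same per-disparity window costs, but incrementally along the sensor's raster scan so that no full-frame buffering is needed.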

  19. A Martin-Puplett cartridge FIR interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Roger J.; Penniman, Edwin E.; Jarboe, Thomas R.

    2004-10-01

    A compact prealigned Martin-Puplett interferometer (MPI) cartridge for plasma interferometry is described. The MPI cartridge groups all components of a MP interferometer, with the exception of the end mirror for the scene beam, on a stand-alone rigid platform. The interferometer system is completed by positioning a cartridge anywhere along and coaxial with the scene beam, considerably reducing the amount of effort in alignment over a discrete component layout. This allows the interferometer to be expanded to any number of interferometry chords consistent with optical access, limited only by the laser power. The cartridge interferometer has been successfully incorporated as a second chord on the Helicity Injected Torus II (HIT-II) far infrared interferometer system and a comparison with the discrete component system is presented. Given the utility and compactness of the cartridge, a possible design for a five-chord interferometer arrangement on the HIT-II device is described.

  20. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require such an imaging system to have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed around the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present details of the construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.

  1. Current LWIR HSI Remote Sensing Activities at Defence R&D Canada - Valcartier

    DTIC Science & Technology

    2009-10-01

    measures the IR radiation from a target scene which is optically combined onto a single detector out-of-phase with the IR radiation from a corresponding...Hyper-Cam-LW. The MODDIFS project involves the development of a leading-edge infrared (IR) hyperspectral sensor optimized for the standoff detection...essentially offer the optical subtraction capability of the CATSI system but at high spatial resolution using an MCT focal plane array of 8484

  2. Varieties of grapheme-colour synaesthesia: a new theory of phenomenological and behavioural differences.

    PubMed

    Ward, Jamie; Li, Ryan; Salih, Shireen; Sagiv, Noam

    2007-12-01

    Recent research has suggested that not all grapheme-colour synaesthetes are alike. One suggestion is that they can be divided, phenomenologically, in terms of whether the colours are experienced in external or internal space (projector-associator distinction). Another suggestion is that they can be divided according to whether it is the perceptual or conceptual attributes of a stimulus that is critical (higher-lower distinction). This study compares the behavioural performance of 7 projector and 7 associator synaesthetes. We demonstrate that this distinction does not map on to behavioural traits expected from the higher-lower distinction. We replicate previous research showing that projectors are faster at naming their synaesthetic colours than veridical colours, and that associators show the reverse profile. Synaesthetes who project colours into external space but not on to the surface of the grapheme behave like associators on this task. In a second task, graphemes presented briefly in the periphery are more likely to elicit reports of colour in projectors than associators, but the colours only tend to be accurate when the grapheme itself is also accurately identified. We propose an alternative model of individual differences in grapheme-colour synaesthesia that emphasises the role of different spatial reference frames in synaesthetic perception. In doing so, we attempt to bring the synaesthesia literature closer to current models of non-synaesthetic perception, attention and binding.

  3. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) that is currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  4. Identification, definition and mapping of terrestrial ecosystems in interior Alaska

    NASA Technical Reports Server (NTRS)

    Anderson, J. H. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. A reconstituted, simulated color-infrared print, enlarged to a scale of 1:250,000, was used to make a vegetation map of a 3,110 sq km area just west of Fairbanks, Alaska. Information was traced from the print, which comprised the southeastern part of ERTS-1 scene 1033-21011. A 1:1,000,000 scale color-infrared transparency of this scene, obtained from NASA, was used alongside the print as an aid in recognizing colors, color intensities and blends, and mosaics of different colors. Color units on the transparency and print were identified according to vegetation types using NASA air photos, U.S. Forest Service air photos, and the experience of the investigator. Five more or less pure colors were identified and associated with vegetation types. These colors were designated according to their appearances on the print: (1) orange for forest vegetation dominated by broad-leaved trees; (2) gray for forest vegetation dominated by needle-leaved trees; (3) violet for scrub vegetation; (4) light violet denoting herbaceous tundra vegetation; and (5) dull violet for muskeg vegetation. This study has shown, through close examination of the NASA transparency, that much more detailed vegetation, landscape, or ecosystem maps could be produced, if only spectral signatures could be consistently and reliably recognized and transferred to a map of suitable scale.

  5. Cloud properties inferred from 8-12 micron data

    NASA Technical Reports Server (NTRS)

    Strabala, Kathleen I.; Ackerman, Steven A.; Menzel, W. Paul

    1994-01-01

    A trispectral combination of observations at 8-, 11-, and 12-micron bands is suggested for detecting cloud and cloud properties in the infrared. Atmospheric ice and water vapor absorption peak in opposite halves of the window region, so that positive 8-minus-11-micron brightness temperature differences indicate cloud, while near-zero or negative differences indicate clear regions. The absorption coefficient for water increases more between 11 and 12 microns than between 8 and 11 microns, while for ice, the reverse is true. Cloud phase is determined by a scatter diagram of 8-minus-11-micron versus 11-minus-12-micron brightness temperature differences; ice cloud shows a slope greater than 1 and water cloud less than 1. The trispectral brightness temperature method was tested on high-resolution interferometer data, resulting in clear-cloud and cloud-phase delineation. Simulations using differing 8-micron bandwidths revealed no significant degradation of cloud property detection. Thus, the 8-micron bandwidth for future satellites can be selected based on the requirements of other applications, such as surface characterization studies. Application of the technique to current polar-orbiting High-Resolution Infrared Sounder (HIRS)-Advanced Very High Resolution Radiometer (AVHRR) datasets is constrained by the nonuniformity of the cloud scenes sensed within the large HIRS field of view. Analysis of MAS (MODIS Airborne Simulator) high-spatial-resolution (500 m) data with all three 8-, 11-, and 12-micron bands revealed sharp delineation of differing cloud and background scenes, from which a simple automated threshold technique was developed. Cloud phase, clear sky, and qualitative differences in cloud emissivity and cloud height were identified on a case study segment from 24 November 1991, consistent with the scene. More rigorous techniques would allow further cloud parameter clarification. The opportunities for global cloud delineation with the Moderate-Resolution Imaging Spectrometer (MODIS) appear excellent. The spectral selection, the spatial resolution, and the global coverage are all well suited for significant advances.
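
    The trispectral test described above can be sketched per pixel as follows. This is an illustrative reading of the decision rule, not the paper's calibrated algorithm; the 0.5 K cloud threshold is a hypothetical placeholder.

```python
# Illustrative sketch of the trispectral cloud test: classify one pixel
# from its 8-, 11- and 12-micron brightness temperatures (in kelvin).
# The cloud_thresh value is a hypothetical placeholder, not a
# calibrated threshold from the paper.

def classify_pixel(bt8, bt11, bt12, cloud_thresh=0.5):
    """Return 'clear', 'ice cloud' or 'water cloud' for one pixel."""
    d8_11 = bt8 - bt11     # positive -> cloud; near-zero/negative -> clear
    d11_12 = bt11 - bt12
    if d8_11 <= cloud_thresh:
        return "clear"
    # Slope of (8-11) vs (11-12) differences: >1 suggests ice, <1 water.
    if d11_12 == 0 or d8_11 / d11_12 > 1.0:
        return "ice cloud"
    return "water cloud"

print(classify_pixel(265.0, 264.8, 264.7))  # small differences: clear
print(classify_pixel(250.0, 246.0, 244.0))  # slope 2.0: ice cloud
print(classify_pixel(260.0, 257.0, 253.0))  # slope 0.75: water cloud
```

    In an operational mask the per-pixel result would be accumulated into a scene classification, as the MAS threshold technique in the abstract does.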

  6. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

    Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, the source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of the different source images but also represents their irregular regions well. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
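
    The SGWT decomposition is beyond a short sketch, but the bilateral-filter weighted-average idea can be illustrated directly on gray values. The window radius and sigmas below are illustrative choices, not the authors' settings, and the weight rule (smoothed magnitude) is a simplification of the paper's subband rule.

```python
# Minimal sketch of a bilateral-filter-based weighted-average fusion:
# each source's bilateral-filtered activity serves as its per-pixel
# weight. Images are small lists-of-lists of gray values.
import math

def bilateral(img, sigma_s=1.0, sigma_r=10.0, radius=1):
    """Edge-preserving smoothing: spatial and range Gaussian weights."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ws = math.exp(-(dx*dx + dy*dy) / (2 * sigma_s**2))
                        wr = math.exp(-((img[yy][xx] - img[y][x])**2) / (2 * sigma_r**2))
                        num += ws * wr * img[yy][xx]
                        den += ws * wr
            out[y][x] = num / den
    return out

def fuse(ir, vis):
    """Weighted average: smoothed local magnitude acts as the weight."""
    a, b = bilateral(ir), bilateral(vis)
    return [[(abs(a[y][x]) * ir[y][x] + abs(b[y][x]) * vis[y][x]) /
             (abs(a[y][x]) + abs(b[y][x]) + 1e-9)
             for x in range(len(ir[0]))] for y in range(len(ir))]

ir  = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]   # hot spot in IR
vis = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]    # flat visible patch
fused = fuse(ir, vis)
```

    The bright IR spot dominates the fused center pixel while the flat visible background dominates elsewhere, which is the spatial-consistency behavior the weighting is meant to capture.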

  7. Infrared spectroscopy and spectroscopic imaging in forensic science.

    PubMed

    Ewing, Andrew V; Kazarian, Sergei G

    2017-01-16

    Infrared spectroscopy and spectroscopic imaging are robust, label-free and inherently non-destructive methods with a high chemical specificity and sensitivity that are frequently employed in forensic science research and practice. This review aims to discuss the applications and recent developments of these methodologies in this field. Furthermore, the use of recently emerged Fourier transform infrared (FT-IR) spectroscopic imaging in transmission, external reflection and attenuated total reflection (ATR) modes is summarised with relevance and potential for forensic science applications. This spectroscopic imaging approach provides the opportunity to obtain the chemical composition of fingermarks and information about possible contaminants deposited at a crime scene. Research that demonstrates the great potential of these techniques for the analysis of fingerprint residues, explosive materials and counterfeit drugs will be reviewed. The implications of this research for the examination of different materials are considered, along with an outlook on possible future research avenues for the application of vibrational spectroscopic methods to the analysis of forensic samples.

  8. Stripe nonuniformity correction for infrared imaging system based on single image optimization

    NASA Astrophysics Data System (ADS)

    Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai

    2018-06-01

    Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. Firstly, the gray distribution of the stripe nonuniformity is analyzed and a penalty function is constructed from the difference between the horizontal and vertical gradients. Together with a weight function, the penalty function is optimized to obtain the corrected image. Compared with other single-frame approaches, experiments show that the proposed method performs better in both subjective and objective analysis, does less damage to edges and details, and runs faster. We have also discussed the differences between the proposed idea and multi-frame methods. Our method has also been successfully applied in a hardware system.
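
    The general idea of single-image destriping can be sketched with a much simpler stand-in than the paper's penalty-function optimization: estimate a per-column offset as the deviation of each column mean from a smoothed version of the column means, then subtract it. The moving-average radius is an illustrative choice.

```python
# Simplified destriping sketch (NOT the paper's gradient-penalty
# method): vertical-stripe offsets are estimated from column means
# and removed. Images are lists-of-lists of gray values.

def destripe(img, radius=1):
    h, w = len(img), len(img[0])
    col_means = [sum(img[y][x] for y in range(h)) / h for x in range(w)]
    corrected = []
    for y in range(h):
        row = []
        for x in range(w):
            lo, hi = max(0, x - radius), min(w, x + radius + 1)
            smooth = sum(col_means[lo:hi]) / (hi - lo)   # local mean of column means
            row.append(img[y][x] - (col_means[x] - smooth))
        corrected.append(row)
    return corrected

# A flat scene with a +5 offset on the middle column (a vertical stripe):
noisy = [[20, 25, 20], [20, 25, 20], [20, 25, 20]]
clean = destripe(noisy)
```

    On the toy frame the column-to-column jump shrinks from 5 gray levels to under 1, at the cost of slightly biasing true vertical scene structure, which is exactly the damage to edges and details that the paper's differential constraint is designed to avoid.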

  9. Infrared Thermography Approach for Effective Shielding Area of Field Smoke Based on Background Subtraction and Transmittance Interpolation.

    PubMed

    Tang, Runze; Zhang, Tonglai; Chen, Yongpeng; Liang, Hao; Li, Bingyang; Zhou, Zunning

    2018-05-06

    Effective shielding area is a crucial indicator for evaluating the infrared smoke-obscuring effectiveness on the battlefield. The conventional methods for assessing the shielding area of a smoke screen are time-consuming and labor-intensive, in addition to lacking precision. Therefore, an efficient and convincing technique for testing the effective shielding area of the smoke screen has great potential benefits in smoke screen applications in the field trial. In this study, a thermal infrared sensor with a mid-wavelength infrared (MWIR) range of 3 to 5 μm was first used to capture images of the target scene, both clear and obscured by smoke, at regular intervals. Background subtraction, as used in motion detection, was then applied to obtain the contour of the smoke cloud at each frame. The smoke transmittance at each pixel within the smoke contour was interpolated from the data collected from the image. Finally, the smoke effective shielding area was calculated based on the accumulation of the effective shielding pixel points. One advantage of this approach is that it uses only one thermal infrared sensor without any additional equipment in the field trial, which significantly improves efficiency and convenience. Experiments have been carried out to demonstrate that this approach can determine the effective shielding area of field infrared smoke both practically and efficiently.
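
    The final accumulation step can be sketched as a threshold-and-count over a per-pixel transmittance map (which the paper obtains from background subtraction and interpolation; here the map is taken as given). The 0.5 transmittance threshold and the pixel footprint are hypothetical values for illustration.

```python
# Sketch of the effective-shielding-area accumulation: sum the ground
# footprint of pixels whose smoke transmittance falls below a threshold.
# Threshold and pixel footprint are illustrative, not the paper's values.

def effective_area(tau_map, pixel_area_m2, tau_threshold=0.5):
    """Total area (m^2) of pixels counted as effectively shielded."""
    count = sum(1 for row in tau_map for tau in row if tau < tau_threshold)
    return count * pixel_area_m2

tau = [[0.90, 0.40, 0.20],
       [0.80, 0.30, 0.10],
       [0.95, 0.70, 0.60]]
area = effective_area(tau, pixel_area_m2=0.25)  # each pixel covers 0.25 m^2
```

    Four of the nine pixels fall below the threshold, giving an effective shielding area of 1.0 m^2 for this toy map.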

  10. Infrared thermography: A non-invasive window into thermal physiology.

    PubMed

    Tattersall, Glenn J

    2016-12-01

    Infrared thermography is a non-invasive technique that measures mid- to long-wave infrared radiation emanating from all objects and converts this to temperature. As an imaging technique, the value of modern infrared thermography is its ability to produce a digitized image or high-speed video rendering a thermal map of the scene in false colour. Since temperature is an important environmental parameter influencing animal physiology, and metabolic heat production is an energetically expensive process, measuring temperature and energy exchange in animals is critical to understanding physiology, especially under field conditions. As a non-contact approach, infrared thermography provides a non-invasive complement to physiological data gathering. One caveat, however, is that only surface temperatures are measured, which guides much research to those thermal events occurring at the skin and insulating regions of the body. As an imaging technique, infrared thermal imaging is also subject to certain uncertainties that require physical modelling, which is typically done via built-in software approaches. Infrared thermal imaging has enabled insights into the comparative physiology of phenomena ranging from thermogenesis and peripheral blood flow adjustments to evaporative cooling and respiratory physiology. In this review, I provide background and guidelines for the use of thermal imaging, primarily aimed at field physiologists and biologists interested in thermal biology. I also discuss some of the better-known approaches and discoveries revealed from using thermal imaging, with the objective of encouraging more quantitative assessment. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.

  12. Fast, accurate, small-scale 3D scene capture using a low-cost depth sensor

    PubMed Central

    Carey, Nicole; Nagpal, Radhika; Werfel, Justin

    2017-01-01

    Commercially available depth sensing devices are primarily designed for domains that are either macroscopic or static. We develop a solution for fast microscale 3D reconstruction using off-the-shelf components. By the addition of lenses, precise calibration of camera internals and positioning, and development of bespoke software, we turn an infrared depth sensor designed for human-scale motion and object detection into a device with mm-level accuracy capable of recording at up to 30 Hz. PMID:28758159

  13. Projection display market trends

    NASA Astrophysics Data System (ADS)

    Mentley, David E.

    1997-05-01

    The projection display industry is now a multi-billion dollar market comprising an expanding variety of technologies and applications. Growth is being driven by a combination of high volume consumer products and high value business demand. After many years of marginal, but steady performance improvements, essentially all types of projectors have crossed the threshold of acceptability and are now facing accelerated continuing growth. Overall worldwide unit sales of all types of projection displays for all applications will nearly double from 1.6 million units in 1996 to 2.8 million units in 2002. By value at the end user price, the global projector market will grow modestly from 6.3 billion dollars in 1996 to 7.7 billion dollars in 2002. Consumer television will represent the largest share of unit consumption over this time period; in 1996, this application represents 72 percent of the total unit volume. The second major application category for projection displays is the business or presentation projector, representing only 14 percent of the unit shipment total in 1996, but 50 percent of the value.

  14. Full-color laser cathode ray tube (L-CRT) projector

    NASA Astrophysics Data System (ADS)

    Kozlovskiy, Vladimir; Nasibov, Alexander S.; Popov, Yuri M.; Reznikov, Parvel V.; Skasyrsky, Yan K.

    1995-04-01

    A full color TV projector based on three laser cathode-ray tubes (L-CRT) is described. A water-cooled laser screen (LS) is the radiating element of the L-CRT. We have produced the three main colors (blue, green and red) by using LSs made of three II-VI compounds: ZnSe (λ = 475 nm), CdS (λ = 530 nm) and ZnCdSe (λ = 630 nm). The total light flow reaches 1500 lm, and the number of elements per line is not less than 1000. The LS efficiency may be about 10 lm/W. In our experiments we have tested new electron optics: -(30-37) kV is applied to the cathode unit of the electron gun; the anode of the e-gun and the e-beam intensity modulator are at low potential; the LS is at a potential of +(30-37) kV. The accelerating voltage is thus divided into two parts, and this enables us to diminish the size and weight of the projector.

  15. Reconstruction method for fringe projection profilometry based on light beams.

    PubMed

    Li, Xuexing; Zhang, Zhijiang; Yang, Chen

    2016-12-01

    A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and setting reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method that does not require the projector's parameters is proposed, and only two reference planes are introduced. A series of light beams, determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams, determined by the camera model, are used to calculate the 3D coordinates of the reconstruction points. Furthermore, a bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach; the measurement accuracy can reach about 0.0454 mm.
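
    The geometric core of such a light-beam method is intersecting two 3D lines: the projected beam, fixed by its matched points on the two reference planes, and the camera ray through the observed pixel. A standard closest-point-between-lines computation sketches this step; the coordinates below are invented for illustration, not taken from the paper.

```python
# Sketch of the beam/camera-ray intersection step: return the midpoint
# of the shortest segment between lines p1 + t*d1 and p2 + s*d2.

def closest_point_between_lines(p1, d1, p2, d2):
    """Solve the two perpendicularity conditions for t and s."""
    a = sum(x * x for x in d1)
    b = sum(x * y for x, y in zip(d1, d2))
    c = sum(x * x for x in d2)
    w = [x - y for x, y in zip(p1, p2)]
    d = sum(x * y for x, y in zip(d1, w))
    e = sum(x * y for x, y in zip(d2, w))
    den = a * c - b * b          # zero only for parallel lines
    t = (b * e - c * d) / den
    s = (a * e - b * d) / den
    q1 = [p + t * v for p, v in zip(p1, d1)]
    q2 = [p + s * v for p, v in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Beam through matched points on reference planes z = 0 and z = 100:
beam_p, beam_d = [10.0, 5.0, 0.0], [0.1, 0.0, 1.0]
# Camera ray from the optical centre (illustrative values):
cam_p, cam_d = [0.0, 0.0, 50.0], [0.3, 0.1, 0.0]
point = closest_point_between_lines(beam_p, beam_d, cam_p, cam_d)
```

    When the two lines truly intersect, as in this constructed example, the midpoint is the intersection itself; with measurement noise it becomes a least-squares compromise between the beam and the camera ray.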

  16. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are decided by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projection technique.

  17. Reporting guide for laser-light shows and displays (21 CFR 1002)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The guide is to be used for reporting laser-light shows or displays incorporating Class IIIb or Class IV lasers only. Separate reports are not required for shows or displays that incorporate Class I, IIa, II, or IIIa laser-projection systems. Such show descriptions must be included in the user instructions and the report for the laser projector. Laser projectors used in any light shows or displays, regardless of the class of the projector, must be certified by the manufacturer and reported using the guide titled Guide for Preparing Initial Reports and Model Change Reports on Lasers and Products Containing Lasers, HHS Publication FDA 86-8259. These guides assist manufacturers in providing the information that the Center for Devices and Radiological Health (CDRH) needs to determine how laser-light-show projectors and laser-light shows comply with the Federal standard for laser products (21 CFR 1040.10 and 1040.11) and with the conditions of an approved variance.

  18. Recent advancements in system design for miniaturized MEMS-based laser projectors

    NASA Astrophysics Data System (ADS)

    Scholles, M.; Frommhagen, K.; Gerwig, Ch.; Knobbe, J.; Lakner, H.; Schlebusch, D.; Schwarzenberg, M.; Vogel, U.

    2008-02-01

    Laser projection systems that use the flying-spot principle and are based on a single MEMS micro scanning mirror are a very promising way to build ultra-compact projectors that may fit into mobile devices. First demonstrators showing the feasibility of this approach and the applicability of the micro scanning mirror developed by Fraunhofer IPMS for these systems have already been presented. However, a number of items still have to be resolved before miniaturized laser projectors are ready for the market. This contribution describes progress on several different items, each of major importance for laser projection systems. First of all, the overall performance of the system has been increased from VGA resolution to SVGA (800×600 pixels), with easy connection to a PC via a DVI interface or use of the projector as an embedded system with a direct camera interface. Secondly, the degree of integration of the electronics has been enhanced by the design of an application-specific analog front-end IC for the micro scanning mirror. It has been fabricated in a special high-voltage technology and not only generates driving signals for the scanning mirror with amplitudes of up to 200 V but also integrates position detection of the mirror by several methods. Thirdly, first results concerning speckle reduction have been achieved, which is necessary for the generation of high-quality images. Other aspects include laser modulation and solutions for projection on tilted screens, which is possible because of the unlimited depth of focus.

  19. Liquid crystal light valve technologies for display applications

    NASA Astrophysics Data System (ADS)

    Kikuchi, Hiroshi; Takizawa, Kuniharu

    2001-11-01

    The liquid crystal (LC) light valve, which is a spatial light modulator that uses LC material, is a very important device in the areas of display development, image processing, optical computing, holograms, etc. In particular, there have been dramatic developments in the past few years in the application of the LC light valve to projectors and other display technologies. Various LC operating modes have been developed, including thin film transistors, MOS-FETs and other active matrix drive techniques to meet the requirements for higher resolution, and substantial improvements have been achieved in the performance of optical systems, resulting in brighter display images. Given this background, the number of applications for the LC light valve has greatly increased. The resolution has increased from QVGA (320 x 240) to QXGA (2048 x 1536) or even a super-high resolution of eight million pixels. In the area of optical output, projectors of 600 to 13,000 lm are now available, and they are used for presentations, home theatres, electronic cinema and other diverse applications. Projectors using the LC light valve can display high-resolution images on large screens. They are now expected to be developed further as part of hyper-reality visual systems. This paper provides an overview of the need for large-screen displays, human factors related to visual effects, the way in which LC light valves are applied to projectors, improvements in moving picture quality, and the results of the latest studies that have been made to increase the quality of still and moving images.

  20. Superluminescent light emitting diodes: the best out of two worlds

    NASA Astrophysics Data System (ADS)

    Rossetti, M.; Napierala, J.; Matuschek, N.; Achatz, U.; Duelk, M.; Vélez, C.; Castiglia, A.; Grandjean, N.; Dorsaz, J.; Feltin, E.

    2012-03-01

    Since pico-projectors started to become the next electronic "must-have" gadget, experts have been discussing which light-source technology best suits the three major projection approaches for the optical scanning module: digital light processing, liquid crystal on silicon and laser beam steering. Both light-source technologies used so far have distinct advantages and disadvantages. Though laser-based pico-projectors are focus-free and deliver a wider color gamut, their major disadvantages are speckle noise, cost and safety issues. In contrast, projectors based on cheaper Light Emitting Diodes (LEDs) as the light source are criticized for a lack of brightness and for having limited focus. Superluminescent Light Emitting Diodes (SLEDs) are temporally incoherent and spatially coherent light sources merging in one technology the advantages of both Laser Diodes (LDs) and LEDs. With almost no visible speckle noise, focus-free operation and potentially the same color gamut as LDs, SLEDs could answer the question of which light source to use in future projector applications. In this quest for the best light source, we realized visible SLEDs emitting in both the red and blue spectral regions. While the technology required for the realization of red emitters is already well established, the III-nitride compounds required for blue emission have undergone major development only relatively recently, and the technology is still maturing. The present paper is a review of the status of development reached for blue superluminescent diodes based on the GaN material system.

  1. Real-time classification of vehicles by type within infrared imagery

    NASA Astrophysics Data System (ADS)

    Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.

    2016-10-01

    Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
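
    The "regular Kalman filter based tracking" mentioned above can be sketched with a minimal constant-velocity filter along one axis; a full 3D tracker would run one such state per coordinate or a joint six-state filter. The process/measurement noise values and the measurement sequence are illustrative, not from the paper.

```python
# Minimal 1D constant-velocity Kalman filter as a tracking sketch.
# State is (position x, velocity v); only position is measured.

def kalman_1d(measurements, dt=1.0, q=0.01, r=1.0):
    x, v = measurements[0], 0.0          # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    track = []
    for z in measurements:
        # Predict with the constant-velocity model x' = x + dt*v.
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S
        resid = z - x
        x, v = x + k0 * resid, v + k1 * resid
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        track.append(x)
    return track

# A target advancing roughly 2 m per frame with noisy position fixes:
zs = [0.0, 2.1, 3.9, 6.2, 7.8, 10.1]
est = kalman_1d(zs)
```

    After a few frames the velocity estimate converges near 2 m/frame and the filtered positions smooth the measurement jitter, which is the behavior exploited for trajectory tracking in the scenarios above.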

  2. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric average of the input visual and infrared gray values and a weighted average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method can achieve a near-natural appearance of the fused image, and it enhances color contrast and highlights bright infrared objects when compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to realize in real-time processing, so it is quite suitable for nighttime imaging apparatus.
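
    A mapping in the spirit described above can be sketched per pixel: the geometric mean of the visual and IR gray levels sets the base luminance, and a weighted IR/visual difference pushes IR-bright objects toward warm tones. The weight below is an arbitrary placeholder, not one of the paper's boundary-condition-derived parameters.

```python
# Illustrative false-color mapping sketch (NOT the paper's exact model):
# base luminance = geometric mean; difference term tints R and B.
import math

def map_pixel(vis, ir, w=0.5):
    """Map one (vis, ir) gray pair in 0..255 to an (R, G, B) triple."""
    base = math.sqrt(vis * ir)            # geometric-average luminance
    diff = w * (ir - vis)                 # enhanced IR/visual difference
    clamp = lambda v: max(0, min(255, int(round(v))))
    return (clamp(base + diff), clamp(base), clamp(base - diff))

cold = map_pixel(vis=120, ir=40)    # IR-dark pixel: bluish
hot  = map_pixel(vis=60, ir=220)    # IR-bright pixel: reddish
```

    Per-pixel operations like this are what keep such mappings cheap enough for the real-time nighttime processing the abstract targets.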

  3. Identification of recently handled materials by analysis of latent human fingerprints using infrared spectromicroscopy.

    PubMed

    Grant, Ashleigh; Wilkinson, T J; Holman, Derek R; Martin, Michael C

    2005-09-01

    Analysis of fingerprints has predominantly focused on matching the pattern of ridges to a specific person as a form of identification. The present work focuses on identifying extrinsic materials that are left within a person's fingerprint after recent handling of such materials. Specifically, we employed infrared spectromicroscopy to locate and positively identify microscopic particles from a mixture of common materials in the latent human fingerprints of volunteer subjects. We were able to find and correctly identify all test substances based on their unique infrared spectral signatures. Spectral imaging is demonstrated as a method for automating recognition of specific substances in a fingerprint. We also demonstrate the use of attenuated total reflectance (ATR) and synchrotron-based infrared spectromicroscopy for obtaining high-quality spectra from particles that were too thick or too small, respectively, for reflection/absorption measurements. We believe the application of this rapid, nondestructive analytical technique to the forensic study of latent human fingerprints has the potential to add a new layer of information available to investigators. Using fingerprints to not only identify who was present at a crime scene, but also to link who was handling key materials, will be a powerful investigative tool.

  4. Infrared Thermal Imaging System on a Mobile Phone

    PubMed Central

    Lee, Fu-Feng; Chen, Feng; Liu, Jing

    2015-01-01

    A novel concept for a pervasively available, low-cost infrared thermal imaging system launched on a mobile phone (MTIS) was proposed and demonstrated in this article. Through a review of the evolutionary development of milestone technologies in the area, it can be found that portable and low-cost designs will become the mainstream of thermal imagers for civilian purposes. As a representative trial towards this important goal, an MTIS consisting of a thermal infrared module (TIM) and a mobile phone with embedded exclusive software (IRAPP) was presented. The basic strategy for the TIM construction is illustrated, including sensor adoption and optical specification. The user-oriented software was developed in the Android environment in consideration of its popularity and expandability. Computational algorithms with non-uniformity correction and scene-change detection are established to optimize the imaging quality and efficiency of the TIM. The performance experiments and analysis indicated that the currently available detection distance for the MTIS is about 29 m. Furthermore, some family-targeted uses enabled by the MTIS are also outlined, such as sudden infant death syndrome (SIDS) prevention. This work suggests a ubiquitous way of significantly extending thermal infrared imaging into rather wide areas, especially health care, in the coming time. PMID:25942639
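
    The non-uniformity correction (NUC) mentioned above is not specified in the abstract; a classic two-point NUC, which solves per-pixel gain and offset from frames of two uniform reference temperatures, is sketched here as one common approach. All values are illustrative.

```python
# Classic two-point NUC sketch: per-pixel (gain, offset) such that
# gain * raw + offset maps detector counts to scene temperature.

def two_point_nuc(cold_frame, hot_frame, t_cold, t_hot):
    """Solve per-pixel coefficients from two uniform reference frames."""
    coeffs = []
    for c_row, h_row in zip(cold_frame, hot_frame):
        row = []
        for c, h in zip(c_row, h_row):
            gain = (t_hot - t_cold) / (h - c)
            row.append((gain, t_cold - gain * c))
        coeffs.append(row)
    return coeffs

def correct(frame, coeffs):
    """Apply the per-pixel linear correction to a raw frame."""
    return [[g * v + o for v, (g, o) in zip(f_row, c_row)]
            for f_row, c_row in zip(frame, coeffs)]

# Two pixels with different responses viewing uniform 20 C and 40 C scenes:
cold = [[100, 90]]
hot  = [[300, 330]]
coeffs = two_point_nuc(cold, hot, 20.0, 40.0)
flat = correct([[200, 210]], coeffs)   # a uniform 30 C scene, raw counts
```

    After correction both pixels report the same 30 C value despite their different raw responses, which is the fixed-pattern-noise removal the abstract's algorithm is there to provide.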

  5. Optical characterization of the InFocus TVT-6000 liquid crystal television (LCTV) using custom drive electronics

    NASA Astrophysics Data System (ADS)

    Duffey, Jason N.; Jones, Brian K.; Loudin, Jeffrey A.; Booth, Joseph J.

    1995-03-01

    Liquid crystal televisions are popular low-cost spatial light modulators. One LCTV of interest is found in the InFocus TVT-6000 television projector. A wavefront splitting interferometer has been constructed and analyzed for measuring the complex characteristics of these modulators, including phase and amplitude coupling. The results of this evaluation using the TVT-6000 projector drive electronics have been presented in a previous work. This work will present results of the complex characterizations of these modulators using custom drive electronics.

  6. Apollo - LOLA Project

    NASA Image and Video Library

    1961-12-05

    Project LOLA. Test subject sitting at the controls: Project LOLA, or Lunar Orbit and Landing Approach, was a simulator built at Langley to study problems related to landing on the lunar surface. It was a complex project that cost nearly $2 million. James Hansen wrote: "This simulator was designed to provide a pilot with a detailed visual encounter with the lunar surface; the machine consisted primarily of a cockpit, a closed-circuit TV system, and four large murals or scale models representing portions of the lunar surface as seen from various altitudes. The pilot in the cockpit moved along a track past these murals which would accustom him to the visual cues for controlling a spacecraft in the vicinity of the moon. Unfortunately, such a simulation--although great fun and quite aesthetic--was not helpful because flight in lunar orbit posed no special problems other than the rendezvous with the LEM, which the device did not simulate. Not long after the end of Apollo, the expensive machine was dismantled." (p. 379) Ellis J. White wrote in his paper, "Discussion of Three Typical Langley Research Center Simulation Programs": "A typical mission would start with the first cart positioned on model 1 for the translunar approach and orbit establishment. After starting the descent, the second cart is readied on model 2 and, at the proper time, when superposition occurs, the pilot's scene is switched from model 1 to model 2. Then cart 1 is moved to and readied on model 3. The procedure continues until an altitude of 150 feet is obtained. The cabin of the LM vehicle has four windows which represent a 45 degree field of view. The projection screens in front of each window represent 65 degrees which allows limited head motion before the edges of the display can be seen. The lunar scene is presented to the pilot by rear projection on the screens with four Schmidt television projectors.
The attitude orientation of the vehicle is represented by changing the lunar scene through the portholes determined by the scan pattern of four orthicons. The stars are front projected onto the upper three screens with a four-axis starfield generation (starball) mounted over the cabin and there is a separate starball for the low window." -- Published in James R. Hansen, Spaceflight Revolution: NASA Langley Research Center From Sputnik to Apollo (Washington: NASA, 1995), p. 379; Ellis J. White, "Discussion of Three Typical Langley Research Center Simulation Programs," paper presented at the Eastern Simulation Council (EAI's Princeton Computation Center), Princeton, NJ, October 20, 1966.

  7. Project LOLA or Lunar Orbit and Landing Approach was a simulator built at Langley

    NASA Image and Video Library

    1961-07-23

    Test subject sitting at the controls: Project LOLA or Lunar Orbit and Landing Approach was a simulator built at Langley to study problems related to landing on the lunar surface. It was a complex project that cost nearly $2 million. James Hansen wrote: "This simulator was designed to provide a pilot with a detailed visual encounter with the lunar surface; the machine consisted primarily of a cockpit, a closed-circuit TV system, and four large murals or scale models representing portions of the lunar surface as seen from various altitudes. The pilot in the cockpit moved along a track past these murals which would accustom him to the visual cues for controlling a spacecraft in the vicinity of the moon. Unfortunately, such a simulation--although great fun and quite aesthetic--was not helpful because flight in lunar orbit posed no special problems other than the rendezvous with the LEM, which the device did not simulate. Not long after the end of Apollo, the expensive machine was dismantled." (p. 379) Ellis J. White further described this simulator in his paper, "Discussion of Three Typical Langley Research Center Simulation Programs," (Paper presented at the Eastern Simulation Council (EAI's Princeton Computation Center), Princeton, NJ, October 20, 1966.) "A typical mission would start with the first cart positioned on model 1 for the translunar approach and orbit establishment. After starting the descent, the second cart is readied on model 2 and, at the proper time, when superposition occurs, the pilot's scene is switched from model 1 to model 2. Then cart 1 is moved to and readied on model 3. The procedure continues until an altitude of 150 feet is obtained. The cabin of the LM vehicle has four windows which represent a 45 degree field of view. The projection screens in front of each window represent 65 degrees which allows limited head motion before the edges of the display can be seen.
The lunar scene is presented to the pilot by rear projection on the screens with four Schmidt television projectors. The attitude orientation of the vehicle is represented by changing the lunar scene through the portholes determined by the scan pattern of four orthicons. The stars are front projected onto the upper three screens with a four-axis starfield generation (starball) mounted over the cabin and there is a separate starball for the low window." -- Published in James R. Hansen, Spaceflight Revolution: NASA Langley Research Center From Sputnik to Apollo, (Washington: NASA, 1995), p. 379.

  8. A cloud detection scheme for the Chinese Carbon Dioxide Observation Satellite (TANSAT)

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Guo, Zheng; Huang, Yipeng; Fan, Hongjie; Li, Wanbiao

    2017-01-01

    Cloud detection is an essential preprocessing step for retrieving carbon dioxide from satellite observations of reflected sunlight. During the pre-launch study of the Chinese Carbon Dioxide Observation Satellite (TANSAT), a cloud-screening scheme was presented for the Cloud and Aerosol Polarization Imager (CAPI), which only performs measurements in five channels located in the visible to near-infrared regions of the spectrum. The scheme for CAPI, based on previous cloud-screening algorithms, defines a method to regroup individual threshold tests for each pixel in a scene according to the derived clear confidence level. This scheme is proven to be more effective for sensors with few channels. The work relies upon radiance data from the Visible and Infrared Radiometer (VIRR) onboard the Chinese FengYun-3A Polar-orbiting Meteorological Satellite (FY-3A), which uses four wavebands similar to those of CAPI and can serve as a proxy for its measurements. The scheme has been applied to a number of VIRR scenes over four target areas (desert, snow, ocean, forest) for all seasons. To assess the screening results, comparisons against the cloud-screening product from MODIS are made. The evaluation suggests that the proposed scheme inherits the advantages of schemes described in previous publications and shows improved cloud-screening results. A seasonal analysis reveals that this scheme performs better during warmer seasons, except for observations over oceans, where results are much better in colder seasons.
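The abstract does not give the exact regrouping rule, but a common way to combine individual threshold tests into a single clear confidence level (used, for example, in MODIS-style cloud masks) is a geometric mean of per-test confidences. A minimal sketch, with hypothetical cutoff values:

```python
import numpy as np

def clear_confidence(test_confidences):
    """Combine per-test clear-sky confidences (each in [0, 1]) into one
    level via the geometric mean, as in MODIS-style cloud masks."""
    c = np.asarray(test_confidences, dtype=float)
    return float(np.prod(c) ** (1.0 / c.size))

def classify(conf, clear_thresh=0.95, cloudy_thresh=0.66):
    """Regroup a pixel by its derived confidence level (cutoffs are
    illustrative, not taken from the CAPI scheme)."""
    if conf >= clear_thresh:
        return "confident clear"
    elif conf >= cloudy_thresh:
        return "probably clear"
    return "cloudy"
```

The geometric mean makes the combined level conservative: a single test that strongly indicates cloud (confidence near 0) pulls the whole pixel toward the cloudy class.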

  9. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce video with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps). Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames, and in fields like surveillance systems or crowd analysis this must be achieved in real time. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and on its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial conducted in August 2015.

  10. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the veracity of the system. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address these defects, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on the enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes dynamic modeling of IR smoke with higher veracity.

  11. Noise and contrast comparison of visual and infrared images of hazards as seen inside an automobile

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.; Bryk, Darryl; Sohn, Eui J.; Lane, Kimberly; Bednarz, David; Jusela, Daniel; Ebenstein, Samuel; Smith, Gregory H.; Rodin, Yelena; Rankin, James S., II; Samman, Amer M.

    2000-06-01

    The purpose of this experiment was to quantitatively measure driver performance for detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for the purpose of an enhanced vision system to go inside the driver compartment. Visible and IR road imagery obtained was displayed on a large screen and on a PC monitor and subject response times were recorded. Based on the response time, detection probabilities were computed and compared to the known time of occurrence of a driving hazard. The goal was to see what combinations of sensor, contrast and noise enable subjects to have a higher detection probability of potential driving hazards.

  12. 3D thermography in non-destructive testing of composite structures

    NASA Astrophysics Data System (ADS)

    Hellstein, Piotr; Szwedo, Mariusz

    2016-12-01

    The combination of 3D scanners and infrared cameras has led to the introduction of 3D thermography. Such analysis produces results in the form of three-dimensional thermograms, where the temperatures are mapped onto a 3D model reconstruction of the inspected object. Previous work in the field of 3D thermography has focused on its utility in passive thermography inspections. The authors propose a new real-time 3D temperature mapping method which, for the first time, can be applied to active thermography analyses. All steps required to utilise 3D thermography are discussed, starting from acquisition of three-dimensional and infrared data, going through image processing and scene reconstruction, and finishing with thermal projection and ray-tracing visualisation techniques. The application of the developed method was tested during diagnosis of several industrial composite structures: boats, planes and wind turbine blades.

  13. The AEDC aerospace chamber 7V: An advanced test capability for infrared surveillance and seeker sensors

    NASA Technical Reports Server (NTRS)

    Simpson, W. R.

    1994-01-01

    An advanced sensor test capability is now operational at the Air Force Arnold Engineering Development Center (AEDC) for calibration and performance characterization of infrared sensors. This facility, known as the 7V, is part of a broad range of test capabilities under development at AEDC to provide complete ground test support to the sensor community for large-aperture surveillance sensors and kinetic kill interceptors. The 7V is a state-of-the-art cryo/vacuum facility providing calibration and mission simulation against space backgrounds. Key features of the facility include high-fidelity scene simulation with precision track accuracy and in-situ target monitoring, diffraction limited optical system, NIST traceable broadband and spectral radiometric calibration, outstanding jitter control, environmental systems for 20 K, high-vacuum, low-background simulation, and an advanced data acquisition system.

  14. LWIR pupil imaging and prospects for background compensation

    NASA Astrophysics Data System (ADS)

    LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg

    2015-08-01

    A previous paper described LWIR pupil imaging with a sensitive, low-flux focal plane array, and the behavior of this type of system for higher-flux operations as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications, from generating scene motion in the laboratory for quantifying the performance of "real-time, scene-based non-uniformity correction" approaches, to enabling subtraction of bright backgrounds by alternating the viewing aspect between a point source and adjacent, source-free backgrounds.
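The "standard approach" to non-uniformity correction mentioned here is typically a two-point gain/offset calibration derived from two uniform-flux reference frames; the abstract does not specify the authors' exact method, so the following is a generic sketch:

```python
import numpy as np

def two_point_nuc(flat_lo, flat_hi):
    """Per-pixel gain and offset from two uniform-flux reference frames
    (e.g. views of blackbodies at low and high flux). A pixel obeying
    raw = a*flux + b is mapped onto the array-average response."""
    gain = (flat_hi.mean() - flat_lo.mean()) / (flat_hi - flat_lo)
    offset = flat_lo.mean() - gain * flat_lo
    return gain, offset

def correct(frame, gain, offset):
    """Apply the per-pixel linear correction to a raw frame."""
    return gain * frame + offset
```

After correction, a frame taken at either reference flux becomes uniform (equal to the array mean at that flux), which is the property such schemes are calibrated to enforce.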

  15. Study of LANDSAT-D thematic mapper performance as applied to hydrocarbon exploration

    NASA Technical Reports Server (NTRS)

    Everett, J. R. (Principal Investigator)

    1983-01-01

    Two fully processed test tapes were enhanced and evaluated at scales up to 1:10,000, using both hardcopy output and interactive screen display. At large scale, the Detroit, Michigan scene shows evidence of an along-line data slip every sixteenth line in TM channel 2. Very large scale products generated in false color using channels 1, 3, and 4 should be very acceptable for interpretation at scales up to 1:50,000 and useful for change mapping probably up to a scale of 1:24,000. Striping visible in water bodies in both natural and color products indicates that the detector calibration is probably performing below preflight specification. For a set of 512 x 512 windows within the NE Arkansas scene, the variance-covariance matrices were computed and principal component analyses performed. Initial analysis suggests that the shortwave infrared TM 5 and 6 channels are a highly significant data source. The thermal channel (TM 7) shows negative correlation with TM 1 and 4.

  16. Multi-wavelength speckle reduction for laser pico-projectors using diffractive optics

    NASA Astrophysics Data System (ADS)

    Thomas, Weston H.

    Personal electronic devices, such as cell phones and tablets, continue to decrease in size while the number of features and add-ons keeps increasing. One particular feature of great interest is an integrated projector system. Laser pico-projectors have been considered, but the technology has not been developed enough to warrant integration. With new advancements in diode technology and MEMS devices, laser-based projection is currently being advanced for pico-projectors. A primary problem encountered when using a pico-projector is coherent interference known as speckle. Laser speckle can lead to eye irritation and headaches after prolonged viewing. Diffractive optical elements known as diffusers have been examined as a means to lower speckle contrast. Diffusers are often rotated to achieve temporal averaging of the spatial phase pattern provided by the diffuser surface. While diffusers are unable to completely eliminate speckle, they can be utilized to decrease the resultant contrast to provide a more visually acceptable image. This dissertation measures the reduction in speckle contrast achievable through the use of diffractive diffusers. A theoretical Fourier optics model is used to predict the diffuser's stationary and in-motion performance in terms of the resultant contrast level. Contrast measurements of two diffractive diffusers are calculated theoretically and compared with experimental results. In addition, a novel binary diffuser design based on Hadamard matrices is presented. Using two static in-line Hadamard diffusers eliminates the need for rotation or vibration of the diffuser for temporal averaging. Two Hadamard diffusers were fabricated and contrast values were subsequently measured, showing good agreement with theory and simulated values. Monochromatic speckle contrast values of 0.40 were achieved using the Hadamard diffusers.
Finally, color laser projection devices require the use of red, green, and blue laser sources; therefore, a monochromatic diffractive diffuser may not be optimal for color speckle contrast reduction. A simulation of the Hadamard diffusers is conducted to determine the optimum spacing between the two diffusers for polychromatic speckle reduction. Experimentally measured results are presented using the optimal spacing of the Hadamard diffusers for RGB color speckle reduction, showing a 60% reduction in contrast.
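The contrast figures quoted in this abstract (0.40 monochromatic, 60% polychromatic reduction) follow the standard definition of speckle contrast, C = sigma / mean of the detected intensity. A one-function sketch:

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std / mean of the intensity pattern.
    C = 1 for fully developed speckle; C = 0 for a perfectly uniform
    field; lower C means a less objectionable image."""
    i = np.asarray(intensity, dtype=float)
    return float(i.std() / i.mean())
```

In practice C is computed over a camera image of a nominally uniform projected patch, so that all measured variation is attributable to speckle rather than image content.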

  17. 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

    PubMed

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-11-10

    This work introduces a fast, low-cost, robust method based on fringe patterns and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system. We used a cylindrical calibration bar to calculate the transformation matrix between these two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcams and then used to construct two phase maps, which were further converted to two 3D surfaces composed of scattered points. The two 3D point clouds were then merged into one with the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validated our surface extraction approach.
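The nine fringe images with a 2π/9 step can be combined into a wrapped phase map with the standard N-step phase-shifting formula (this sketch covers only that step, not the paper's calibration, merging, or warping stages):

```python
import numpy as np

def phase_map(frames):
    """Wrapped phase from N fringe images I_k = A + B*cos(phi + delta_k)
    with equally spaced shifts delta_k = 2*pi*k/N (N = 9 in the paper).
    Standard N-step result: phi = atan2(-sum I_k sin d_k, sum I_k cos d_k)."""
    n = len(frames)
    deltas = 2.0 * np.pi * np.arange(n) / n
    num = sum(f * np.sin(d) for f, d in zip(frames, deltas))
    den = sum(f * np.cos(d) for f, d in zip(frames, deltas))
    return np.arctan2(-num, den)  # wrapped to (-pi, pi]
```

Each `frames[k]` may be a full camera image (a 2-D array), in which case the result is a per-pixel wrapped phase map ready for unwrapping and conversion to depth.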

  18. Speckle perception and disturbance limit in laser based projectors

    NASA Astrophysics Data System (ADS)

    Verschaffelt, Guy; Roelandt, Stijn; Meuret, Youri; Van den Broeck, Wendy; Kilpi, Katriina; Lievens, Bram; Jacobs, An; Janssens, Peter; Thienpont, Hugo

    2016-04-01

    We investigate the level of speckle that can be tolerated in a laser cinema projector. For this purpose, we equipped a movie theatre room with a prototype laser projector. A group of 186 participants was gathered to evaluate the speckle perception of several short movie trailers in a subjective 'Quality of Experience' experiment. This study is important as the introduction of lasers in projection systems has been hampered by the presence of speckle in projected images. We identify a speckle disturbance threshold by statistically analyzing the observers' responses for different values of the amount of speckle, which was monitored using a well-defined speckle measurement method. The analysis shows that the speckle perception of a human observer is not only dependent on the objectively measured amount of speckle, but it is also strongly influenced by the image content. As is also discussed in [Verschaffelt et al., Scientific Reports 5, art. nr. 14105, 2015] we find that, for moving images, the speckle becomes disturbing if the speckle contrast becomes larger than 6.9% for the red, 6.0% for the green, and 4.8% for the blue primary colors of the projector, whereas for still images the speckle detection threshold is about 3%. As we could not independently tune the speckle contrast of each of the primary colors, this speckle disturbance limit seems to be determined by the 6.9% speckle contrast of the red color as this primary color contains the largest amount of speckle. The speckle disturbance limit for movies thus turns out to be substantially larger than that for still images, and hence is easier to attain.

  19. Projector-based augmented reality for intuitive intraoperative guidance in image-guided 3D interstitial brachytherapy.

    PubMed

    Krempien, Robert; Hoppe, Harald; Kahrs, Lüder; Daeuber, Sascha; Schorr, Oliver; Eggers, Georg; Bischof, Marc; Munter, Marc W; Debus, Juergen; Harms, Wolfgang

    2008-03-01

    The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow an intuitive real-time intraoperative orientation. The developed system consists of a common video projector, two high-resolution charge coupled device cameras, and an off-the-shelf notebook. The projector was used as a scanning device by projecting coded-light patterns to register the patient and superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the nonfixed patient were detected by means of stereoscopically tracking passive markers attached to the patient. In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined overall accuracy with 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position and therefore eliminated the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm by moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. Mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm). The developed low-cost augmented-reality system proved to be accurate and feasible in interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation.

  20. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we propose a video enhancement algorithm to improve the output of an infrared camera. Video obtained by an infrared camera is sometimes very dark when there is no clear target; in this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be applied. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is histogram-equalized according to the amount of information it contains; we also use a method to resolve cases where the final cluster centers fall too close to each other. For the subsequent frame images, the initial cluster centers are taken from the final cluster centers of the previous frame, and histogram equalization of each sub-image is carried out after segmentation based on K-means clustering. The histogram equalization stretches the gray values of the image over the whole gray-level range, with the gray range of each sub-image determined by its share of the frame's pixels. Experimental results show that this algorithm can improve the contrast of infrared video of dim scenes where a night target is not obvious, and adaptively reduces, within a certain range, the negative effect of overexposed pixels.
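A minimal sketch of the scheme described above: 1-D K-means on gray levels with a warm start from the previous frame's centers, then per-cluster histogram equalization over a gray range proportional to each cluster's pixel share. Function names and details are illustrative, not the authors' code:

```python
import numpy as np

def kmeans_1d(gray, k, iters=20, init=None):
    """1-D K-means on pixel gray levels. Passing the previous frame's
    final centers as `init` gives the temporal warm start described
    in the abstract."""
    centers = (np.linspace(gray.min(), gray.max(), k)
               if init is None else np.array(init, dtype=float))
    for _ in range(iters):
        labels = np.abs(gray[..., None] - centers).argmin(-1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = gray[labels == j].mean()
    return centers, labels

def equalize_clusters(gray, labels, k, levels=256):
    """Histogram-equalize each cluster into a contiguous gray range
    whose width is proportional to the cluster's pixel count."""
    out = np.zeros_like(gray, dtype=float)
    start = 0.0
    for j in range(k):
        mask = labels == j
        if not mask.any():
            continue
        span = levels * mask.mean()          # output range ∝ pixel share
        vals = gray[mask].astype(int)
        hist = np.bincount(vals, minlength=vals.max() + 1)
        cdf = np.cumsum(hist) / vals.size    # empirical CDF per cluster
        out[mask] = start + span * cdf[vals]
        start += span
    return out
```

Clusters are equalized in order of their label index; in a full implementation the clusters would be processed in order of their center gray level so that dark regions stay darker than bright ones.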
